
Re: Balancing NGINX reverse proxy

March 05, 2017 06:02PM
Hi

Firstly, I am fairly new to nginx.


From what I understand, you have a fairly standard setup:

2 nodes (VMs) with haproxy, allowing nginx to run active/passive.

You have SSL requests: once nginx terminates the SSL, it injects a
security header/token and then, I presume, passes the request on to a back
end; I also presume the nginx-to-application-server leg is non-SSL.
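Just to illustrate what I mean, a terminate-and-inject setup like that
usually looks something like this (only a sketch; the header name,
certificate paths and backend address here are made up):

```nginx
# Terminate SSL at the proxy, inject a header, forward plain HTTP upstream.
# Paths, header name and upstream address are illustrative only.
upstream app_backend {
    server 10.0.0.10:8080;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Inject the security header/token before passing upstream.
        proxy_set_header X-Auth-Token "some-token";
        # Non-SSL leg to the application server.
        proxy_pass http://app_backend;
    }
}
```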

You are having a performance issue with the SSL + header-injection part,
which seems to be limiting you to approx 60 req/sec before you hit 100%
CPU. That seems very low to me: looking at my prod setup, which is similar
to yours, I am seeing 600 connections and req/sec ranging from 8-400, all
whilst the CPU stays very low.

We try to use long-lived TCP/SSL sessions, but we also use a thick client,
so we have more control.
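For what it's worth, keeping sessions long-lived in nginx is roughly this
(a sketch with illustrative values; tune to your own traffic):

```nginx
# Reuse upstream TCP connections and let clients resume TLS sessions,
# so the expensive handshake happens less often. Values are illustrative.
upstream app_backend {
    server 10.0.0.10:8080;
    keepalive 32;                       # pool of idle upstream connections
}

server {
    listen 443 ssl;
    keepalive_timeout 75s;              # long-lived client connections
    ssl_session_cache shared:SSL:10m;   # cheap TLS session resumption
    ssl_session_timeout 1h;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # required for upstream keepalive
        proxy_pass http://app_backend;
    }
}
```

The upstream keepalive pool plus TLS session resumption is what saves the
handshake cost on most requests.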

I'm not sure about KEMP LoadMaster.

What I described to you were our potential plans for when the load gets
too much for the active/passive setup.

It would allow you to take your 60 sessions and distribute them between 2
or up to 16 nodes (I believe that is the max for Pacemaker) in an
active/active setup.

The 2 node setup would be the same as yours


router -> VLAN with the 2 nodes: node A would only process node A's data
and node B would only process node B's data. In theory this has the
potential to double your req/sec.
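On the SSL passthrough question from your earlier mail: nginx can do that
at layer 4 with the stream module, so the frontend balancer never touches
the SSL itself (a sketch; the node addresses are placeholders):

```nginx
# Layer-4 (TCP) load balancing: SSL is passed through untouched and
# terminated on the backend revproxy nodes. Addresses are illustrative.
stream {
    upstream revproxy_nodes {
        server 10.0.0.11:443;   # node A
        server 10.0.0.12:443;   # node B
    }

    server {
        listen 443;
        proxy_pass revproxy_nodes;
    }
}
```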



Alex


On 3 March 2017 at 19:33, polder_trash <nginx-forum@forum.nginx.org> wrote:

> Alexsamad,
> I might not have been clear, allow me to try again:
>
> * currently 2 NGINX revproxy nodes, 1 active the other on standby in case
> node 1 fails.
> * Since I am injecting an authentication header into the request, the HTTPS
> request has to be offloaded at the node and introduces additional load
> compared to injecting into non-encrypted requests.
> * Current peak load ~60 concurrent requests, ~100% load on CPU. Concurrent
> requests expected to more than double, so revproxy will be bottleneck.
>
> The NGINX revproxies run as a VM and I can ramp up the machine specs a
> little bit, but I do not expect this to completely solve the issue here.
> Therefore I am looking for some method of spreading the requests over
> multiple backend revproxies, without the load balancer frontend having to
> deal with SSL offloading.
>
> From the KEMP LoadMaster documentation I found that this technique is
> called
> SSL Passthrough. I am currently looking if that is also supported by NGINX.
>
> What do you think? Will this solve my issue? Am I on the wrong track?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272713,272729#msg-272729
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
Subject / Author / Posted:

Balancing NGINX reverse proxy - polder_trash, March 02, 2017 09:40AM
Re: Balancing NGINX reverse proxy - alexsamad, March 02, 2017 05:08PM
Re: Balancing NGINX reverse proxy - polder_trash, March 03, 2017 03:33AM
Re: Balancing NGINX reverse proxy - alexsamad, March 05, 2017 06:02PM
Re: Balancing NGINX reverse proxy - pbooth, March 05, 2017 08:16PM
Re: Balancing NGINX reverse proxy - GreenGecko, March 05, 2017 10:12PM
Re: Balancing NGINX reverse proxy - polder_trash, March 20, 2017 08:11AM


