Re: Reverse Proxy with 500k connections

Maxim Konovalov
March 08, 2017 06:18AM
On 3/8/17 3:57 AM, Tolga Ceylan wrote:
> Of course, with split_clients you are at the mercy of the hashing and
> have to hope that the distribution spreads the work evenly across the
> incoming client address space and the duration of these connections, so
> you might run into the limits despite having enough port capacity. More
> importantly, in case of failures your clients will see errors, since
> nginx will not retry (and even if it did, the hashing would land on the
> same exhausted port/IP set).
>
IP_BIND_ADDRESS_NO_PORT in recent Linux kernels did the trick for
nginx. This is basically why we added support for it not so long ago.

You can find patches that work around the problem without this feature, though.
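
For reference, a minimal sketch of the proxy_bind side of this (the addresses
and ports are illustrative, not from the original setup). On nginx >= 1.11.4
running on Linux >= 4.2, nginx sets IP_BIND_ADDRESS_NO_PORT when proxy_bind
specifies an address without a port, so the source port is chosen at
connect() time and only has to be unique per 4-tuple:

server {
    listen 8080;

    location / {
        # Bind outgoing upstream connections to this local source IP.
        # With IP_BIND_ADDRESS_NO_PORT the kernel defers port selection
        # to connect(), allowing the same source port to be reused for
        # different upstream destinations.
        proxy_bind 192.168.1.10;
        proxy_pass http://10.0.0.21:8080;
    }
}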

> The upstream {} approach with multiple backends is a bit more robust: if
> the ports are ever exhausted, nginx can try the next upstream server. You
> can control this further by using least_conn backend selection.
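
For illustration, a minimal sketch of that retry behavior (addresses and
ports are made up): proxy_next_upstream controls which failures make nginx
move on to the next server in the group.

upstream backend {
    least_conn;
    server 192.168.1.21:8081;
    server 192.168.1.21:8082;
}

server {
    listen 8080;

    location / {
        proxy_pass http://backend;
        # Try the next upstream server on connection errors and timeouts,
        # e.g. when the source ports towards one backend are exhausted.
        proxy_next_upstream error timeout;
    }
}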
>
>
> On Tue, Mar 7, 2017 at 3:39 PM, Andrei Belov <defan@nginx.com> wrote:
>> Yes, the split_clients solution fits perfectly in the described use case.
>>
>> Also, nginx >= 1.11.4 has support for IP_BIND_ADDRESS_NO_PORT socket
>> option ([1], [2]) on supported systems (Linux kernel >= 4.2, glibc >= 2.23) which
>> may be helpful as well.
>>
>> Quote from [1]:
>>
>> [..]
>> Add IP_BIND_ADDRESS_NO_PORT to overcome bind(0) limitations: When an
>> application needs to force a source IP on an active TCP socket it has to use
>> bind(IP, port=x). As most applications do not want to deal with already used
>> ports, x is often set to 0, meaning the kernel is in charge to find an
>> available port. But kernel does not know yet if this socket is going to be a
>> listener or be connected. This patch adds a new SOL_IP socket option, asking
>> kernel to ignore the 0 port provided by application in bind(IP, port=0) and
>> only remember the given IP address. The port will be automatically chosen at
>> connect() time, in a way that allows sharing a source port as long as the
>> 4-tuples are unique.
>> [..]
>>
>>
>> [1] https://kernelnewbies.org/Linux_4.2#head-8ccffc90738ffcb0c20caa96bae6799694b8ba3a
>> [2] https://git.kernel.org/torvalds/c/90c337da1524863838658078ec34241f45d8394d
>>
>>
>>> On 08 Mar 2017, at 01:10, Tolga Ceylan <tolga.ceylan@gmail.com> wrote:
>>>
>>> How about using
>>>
>>> split_clients "${remote_addr}AAA" $proxy_ip {
>>>     10%  192.168.1.10;
>>>     10%  192.168.1.11;
>>>     ...
>>>     *    192.168.1.19;
>>> }
>>>
>>> proxy_bind $proxy_ip;
>>>
>>> where $proxy_ip is populated by the split_clients module to spread the
>>> traffic across 10 internal IPs.
>>>
>>> Or, instead, add 10 new listener ports (or IPs) to your backend server
>>> and put them in an upstream {} set of 10 backends, perhaps with
>>> least-connected load balancing, e.g.:
>>>
>>> upstream backend {
>>>     least_conn;
>>>     server 192.168.1.21:443;
>>>     server 192.168.1.21:444;
>>>     server 192.168.1.21:445;
>>>     server 192.168.1.21:446;
>>>     server 192.168.1.21:447;
>>>     server 192.168.1.21:448;
>>>     server 192.168.1.21:449;
>>>     server 192.168.1.21:450;
>>>     server 192.168.1.21:451;
>>>     server 192.168.1.21:452;
>>> }
>>>
>>>
>>>
>>>
>>> On Tue, Mar 7, 2017 at 1:21 PM, Rainer Duffner <rainer@ultra-secure.de> wrote:
>>>>
>>>> On 07.03.2017 at 22:12, Nelson Marcos <nelsonmarcos@gmail.com> wrote:
>>>>
>>>> Do you really need to use different source IPs, or is that just a
>>>> solution you picked?
>>>>
>>>> Also, is it an option to set the keepalive option in your upstream
>>>> configuration section?
>>>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
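
For illustration, a minimal upstream keepalive sketch (server names and
numbers are made up). Upstream keepalive needs HTTP/1.1 towards the backend
and an empty Connection header:

upstream backend {
    server 10.0.0.21:8080;
    # Keep up to 32 idle connections per worker process to this group.
    keepalive 32;
}

server {
    listen 8080;

    location / {
        proxy_pass http://backend;
        # Required for upstream keepalive to take effect.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}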
>>>>
>>>>
>>>>
>>>>
>>>> I’m not sure whether you can proxy WebSocket connections the same way
>>>> as plain HTTP connections.
>>>>
>>>> After all, they are persistent (hence the large number of connections).
>>>>
>>>> Why can’t you (the OP) upgrade to 1.10? I thought it’s the only
>>>> "supported" version anyway?
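
nginx can proxy WebSocket connections; the HTTP/1.1 Upgrade handshake just
has to be passed through explicitly. A minimal sketch (upstream name and
timeout are illustrative):

location /ws/ {
    proxy_pass http://backend;
    # Pass the WebSocket upgrade handshake through to the backend.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # WebSocket connections are long-lived; raise the idle read timeout.
    proxy_read_timeout 1h;
}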


--
Maxim Konovalov