Reply: problems when use fastcgi_pass to deliver request to backend

林谡
June 02, 2015 12:06AM
Thanks for your reply.
Let me make the question clearer.
Assume the backend receives 10k requests per second and can respond within 10 ms; then only 100 connections are needed.
If the backend for some reason takes over 1 s to respond, 10k connections have to be created within that second. I don't think any backend can handle that, even nginx.
We can certainly apply flow control at the backend, but then, sadly, most of these requests have to be dropped.
If multiplexing were provided, things could go like this: even if the backend slows down to over 1 s per response, requests keep flowing into the backend over one or a few connections, and the backend can queue them (typically around 50k) as far as its own memory health permits, for later handling. As we can see, a 5-second slowdown can be overcome smoothly with multiplexing, whereas without it most clients receive an error response.
In the real production world, a sudden surge of client requests or a sudden backend slowdown happens often. Multiplexing is very beneficial for the backend to keep working smoothly and deliver more throughput. I really think nginx needs FastCGI multiplexing.
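The connection arithmetic above follows Little's law: concurrent connections needed ≈ arrival rate × response time. A quick illustrative sketch of the numbers in this message (the helper function is mine, not from the thread):

```python
def connections_needed(requests_per_sec: float, response_time_sec: float) -> float:
    """Little's law: concurrent connections ~= arrival rate * latency."""
    return requests_per_sec * response_time_sec

# Healthy backend: 10k req/s answered in 10 ms.
print(connections_needed(10_000, 0.010))  # 100.0

# Slow backend: the same load at 1 s per response.
print(connections_needed(10_000, 1.0))    # 10000.0
```

The same arrival rate needs 100x the connections when latency grows 100x, which is exactly the spike described above.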

-----Original Message-----
From: Alexey Ivanov [mailto:savetherbtz@gmail.com]
Sent: June 1, 2015 8:46
To: nginx-devel@nginx.org
Cc: Sergey Brester; 武志国; 高磊; 鞠进步; RCS-Tech; 段牧
Subject: Re: problems when use fastcgi_pass to deliver request to backend

If your backend can’t handle 10k connections then you should limit them there. Forwarding requests to a backend that cannot handle them is generally a bad idea[1], and it is usually better to fail the request or make it wait for an available backend on the proxy itself.

Nginx can retry requests if it gets a timeout or an RST (connection refused) from a backend[2]. That, combined with tcp_abort_on_overflow[3], the listen(2) backlog argument, and perhaps some application-level limits, should be enough to fix your problem.
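A sketch of that combination on the nginx side (the directive names are real; the server addresses, timeouts, and limits are placeholders to adapt):

```nginx
upstream backend {
    server 10.0.0.1:9000;
    server 10.0.0.2:9000;
    keepalive 100;                  # cap idle upstream connections

    # Kernel side (sysctl, not nginx): net.ipv4.tcp_abort_on_overflow=1
    # makes the backend send RST instead of silently dropping SYNs when
    # its listen(2) backlog overflows, so nginx can fail over quickly.
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # Try the next upstream server on a connection error or timeout.
        proxy_next_upstream error timeout;
        proxy_connect_timeout 1s;
        proxy_read_timeout 5s;
    }
}
```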

[1] http://genius.com/James-somers-herokus-ugly-secret-annotated
[2] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
[3] https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
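For context on the quoted thread below: FastCGI frames every record with an 8-byte header whose 16-bit requestId field is what makes multiplexing possible, and nginx hardcodes that field to 1. A minimal sketch of the framing per the FastCGI spec (illustrative helper, not nginx code):

```python
import struct

FCGI_VERSION_1 = 1
FCGI_STDOUT = 6  # record type carrying response body data

def fcgi_record(req_id: int, rec_type: int, content: bytes) -> bytes:
    """Frame a FastCGI record: version, type, requestId (big-endian u16),
    contentLength, paddingLength, reserved, followed by the content."""
    header = struct.pack(">BBHHBB",
                         FCGI_VERSION_1, rec_type, req_id,
                         len(content), 0, 0)
    return header + content

# With multiplexing, records for different requests may interleave on one
# connection, distinguished only by requestId:
rec_a = fcgi_record(1, FCGI_STDOUT, b"response for request 1")
rec_b = fcgi_record(2, FCGI_STDOUT, b"response for request 2")
```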

> On May 29, 2015, at 3:48 AM, 林谡 <linsu@feinno.com> wrote:
>
> Thanks for the reply,
> I have read all the discussions you suggested.
> The main argument is that multiplexing seems useless when the “keepalive” feature is used and the backend is fast enough.
> That’s true! But the real world is more sophisticated.
>
> Our system is very big: over 5k machines are providing services. In our system, nginx proxy_passes HTTP requests to HTTP applications using “keepalive”. It works well: over 10k requests are processed per second, and the TCP connections between nginx and the backend stay below 100. But sometimes response times climb to 1-10 s or more for a while, perhaps because a DB server fails over or the network degrades. Over 10k TCP connections then need to be set up, as shown above.
> Our backend is written in Java; connections cannot all be set up suddenly, the memory needed is large, and GC became the bottleneck. GC kept working even after the DB server or network returned to normal, and the backend no longer worked in an orderly way. I have observed this several times.
>
> With multiplexing, no extra connections are needed and far less memory is required under such circumstances. We use multiplexing everywhere in our Java applications, which supports my point.
>
> Nginx is certainly needed for client HTTP access, so I studied FastCGI to solve the above problem, but nginx does not support FastCGI multiplexing, which triggers the same problem.
>
> In conclusion, a big production system really needs nginx to pass requests to the backend with multiplexing. Can you get the nginx development team to work on it?
>
>
>
> From: Sergey Brester [mailto:serg.brester@sebres.de]
> Sent: May 29, 2015 16:40
> To: nginx-devel@nginx.org
> Cc: 林谡
> Subject: Re: Reply: problems when use fastcgi_pass to deliver request to backend
>
> Hi,
>
> It's called FastCGI multiplexing, and nginx currently does not implement it.
>
> There have already been several discussions about that; please read them.
>
> In short, very fast FastCGI processing can be implemented without multiplexing (it should also be event-driven).
>
> Regards,
> sebres.
>
>
>
> On 29.05.2015 09:58, 林谡 wrote:
>
>
>     /* we support the single request per connection */
>
>     case ngx_http_fastcgi_st_request_id_hi:
>         if (ch != 0) {
>             ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
>                           "upstream sent unexpected FastCGI "
>                           "request id high byte: %d", ch);
>             return NGX_ERROR;
>         }
>         state = ngx_http_fastcgi_st_request_id_lo;
>         break;
>
>     case ngx_http_fastcgi_st_request_id_lo:
>         if (ch != 1) {
>             ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
>                           "upstream sent unexpected FastCGI "
>                           "request id low byte: %d", ch);
>             return NGX_ERROR;
>         }
>         state = ngx_http_fastcgi_st_content_length_hi;
>         break;
> By reading the source code, I saw the reason. So, can nginx support multiple requests per connection in the future?
>
> From: 林谡
> Sent: May 29, 2015 11:37
> To: 'nginx-devel@nginx.org'
> Subject: problems when use fastcgi_pass to deliver request to backend
>
> Hi,
> I wrote a FastCGI server and use nginx to pass requests to it. It has worked until now.
> But I found a problem: nginx always sets requestId = 1 when sending FastCGI records.
> This is a little disappointing, because according to the FastCGI protocol, a web server may send FastCGI records belonging to different requests simultaneously, with requestIds that are distinct and unique. I really need this feature, because it lets requests be handled simultaneously over just one connection.
> Can I find a way out?
>
>

_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel