On Jul 20, 2013, at 5:02, momyc wrote:
> You clearly do not understand what the biggest advantage of FastCGI
> connection multiplexing is. It makes it possible to use far fewer TCP
> connections (read: fewer ports). Each TCP connection requires a separate
> port, and a "local" TCP connection requires two ports. Add the ports used
> by browser-to-Web-server connections and you'll see the whole picture. Even
> if Unix sockets are used between the Web server and the FastCGI server,
> connection multiplexing still has an advantage: fewer file descriptors in use.
>
> FastCGI connection multiplexing could be a great tool for beating the C10K
> problem, and long-polling HTTP requests would benefit from connection
> multiplexing even more.
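
For reference, multiplexing in the FastCGI protocol works by tagging every
record with a 16-bit request ID, so records belonging to many requests can be
interleaved on a single backend connection. Below is a rough sketch of the
record header (as defined in the FastCGI specification) and of a
demultiplexing step; deliver_to_client() is a hypothetical callback standing
in for the proxying code, not anything from nginx.

/*
 * Minimal sketch (not nginx code): how FastCGI multiplexing looks on the
 * wire.  Every record carries a 16-bit request ID, so one connection can
 * interleave records belonging to many in-flight requests.
 */
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* FCGI_Header as defined by the FastCGI specification. */
typedef struct {
    uint8_t version;
    uint8_t type;              /* FCGI_STDOUT, FCGI_STDERR, FCGI_END_REQUEST, ... */
    uint8_t request_id_b1;     /* big-endian request ID: which request this    */
    uint8_t request_id_b0;     /* record belongs to on the shared connection   */
    uint8_t content_length_b1;
    uint8_t content_length_b0;
    uint8_t padding_length;
    uint8_t reserved;
} fcgi_header_t;

static int read_full(int fd, void *buf, size_t len)
{
    uint8_t *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0) return -1;
        p += n;
        len -= (size_t) n;
    }
    return 0;
}

/*
 * Read one record from the shared backend connection and hand its payload
 * to whichever client request it belongs to.
 */
int fcgi_demux_one(int backend_fd,
                   void (*deliver_to_client)(uint16_t req_id,
                                             const uint8_t *data, size_t len))
{
    fcgi_header_t h;
    uint8_t payload[65535 + 255];   /* max content + max padding */

    if (read_full(backend_fd, &h, sizeof(h)) < 0) return -1;

    uint16_t req_id  = (uint16_t) ((h.request_id_b1 << 8) | h.request_id_b0);
    size_t   content = (size_t) ((h.content_length_b1 << 8) | h.content_length_b0);

    if (read_full(backend_fd, payload, content + h.padding_length) < 0) return -1;

    deliver_to_client(req_id, payload, content);
    return 0;
}
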
The main issue with FastCGI connection multiplexing is the lack of flow control.
Suppose a client stalls but the FastCGI backend keeps sending data to it.
At some point nginx needs to tell the backend to stop sending to that client,
but the only way to do that is to close the connection, taking down all the
requests multiplexed on it.
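
To make that concrete, here is the sketch above continued with some
hypothetical client-side bookkeeping (again, none of this is nginx code).
With one backend connection per request, simply not reading from that
connection lets TCP backpressure pause the backend for free; multiplexing
takes that option away because the socket is shared.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-client state; stands in for the downstream (browser) side. */
typedef struct {
    size_t buffered;       /* bytes queued for the client but not yet sent */
    size_t buffer_limit;   /* how much we are willing to queue             */
} client_t;

/* Hypothetical lookup from FastCGI request ID to the client it serves. */
static client_t *client_by_request_id(uint16_t req_id);

static void deliver_to_client(uint16_t req_id, const uint8_t *data, size_t len)
{
    client_t *c = client_by_request_id(req_id);

    if (c->buffered + len > c->buffer_limit) {
        /*
         * The client has stalled.  With one backend connection per request
         * the proxy would just stop reading that connection and TCP
         * backpressure would pause the backend.  With multiplexing the
         * choices are:
         *
         *   1. stop reading the shared connection - stalls every other
         *      request multiplexed on it;
         *   2. keep reading and queue this request's data - unbounded
         *      memory growth;
         *   3. close the connection - takes down every request on it.
         *
         * The protocol gives the web server no per-request window to
         * advertise, so there is no way to pause only this request.
         */
    }

    c->buffered += len;    /* queue the data; writing it out is omitted */
    (void) data;
}
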
--
Igor Sysoev
http://nginx.com/services.html