> Yes, but it is useless to buffer a long-polling connection in a file.

Buffering some data on the Web server is fine as long as the client receives whatever the server has sent, or the client gets a closed connection. If sending is not possible once the buffers are full, dropping the client connection and aborting the request is not a problem. Problems like that should be dealt with at a higher level of abstraction.
by momyc - Nginx Mailing List - English
> it is useless to buffer a long polling connection in a file.

For Nginx there is no difference between a long-polling request and any other request. It wouldn't even know. All it should care about is how much to buffer and how long to keep those buffers before dropping them and aborting the request. I do not see any technical problem here.
"abort backend" meant "abort request"by momyc - Nginx Mailing List - English
What do you mean by "stop reading"? Oh, you just stop checking whether anything is ready for reading. I see. Well, that is crude flow control, I'd say. The proxied server could unexpectedly drop the connection because it would think Nginx is dead. There is a nice feature, I don't remember exactly what it's called, where some content can be buffered on Nginx (in proxy mode) and there is a strict limit o
If it's time to close the backend connection in a non-multiplexed configuration, just send FCGI_ABORT_REQUEST for that particular request, and start dropping records received from the backend for that request. Please shoot me any other questions about problems with implementing that feature.
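To make the step above concrete, here is a minimal sketch of building the abort record. The header layout and the type value FCGI_ABORT_REQUEST = 2 come from the FastCGI 1.0 spec; the helper name is mine:

```python
import struct

FCGI_VERSION_1 = 1
FCGI_ABORT_REQUEST = 2  # record type, per the FastCGI 1.0 spec

def make_abort_record(request_id: int) -> bytes:
    """Build an FCGI_ABORT_REQUEST record: 8-byte header, empty body.

    Header fields (big-endian): version, type, requestId,
    contentLength, paddingLength, reserved.
    """
    return struct.pack(">BBHHBB", FCGI_VERSION_1, FCGI_ABORT_REQUEST,
                       request_id, 0, 0, 0)

rec = make_abort_record(7)
assert len(rec) == 8
assert rec[1] == FCGI_ABORT_REQUEST
```

Since the record has an empty body, aborting a request costs the Web server exactly eight bytes on the wire.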
What does the proxy module do in that case? You said earlier that HTTP lacks flow control too. So what is the difference?
Well, there is supposed to be one FCGI_REQUEST_COMPLETE sent in reply to FCGI_ABORT_REQUEST, but it can be ignored in this particular case. I can see that in some cases Nginx drops connections before receiving the final FCGI_REQUEST_COMPLETE at the end of normal request processing. And that has something to do with running out of file descriptors.
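For reference, FCGI_REQUEST_COMPLETE is the protocolStatus value (0) carried in the backend's FCGI_END_REQUEST record (type 3, 8-byte body: appStatus, protocolStatus, 3 reserved bytes). A small sketch of parsing it, with field layout taken from the spec and the function name being mine:

```python
import struct

FCGI_END_REQUEST = 3       # record type
FCGI_REQUEST_COMPLETE = 0  # protocolStatus value inside the body

def parse_end_request(record: bytes):
    """Parse an FCGI_END_REQUEST record; return (requestId, appStatus, protocolStatus)."""
    _ver, rtype, req_id, clen, _plen, _ = struct.unpack(">BBHHBB", record[:8])
    assert rtype == FCGI_END_REQUEST and clen == 8
    app_status, proto_status = struct.unpack(">IB3x", record[8:16])
    return req_id, app_status, proto_status

# build a sample record for request 5 and parse it back
sample = struct.pack(">BBHHBB", 1, FCGI_END_REQUEST, 5, 8, 0, 0) \
       + struct.pack(">IB3x", 0, FCGI_REQUEST_COMPLETE)
assert parse_end_request(sample) == (5, 0, FCGI_REQUEST_COMPLETE)
```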
And, possibly 3): if there are no other requests on that connection, just close it like it never existed.
It's my next task to implement the connection multiplexing feature in Nginx's FastCGI module. I haven't looked at the recent sources yet and I am not familiar with Nginx's architecture, so if you could give me some pointers on where to start, that would be great. Sure thing, anything I produce will be available for merging into the main Nginx sources.
Actually, 2) is natural, since there is supposed to be a de-multiplexer on the Nginx side and it should know where to dispatch each record received from the backend.
OK, it probably closes the connection to the backend server. Well, in the case of multiplexed FastCGI, Nginx should do two things: 1) send FCGI_ABORT_REQUEST to the backend for the given request; 2) start dropping records for that request if it still receives them from the backend.
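The two steps above can be sketched as a toy de-multiplexer loop. This is my own illustration, not Nginx code: records are split on the 8-byte FastCGI header, records whose requestId is in an `aborted` set are dropped, and the eventual FCGI_END_REQUEST from the backend retires the aborted id:

```python
import struct

FCGI_END_REQUEST = 3

def pack_record(rtype: int, req_id: int, body: bytes = b"") -> bytes:
    """8-byte FastCGI header followed by the body (no padding)."""
    return struct.pack(">BBHHBB", 1, rtype, req_id, len(body), 0, 0) + body

def demux(stream: bytes, aborted: set):
    """Split a byte stream into records, dropping those for aborted request ids."""
    delivered, off = [], 0
    while off + 8 <= len(stream):
        _ver, rtype, req_id, clen, plen, _ = struct.unpack(
            ">BBHHBB", stream[off:off + 8])
        body = stream[off + 8:off + 8 + clen]
        off += 8 + clen + plen
        if req_id in aborted:
            if rtype == FCGI_END_REQUEST:
                aborted.discard(req_id)  # backend acknowledged the abort
            continue                     # silently drop the record
        delivered.append((req_id, rtype, body))
    return delivered
```

For example, with request 2 aborted, a stream interleaving requests 1 and 2 delivers only request 1's records while request 2's FCGI_END_REQUEST clears it from the aborted set.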
> The main issue with FastCGI connection multiplexing is lack of flow control. Suppose a client stalls but a FastCGI backend continues to send data to it. At some point nginx should tell the backend to stop sending to the client, but the only way to do it is just to close all multiplexed connections

The FastCGI spec has some fuzzy points. This one is easy. What does Nginx do in case a client sta
You clearly... err.

> 32K simultaneous active connections to the same service on a single machine? I suspect the bottleneck is somewhere else...

I don't know what exactly "service" means in the context of our conversation, but if it means server, then I did not say that everything should be handled by a single FastCGI server. I said a single Nginx server can easily dispatch thousands o
The funny thing is that the resistance to implementing that feature is so dense that it feels as if it were about breaking compatibility. It is all about a more complete implementation of the protocol specification, without any penalties besides making some internal changes.
Many projects would kill for a 100% performance or scalability gain.
Another scenario. Consider an application that takes a few seconds to process a single request. In non-multiplexing mode we are still limited to roughly 32K simultaneous requests, even though we could install enough backend servers to handle 64K such requests per second. Now imagine we can use FastCGI connection multiplexing. It could be just a single connection per backend. And, again, we are able to se
Consider a Comet application (aka long-polling Ajax requests). There is no CPU load, since most of the time the application just waits for some event to happen and nothing is being transmitted. Something like a chat or stock-monitoring Web application used by thousands of users simultaneously. Every request (one socket/one port) would generate one connection to the backend (another socket/port). So each req
You clearly do not understand what the biggest FastCGI connection multiplexing advantage is. It makes it possible to use far fewer TCP connections (read: "fewer ports"). Each TCP connection requires a separate port, and a "local" TCP connection requires two ports. Add the ports used by browser-to-Web-server connections and you'll see the whole picture. Even if Unix sockets are used betwee
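The port arithmetic is easy to make concrete. A back-of-envelope sketch, assuming the default Linux ephemeral port range (net.ipv4.ip_local_port_range = 32768 60999) and a made-up load of 20K concurrent long-polling requests:

```python
# Usable client-side ephemeral ports per local address, default Linux range.
ephemeral_ports = 60999 - 32768 + 1  # 28232

concurrent_requests = 20_000

# One backend connection per request: each nginx->backend TCP connection
# pins one ephemeral port on the nginx side (and a "local" connection
# consumes a port at BOTH endpoints of the same machine).
ports_one_per_request = concurrent_requests

# Multiplexed: a small fixed pool of persistent connections per backend
# carries all requests, distinguished by the requestId in each record.
ports_multiplexed = 4  # illustrative pool size

print(ephemeral_ports, ports_one_per_request, ports_multiplexed)
```

So at 20K concurrent requests the one-connection-per-request model already consumes the bulk of the ephemeral range, on top of the ports held by client-facing connections, while the multiplexed model stays constant regardless of request count.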