Rob, thank you for the idea! We have used it before on other problems, though I had no idea there were libraries designed to address this (I rolled my own solution). Thanks for the link, the projects there look very promising.
However, the way I see it, this approach is only useful if you want to distribute the load across other machines or across time (so the heavy work is done when the load is lower). In our case the application already runs on several machines and we can add more if necessary. Also, the work MUST be done at roughly the time the request is issued, so it cannot wait ten minutes for the load to drop. I think in that case the only option is to do the computation when we receive the request, with the possible optimization of returning the response before all the work is finished (something like the sketch below). Or am I mistaken?
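To make that optimization concrete, here is a minimal sketch of the usual "respond first, keep working" pattern I have in mind: set Content-Length and Connection: close, flush the buffered body, then carry on. It assumes output buffering works as expected and that nothing upstream (Nginx/FastCGI buffering, gzip) holds the response back, which is exactly the part I am still debugging. doTheExpensiveWork() is just a placeholder for our real processing.

```php
<?php
// Sketch: send the full response early, then continue the heavy work.

ignore_user_abort(true);   // keep running even if the client disconnects
set_time_limit(0);         // the background part may take a while

ob_start();
echo json_encode(['status' => 'accepted']);    // whatever the client actually needs

header('Connection: close');
header('Content-Length: ' . ob_get_length());  // tell the browser where the body ends

ob_end_flush();  // flush PHP's output buffer
flush();         // push the response out towards the client

// The browser should now have its complete response; the rest runs "in the background".
doTheExpensiveWork();  // hypothetical placeholder for the real job
```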
When the requests start getting slow (because they are waiting for other requests to finish), we can simply add more machines (or raise the max. number of PHP processes first ;) and we're done.
So, I'm still searching for the culprit that doesn't close the connection to the browser when Content-Length is reached... :) Any ideas?
Or is this a FastCGI issue? Is there any other way to use PHP with NginX?
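One thing I'm considering, in case it is a FastCGI buffering issue: if PHP runs under PHP-FPM (the usual setup behind Nginx), fastcgi_finish_request() hands the complete response to the web server and closes the client connection while the script keeps running, which might sidestep the Content-Length trick entirely. A minimal sketch, assuming PHP-FPM is the SAPI; doTheExpensiveWork() is again a hypothetical placeholder:

```php
<?php
// Sketch, assuming PHP-FPM: flush the response to Nginx and keep working.

echo json_encode(['status' => 'accepted']);

if (function_exists('fastcgi_finish_request')) {
    // The browser receives the response and the connection is closed here.
    fastcgi_finish_request();
}

// Anything after this point no longer delays the client.
doTheExpensiveWork();  // hypothetical placeholder for the real processing
```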
Thanks again!