Hello all. I have done some work setting up NGINX as a proxy to a FastCGI PHP (PHP-FPM) backend and was hoping to get some clarification on my understanding of how concurrent connections are handled.
My environment is a 2-CPU CentOS 5.4 Linux machine with nginx/0.8.35.
I have worker_processes set to 2.
My question is: when a request is passed to a backend FastCGI process, what exactly blocks while waiting for the response? Say I have 20 backend FastCGI processes and they are all executing some big, long-running piece of work. Will the two nginx worker processes keep all 20 backends utilised at 100%?
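For context, here is a minimal sketch of the kind of setup I mean (the socket path, document root, and listen port are just illustrative examples, not my real config):

```nginx
# Sketch only; paths and ports are illustrative.
worker_processes  2;

events {
    worker_connections  1024;
}

http {
    server {
        listen 80;

        location ~ \.php$ {
            # As I understand it, the nginx worker does not block here:
            # it hands the request to PHP-FPM over the socket and returns
            # to its event loop while the backend does the work.
            fastcgi_pass  unix:/tmp/php-fpm.sock;
            fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
            include       fastcgi_params;
        }
    }
}
```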
Also, what are the blocking consequences of something like this:
upstream python-backend {
    server unix:/tmp/back_end_server.sock;
    server unix:/tmp/back_end_server.sock;
    server unix:/tmp/back_end_server.sock;
    server unix:/tmp/back_end_server.sock;
    server unix:/tmp/back_end_server.sock;
}
i.e., the same Unix socket listed in multiple entries.
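For comparison, my reading of the upstream module docs suggests the idiomatic way to give one socket extra round-robin share would be the weight parameter rather than repeating the entry, so I assume the block above behaves like this:

```nginx
upstream python-backend {
    # A single entry with weight=5 should be equivalent, for
    # round-robin purposes, to listing the same socket five times.
    server unix:/tmp/back_end_server.sock weight=5;
}
```

If that assumption is wrong, I would like to understand what actually differs between the two forms.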