I have been trying to figure out how to get the best concurrent-connection performance out of Nginx and php-fpm.
I was hoping my setup could handle close to 300 concurrent connections, since numtcpsock is set to 360, but I can only get about 150 reliably.
My VPS slice has the following limits:
memory 384 MB, no swap
/proc/user_beancounters -> limits
numtcpsock 360
numothersock 360
cat /proc/sys/net/core/somaxconn 128
I have tried to push the limit from another VPS slice on the same host to see how far I can get, e.g.:
ab -n 1000 -c 100 http://mydomain.com/test.php/
Then I raise the number of concurrent connections to see at what point failures start.
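The sweep I run looks roughly like this (a sketch; "mydomain.com" is a placeholder host, and the ab invocations are echoed rather than executed so the commands are visible):

```shell
#!/bin/sh
# Sweep ab concurrency levels to find where failures start.
# Note: "mydomain.com" is a placeholder; substitute the real host.
cmds=""
for c in 50 100 150 200 250 300; do
  cmd="ab -n 1000 -c $c http://mydomain.com/test.php/"
  cmds="$cmds$cmd
"
  echo "$cmd"
  # To run for real, replace the echo above with:
  #   $cmd | grep -E 'Failed requests|Requests per second'
done
```

Watching the "Failed requests" line as -c grows shows where the socket limits start to bite.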
I first tried FastCGI over a TCP socket, but those ran out quickly, since incoming connections to the server also count against the same limit.
Next I tried a Unix socket, but then I hit the somaxconn limit near 128. So I changed the Nginx fastcgi_pass to use an upstream block and configured php-fpm with two worker pools to get more sockets into use.
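The upstream setup looks roughly like this (a sketch; the pool socket paths and location block are illustrative, not my exact config):

```nginx
# Spread FastCGI traffic over two php-fpm Unix sockets so each
# socket's listen queue stays under the somaxconn ceiling (128).
upstream php_backend {
    server unix:/var/run/php-fpm-pool1.sock;
    server unix:/var/run/php-fpm-pool2.sock;
}

server {
    # ... listen / server_name / root as usual ...

    location ~ \.php {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_backend;
    }
}
```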
Each Unix connection/accept pair seems to consume two sockets, so I managed to get a bit over 150 concurrent connections, depending on how many other Unix sockets were already in use. I therefore capped each worker pool's backlog at 75, so 150 is the maximum I can reach.
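The matching php-fpm side is two pools with the backlog capped, roughly like this (pool names, socket paths, and the pm.max_children value are illustrative; max_children would need tuning for 384 MB of RAM):

```ini
; Two php-fpm pools, each with its own Unix socket and a backlog of 75,
; so the combined queue stays at 150 (2 x 75).
[pool1]
listen = /var/run/php-fpm-pool1.sock
listen.backlog = 75
pm = dynamic
pm.max_children = 8

[pool2]
listen = /var/run/php-fpm-pool2.sock
listen.backlog = 75
pm = dynamic
pm.max_children = 8
```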
Performance is good with this setup, but the php-fpm backlog worries me a bit: if the server is under heavier load and I get some spikes in traffic, I would rather not see 502s so easily; more delay in serving pages would be acceptable in that case.
Is there some way to buffer FastCGI requests on the Nginx side when running out of Unix/TCP sockets?
Is there anything else I can do to increase concurrent PHP-handling capability?
Br.
ElToro