php-fpm processes max out even though CPU is not that high

Posted by Jim Hackett
January 23, 2013 06:40PM
My company is looking at a combination of PHP-FPM/APC/nginx to respond to a
very large number of requests, very quickly, globally. In general we see a
huge performance increase over lighttpd; however, once we get to around 200
req/sec things get wacky. All of the idle processes are taken up and we
start responding with 502 errors. We've increased shared memory and toyed
around with the nginx/php-fpm config to no avail. Until we hit around 200
req/sec everything looks really great (CPU levels are fine, memory is fine),
but once we hit that mark we get a lot of socket errors in the nginx error
log and many 502 errors. Any help would be GREATLY appreciated. Below I
have included some of our config; please let me know what else you need:

php-fpm.conf

[global]
pid = /var/run/php-fpm/php-fpm.pid
error_log = log/php-fpm_error.log
log_level = debug
;emergency_restart_threshold = 0
;emergency_restart_interval = 0
;process_control_timeout = 0

process.max = 500

include=/etc/php-fpm.d/*.conf

[www]

user = nobody
group = nobody
listen = /tmp/pool1.socket
listen.allowed_clients = 127.0.0.1
slowlog = /var/log/php-fpm/www-slow.log
request_slowlog_timeout = 1
catch_workers_output = yes
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
rlimit_files = 50000
request_terminate_timeout = 30s

pm = ondemand
pm.max_children = 300
pm.process_idle_timeout = 2s
pm.max_requests = 5000

pm.status_path = /monitor/status.fpm
listen.backlog=0

nginx.conf

user nobody nobody;
worker_processes 10;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    index index.html index.php;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" $request_time';
    error_log /var/log/nginx/error.log;
    access_log off;
    sendfile on;

    keepalive_timeout 0s;
    fastcgi_read_timeout 360s;

    upstream phpfarm {
        server unix:/tmp/pool1.socket weight=100 max_fails=5 fail_timeout=5;
    }

    server {
        listen 80;
        listen 443 default_server ssl;
        root /srv/www/;
        fastcgi_busy_buffers_size 256k;
        fastcgi_buffers 4 256k;
        fastcgi_buffer_size 128k;
        fastcgi_temp_file_write_size 256k;
        proxy_buffering off;
        tcp_nopush on;
        tcp_nodelay on;
        auth_digest_user_file user.passwd;
        auth_digest_expires 300s;

        location /monitor {
            auth_digest 'Authorized users only';
            location /monitor/status {
                extended_status on;
                access_log off;
                allow all;
            }
            location /monitor/status.fpm {
                fastcgi_pass phpfarm;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                include fastcgi_params;
            }
            location /monitor/status/php-fpm {
                alias /usr/share/fpm/;
                allow all;
            }
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass phpfarm;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                include fastcgi_params;
            }
        }

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass phpfarm;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            include fastcgi_params;
        }

        location = /favicon.ico {
            return 204;
            access_log off;
            log_not_found off;
        }
    }
}

--
Vid Luther
Re: php-fpm processes max out even though CPU is not that high
January 23, 2013 07:16PM
Hi Jim,
So a few questions/suggestions.

1. In our experience, the culprit is usually long-standing database
queries. When you hit 200 requests/sec, do you also see a lot of
processes in MySQL or your database?
2. Raise the max socket connections setting, net.core.somaxconn, in
/etc/sysctl.conf; by default it's 1024 and you may need it to be
higher. We use net.core.somaxconn = 4096

But check #1 first, because you'll reach that 4096 pretty quickly if your
DB is blocking all the requests.

I'm willing to bet $1.42 that it's the DB, and that you'll see a ton of
processes but not a lot of CPU usage. :)
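If it helps, both points can be checked quickly from a shell. This is only a sketch; the paths are standard Linux, and the mysql command is a placeholder for whatever client your DB uses:

```shell
# Kernel cap on the accept queue (point 2); listen.backlog cannot
# effectively exceed this value.
cat /proc/sys/net/core/somaxconn

# Cumulative connections dropped/overflowed on listen queues; a number
# that rises under load points at a saturated backlog.
netstat -s 2>/dev/null | grep -i 'listen' || true

# For point 1, watch the database while you approach 200 req/sec, e.g.:
#   mysql -e 'SHOW FULL PROCESSLIST'
```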






--
Vid Luther
CEO and Founder
ZippyKid
Managed WordPress Hosting
http://zippykid.com/
210-789-0369

--
Maciej Lisiewski
Re: php-fpm processes max out even though CPU is not that high
January 23, 2013 08:10PM
200 req/sec is about the limit for file-based sessions (unless you're
using SSDs). It's similar with database sessions, unless you skip syncing
on every change (not recommended unless you have a battery-backed
controller). Try in-memory sessions (memcached) to test whether that's the
problem, and if it is, switch to a hybrid of memory + DB for a combination
of speed and persistent storage.
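For reference, a minimal sketch of what memcached-backed sessions look like; this assumes the memcached PECL extension is installed and a memcached daemon is listening on its default port, and the values are illustrative:

```ini
; php.ini (or php_admin_value[...] lines in the FPM pool config):
; switch session storage from files to memcached
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"
```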

--
Maciej Lisiewski

--
Alexey A. Rybak
Re: php-fpm processes max out even though CPU is not that high
January 24, 2013 03:38AM
Hi!

To determine what exactly causes the problem, you should figure out
what's going on inside the PHP workers. All the suggestions made (disk -
db, sessions) are reasonable, but you have to know for sure.
AFAIR, a 502 Bad Gateway on the nginx side can mean:
* the workers just don't work at all for some reason (e.g. FPM is not
started, a global shared-memory crash, et cetera). I don't think that's
your case.
* the workers stall getting data from another resource, so all your
workers are waiting on some I/O and there's no worker left to serve new
requests.

So, assuming the PHP workers stall getting data from somewhere, you
have to figure out exactly which operation's time increases at this
threshold. You can do it either with manual timers and manual
aggregation, or with Pinba. Sometimes just looking in the PHP error log
can help: all the workers may be stalled connecting to some resource,
waiting for a connect timeout, so your PHP log will be full of the
corresponding errors. But debugging with timers will definitely give you
much more information.

Also, on a dedicated box for just one project, "pm = ondemand" is not
the best choice; "static" is better.

Hope this helps.
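A crude way to start on those manual timers, before instrumenting the PHP code itself, is to time the suspect operation from a shell. This is only a sketch; HOST and PORT are placeholders for your DB or cache, and it assumes bash's /dev/tcp and GNU date's %N:

```shell
# Time a single TCP connect to a backend. A connect that takes whole
# seconds under load is exactly the worker stall described above.
HOST=127.0.0.1 PORT=3306
start=$(date +%s%N)
if (exec 3<>"/dev/tcp/$HOST/$PORT") 2>/dev/null; then
    status=ok
else
    status=fail
fi
end=$(date +%s%N)
echo "connect $status in $(( (end - start) / 1000000 )) ms"
```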



--

wbr,
Alexey Rybak
Badoo Development (badoo.com)

--
Jérôme Loyet
Re: php-fpm processes max out even though CPU is not that high
January 24, 2013 03:56AM
Hi there

Static pm is definitely the choice to make for high-traffic websites.

To figure out the bottleneck, you can also try the slow-request (slowlog)
feature.

--
Cristian Rusu
Re: php-fpm processes max out even though CPU is not that high
January 24, 2013 04:32AM
Hello

I was/am in a similar situation, where it works for a while, then suddenly
chokes with nginx 502s for a few minutes, then it's fine again even at a
higher request rate than at choke time.

I discovered lately that we should set listen.backlog = 4096 or bigger,
usually the same as your sysctl somaxconn.

You can set that like this:
echo 'net.core.somaxconn=4096' >> /etc/sysctl.conf
sysctl -p

Also use Unix sockets if nginx and php-fpm are on the same box; they are
faster than the TCP 127.0.0.1:9000 thing.

I changed these things about 4 days ago and I am clear of the problem so far.

Also set up php-fpm slow logging; it's pretty useful, as I can see the
exact script line and function that slow things down. I have it like this:

request_terminate_timeout = 60s
request_slowlog_timeout = 30
slowlog = /var/log/php-fpm/www-slow.log

Make php-fpm static as suggested above. Set pm.max_requests = 500 so that
it recycles PHP workers faster and prevents memory leaks. Better to
respawn than to waste resources.
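Collected into one sketch, the pool-side changes suggested in this thread might look like the fragment below; the numbers are the ones mentioned here, not tuned recommendations, and pm.max_children should be sized to your RAM:

```ini
pm = static
pm.max_children = 300        ; as in the original pool
pm.max_requests = 500        ; recycle workers to contain leaks
listen.backlog = 4096        ; match net.core.somaxconn
```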

Also make sure MySQL is not the cause; many times it is. Mine is on a
different box and has never reported any issue, but I haven't managed to
catch it in the act, as it's a matter of minutes, usually after 1 AM
(obviously :)).

Also play with timeouts; maybe increasing those for debugging will surface
the real error in a log other than php-fpm's. As for the 502 itself, don't
bother chasing it directly; it's not relevant. You can shut down MySQL and
nginx will say 502. :P


---------------------------------------------------------------
Cristian Rusu
Web Developement & Electronic Publishing

======
Crilance.com
Crilance.blogspot.com



--
Jérôme Loyet
Re: php-fpm processes max out even though CPU is not that high
January 24, 2013 04:52AM
Just be cautious about playing with listen.backlog.

The backlog defines the queue size, on the kernel side, for incoming
requests that have not yet been handed to php-fpm (or any other network
daemon). Having requests in this queue means php-fpm cannot handle them as
they arrive, so they wait, queued, until php-fpm is ready. For the
request, it means being paused before being handled by php-fpm. That's
better than being rejected, but it implies a delay in the response.

It's just a note to remember, and it's definitely not a perfect solution.
Ideally, listen.backlog would be set to 0 and php-fpm would handle all
incoming requests smoothly. In practice, listen.backlog must not be set to
0, as it acts as a "safety net" that delays requests instead of dropping
them.

To monitor the size of the backlog queue, you can use the FPM status page,
which returns the following information about it:

; listen queue - the number of requests in the queue of pending
; connections (see backlog in listen(2));
; max listen queue - the maximum number of requests in the queue
; of pending connections since FPM started;
; listen queue len - the size of the socket queue of pending connections;

In high-traffic website hosting, you have to understand every element
in the chain and be sure each works as expected. In this case, the
problem is not the backlog size; increasing it could be an imperfect
workaround, that's all. Focus on understanding why requests are stuck
in PHP (CPU limits, waiting for I/O, kernel scheduling, ...). From
there you'll be able to find a proper solution. Maybe your server is
undersized; in that case, if you don't want to upgrade it (for
financial reasons, for example), increasing the backlog can be an
acceptable solution, as it will "only" delay responses rather than
drop them.

I know these kinds of problems can be tricky to resolve.

Good luck !

++ jerome



2013/1/24 Cristian Rusu <crirus@gmail.com>

> Hello
>
> I was/am in similar situation where it works for a while then suddenly it
> chokes with Nginx 502 for few minutes then it's fine even at higher users
> per seconds than at choke time.
>
> I discovered lately that we should put the listen.backlog = 4096 or bigger
> usually the same as your sysctl somaxconn
>
> you can set that like
> echo 'net.core.somaxconn=4096' >> /etc/sysctl.conf
> sysctl -p
>
> Also use unix sockets if nginx and php-fpm are on the same box, they are
> faster than TCP 127.0.0.1:9000 thing.
>
> I changed these things like 4 days ago and I am clear of the problem so
> far.
>
> Also set php-fpm slow logging, it's pretty useful I can see exact script
> line and function that slow things down
> request_terminate_timeout = 60s
> request_slowlog_timeout = 30
> slowlog = /var/log/php-fpm/www-slow.log
>
> I have it like this
>
>
> Make the php-fpm static as suggested above
> Put the pm.max_requests = 500 so that it kills php threads faster and
> prevent memory leaks. Better respawn than waste resources.
>
> Also make sure mysql is not the cause, many times it is. Mine is on a
> different box and never reported any issue, but I didn't manage to catch it
> in the act, as it's a matter of minutes usually after 1AM (obviously :)).
>
> Also play with timeouts, maybe increasing those for debug will pop-up the
> real error in another log than php-fpm. As far as 502, don't bother
> searching for it directly, it's not relevant. You can shut down mysql and
> Nginx say 502. :P
>
>
> ---------------------------------------------------------------
> Cristian Rusu
> Web Developement & Electronic Publishing
>
> ======
> Crilance.com
> Crilance.blogspot.com
>
>
> On Thu, Jan 24, 2013 at 10:54 AM, Jérôme Loyet <jerome@loyet.net> wrote:
>
>> Hi there
>>
>> Static pm is definitely the choice to make for high traffic websites.
>>
>> To figure out the bottleneck you can also try the slow request feature
>> Le 24 janv. 2013 09:37, "Alexey A. Rybak" <alexey.rybak@gmail.com> a
>> écrit :
>>
>> Hi!
>>>
>>> To determine what exactly causes the problem you should figure out
>>> what's going with/inside PHP workers.
>>> All the suggestions made (disk - db, sessions) are reasonable, but you
>>> have to know for sure.
>>> AFAIR 502 as bad gateway on nginx side can be:
>>> * workers just don't work at all for some reason (like fpm is not
>>> started, or global shared memory crash et cetera). I don't think it's
>>> your case.
>>> * workers stall getting data from other resource, so all you workers
>>> wait in some IO and there's no worker to serve new request.
>>>
>>> So assuming PHP workers just stall getting data from somewhere you
>>> have to figure out exactly what operation time is increased at this
>>> threshold.
>>> You can either do it with manual timers and manual aggregation or
>>> pinba. Possibly just looking in PHP error log can help: sometimes all
>>> the workers stall just connecting to some resource and wait for a
>>> connect timeout, so your PHP log will be full of corresponding errors.
>>> But debugging with timers will definitely give you much more
>>> information.
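
A minimal sketch of the manual-timer approach described above, in PHP. The helper name, the 0.5s threshold, and the logging destination are illustrative assumptions, not from the thread:

```php
<?php
// Hypothetical helper: wrap a suspected backend call with a manual timer
// and log slow operations to the PHP error log for later aggregation.
function timed(string $label, callable $fn)
{
    $start = microtime(true);
    try {
        return $fn();
    } finally {
        $elapsed = microtime(true) - $start;
        if ($elapsed > 0.5) { // threshold is an assumption; tune per operation
            error_log(sprintf('SLOW %s took %.3fs', $label, $elapsed));
        }
    }
}

// Usage sketch: time the suspected external call
// $rows = timed('mysql.query', function () use ($db, $sql) { return $db->query($sql); });
```

If all workers stall on one resource at the traffic threshold, that resource's label will dominate the slow entries.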
>>>
>>> Also, on a dedicated box serving just one project, "pm = ondemand" is not
>>> the best choice; "static" is better.
>>>
>>> Hope this helps.
>>>
>>> On Thu, Jan 24, 2013 at 2:10 AM, Jim Hackett <jimh@exelate.com> wrote:
>>> > My company is looking at a combination of PHP-FPM/APC/nginx to respond
>>> to a
>>> > very large number of requests, very quickly, globally. In general we
>>> see a
>>> > huge performance increase over lighttpd, however once we get to around
>>> 200
>>> > req/sec things get wacky. All of the idle processes are taken up and we
>>> > start responding with 502 errors. We've increased shared memory, toyed
>>> > around with nginx/php-fpm config to no avail. Until we hit around 200
>>> > req/sec everything looks really great, CPU levels are fine, memory is
>>> fine,
>>> > but once we hit that mark we get a lot of socket errors in the nginx
>>> error
>>> > log and many 502 errors. Any help would be GREATLY appreciated.
>>> Below I
>>> > have included some of our config, please let me know what else you
>>> need:
>>> >
>>> > php-fpm.conf
>>> >
>>> > [global]
>>> > pid = /var/run/php-fpm/php-fpm.pid
>>> > error_log = log/php-fpm_error.log
>>> > log_level = debug
>>> > ;emergency_restart_threshold = 0
>>> > ;emergency_restart_interval = 0
>>> > ;process_control_timeout = 0
>>> >
>>> > process.max = 500
>>> >
>>> > include=/etc/php-fpm.d/*.conf
>>> >
>>> > [www]
>>> >
>>> > user = nobody
>>> > group = nobody
>>> > listen = /tmp/pool1.socket
>>> > listen.allowed_clients = 127.0.0.1
>>> > slowlog = /var/log/php-fpm/www-slow.log
>>> > request_slowlog_timeout = 1
>>> > catch_workers_output = yes
>>> > php_admin_value[error_log] = /var/log/php-fpm/www-error.log
>>> > php_admin_flag[log_errors] = on
>>> > rlimit_files = 50000
>>> > request_terminate_timeout = 30s
>>> >
>>> > pm = ondemand
>>> > pm.max_children = 300
>>> > pm.process_idle_timeout = 2s
>>> > pm.max_requests = 5000
>>> >
>>> > pm.status_path = /monitor/status.fpm
>>> > listen.backlog=0
>>> >
>>> > nginx.conf
>>> >
>>> > user nobody nobody;
>>> > worker_processes 10;
>>> > pid /var/run/nginx.pid;
>>> >
>>> > events {
>>> > worker_connections 1024;
>>> > }
>>> >
>>> >
>>> > http {
>>> > include mime.types;
>>> > default_type application/octet-stream;
>>> > index index.html index.php;
>>> > log_format main '$remote_addr - $remote_user [$time_local]
>>> > "$request" '
>>> > '$status $body_bytes_sent "$http_referer" '
>>> > '"$http_user_agent" "$http_x_forwarded_for"
>>> > $request_time';
>>> > error_log /var/log/nginx/error.log;
>>> > access_log off;
>>> > sendfile on;
>>> >
>>> > keepalive_timeout 0s;
>>> > fastcgi_read_timeout 360s;
>>> >
>>> > upstream phpfarm {
>>> > server unix:/tmp/pool1.socket weight=100 max_fails=5 fail_timeout=5;
>>> > }
>>> >
>>> > server {
>>> > listen 80;
>>> > listen 443 default_server ssl;
>>> > root /srv/www/;
>>> > fastcgi_busy_buffers_size 256k;
>>> > fastcgi_buffers 4 256k;
>>> > fastcgi_buffer_size 128k;
>>> > fastcgi_temp_file_write_size 256k;
>>> > proxy_buffering off;
>>> > tcp_nopush on;
>>> > tcp_nodelay on;
>>> > auth_digest_user_file user.passwd;
>>> > auth_digest_expires 300s;
>>> > location /monitor {
>>> > auth_digest 'Authorized users only';
>>> > location /monitor/status {
>>> > extended_status on;
>>> > access_log off;
>>> > allow all;
>>> > }
>>> > location /monitor/status.fpm {
>>> > fastcgi_pass phpfarm;
>>> > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>>> > fastcgi_param PATH_INFO $fastcgi_script_name;
>>> > include fastcgi_params;
>>> > }
>>> > location /monitor/status/php-fpm {
>>> > alias /usr/share/fpm/;
>>> > allow all;
>>> > }
>>> > location ~ \.php$ {
>>> > try_files $uri =404;
>>> > fastcgi_pass phpfarm;
>>> > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>>> > fastcgi_param PATH_INFO $fastcgi_script_name;
>>> > include fastcgi_params;
>>> > }
>>> > }
>>> >
>>> > location ~ \.php$ {
>>> > try_files $uri =404;
>>> > fastcgi_pass phpfarm;
>>> > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>>> > fastcgi_param PATH_INFO $fastcgi_script_name;
>>> > include fastcgi_params;
>>> > }
>>> >
>>> > location = /favicon.ico {
>>> > return 204;
>>> > access_log off;
>>> > log_not_found off;
>>> > }
>>> > }
>>> > }
>>> >
>>> > --
>>> >
>>> >
>>> >
>>>
>>>
>>> --
>>>
>>> wbr,
>>> Alexey Rybak
>>> Badoo Development (badoo.com)
>>>
>>> --
>>>
>>>
>>>
>>> --
>>
>>
>>
>>
>
> --
>
>
>
>

--