Re: multiple worker processes (problem or normal?)

tegrof
June 07, 2011 11:06PM
Note the key word "[debug]" in those lines: this is not an error, just a normal case.
You can configure how nginx uses the accept mutex lock; a sketch of the relevant events-block settings is below.
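A minimal sketch of that configuration, assuming the stock 1.0.x defaults (as far as I remember, accept_mutex is on by default and accept_mutex_delay defaults to 500ms, which is where the "epoll timer: 500" in your log comes from):

events {
    worker_connections  1024;
    accept_mutex        on;       # serialize accept() between workers (the default here)
    accept_mutex_delay  500ms;    # how long a worker waits before retrying the lock (the default)
}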
Roughly, your two workers behave like this (see the C sketch after this explanation):
worker process 1:
1. tries to get the accept mutex lock; assuming it succeeds, it adds the listen fd to epoll.
2. calls epoll_wait with a timeout of at most 500 ms, then handles the events returned by epoll_wait.
worker process 2:
1. tries to get the accept mutex lock; assuming it does NOT get the lock, it does NOT add the listen fd to epoll.
2. calls epoll_wait with a timeout of at most 500 ms, then handles the events returned by epoll_wait.

The accept mutex lock is useful on systems that still suffer from the thundering-herd problem when several processes accept on the same socket; it is said that Linux does NOT have this problem any more.
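To make that cycle concrete, here is a minimal self-contained sketch of the pattern, not nginx source: the flock()-based lock file, its path, and the port number are stand-ins I made up for nginx's shared-memory accept mutex, while the 500 ms epoll_wait timeout mirrors accept_mutex_delay.

#include <fcntl.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/file.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

int main(void)
{
    /* Listening socket created once and inherited by both workers,
     * as the nginx master does. Port 8080 is arbitrary. */
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *) &addr, sizeof(addr));
    listen(listen_fd, 128);

    for (int w = 0; w < 2; w++) {          /* two workers, as in the ps output below */
        if (fork() != 0)
            continue;                      /* parent ("master") keeps forking */

        /* Each worker opens the lock file itself so their flock()s compete;
         * this file is a stand-in for nginx's shared-memory accept mutex. */
        int lock_fd = open("/tmp/accept_mutex.lock", O_CREAT | O_RDWR, 0600);

        int epfd = epoll_create1(0);
        int listening = 0;                 /* is listen_fd registered in epoll? */
        struct epoll_event ev, events[MAX_EVENTS];

        for (;;) {                         /* the "worker cycle" from the log */
            /* 1. Try to take the accept mutex without blocking. */
            int got_mutex = (flock(lock_fd, LOCK_EX | LOCK_NB) == 0);

            if (got_mutex && !listening) {
                ev.events = EPOLLIN;       /* holder listens for new connections */
                ev.data.fd = listen_fd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);
                listening = 1;
            } else if (!got_mutex && listening) {
                epoll_ctl(epfd, EPOLL_CTL_DEL, listen_fd, NULL);
                listening = 0;             /* loser must not accept */
            }

            /* 2. Wait at most 500 ms, then handle whatever came back.
             * For the loser the interest set is empty, so it just wakes up
             * about every 500 ms and retries the lock, which is exactly
             * the "accept mutex lock failed: 0" / "epoll timer: 500" /
             * "timer delta: 500" pattern in the error.log excerpt below. */
            int n = epoll_wait(epfd, events, MAX_EVENTS, 500);

            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == listen_fd) {
                    int c = accept(listen_fd, NULL, NULL);
                    if (c >= 0)
                        close(c);          /* ... handle the connection ... */
                }
            }

            if (got_mutex)
                flock(lock_fd, LOCK_UN);   /* let the other worker have a turn */
        }
    }

    pause();                               /* "master" just waits */
    return 0;
}

If you run this and watch the two children, the one that loses the lock wakes up roughly every 500 ms with nothing to do, which is the same harmless pattern you see in your debug log.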


2011-06-08

tegrof

From: Zhu Qun-Ying
Sent: 2011-06-08 01:48:31
To: nginx-devel
Cc:
Subject: multiple worker processes (problem or normal?)

Hi,
While testing nginx 1.0.4, I found these messages in error.log:
2011/06/07 10:36:29 [debug] 10939#0: timer delta: 501
2011/06/07 10:36:29 [debug] 10939#0: posted events 00000000
2011/06/07 10:36:29 [debug] 10939#0: worker cycle
2011/06/07 10:36:29 [debug] 10939#0: accept mutex lock failed: 0
2011/06/07 10:36:29 [debug] 10939#0: epoll timer: 500
2011/06/07 10:36:29 [debug] 10939#0: timer delta: 500
2011/06/07 10:36:29 [debug] 10939#0: posted events 00000000
2011/06/07 10:36:29 [debug] 10939#0: worker cycle
2011/06/07 10:36:29 [debug] 10939#0: accept mutex lock failed: 0
2011/06/07 10:36:29 [debug] 10939#0: epoll timer: 500
2011/06/07 10:36:30 [debug] 10939#0: timer delta: 501
2011/06/07 10:36:30 [debug] 10939#0: posted events 00000000
2011/06/07 10:36:30 [debug] 10939#0: worker cycle
2011/06/07 10:36:30 [debug] 10939#0: accept mutex lock failed: 0
2011/06/07 10:36:30 [debug] 10939#0: epoll timer: 500
2011/06/07 10:36:30 [debug] 10939#0: timer delta: 501
2011/06/07 10:36:30 [debug] 10939#0: posted events 00000000
2011/06/07 10:36:30 [debug] 10939#0: worker cycle
2011/06/07 10:36:30 [debug] 10939#0: accept mutex lock failed: 0
2011/06/07 10:36:30 [debug] 10939#0: epoll timer: 500
2011/06/07 10:36:31 [debug] 10939#0: timer delta: 500
It keeps repeating every half second, trying to acquire the
accept lock.
Is this normal?
My system is Slackware 13.37, uname -a output:
Linux qy83 2.6.39.1 #1 SMP PREEMPT Fri Jun 3 14:00:11 PDT 2011 i686
Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz GenuineIntel GNU/Linux
ps output:
10937 ? Ss 0:00 nginx: master process /usr/sbin/nginx -c
/etc/nginx/n
10938 ? S 0:00 nginx: worker process
10939 ? S 0:00 nginx: worker process
nginx -V output:
nginx: nginx version: nginx/1.0.4
nginx: TLS SNI support enabled
nginx: configure arguments: --prefix=/usr --sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid
--lock-path=/var/lock/nginx --user=nobody --group=nogroup
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --with-rtsig_module
--with-select_module --with-poll_module --with-http_ssl_module
--with-http_realip_module --with-http_addition_module
--with-http_xslt_module --with-http_sub_module --with-http_dav_module
--with-http_flv_module --with-http_gzip_static_module
--with-http_random_index_module --with-http_secure_link_module
--with-http_stub_status_module --with-http_perl_module
--with-perl_modules_path=/usr/lib/perl5/vendor_perl/5.12.3
--http-client-body-temp-path=/var/tmp/nginx_client_body_temp
--http-proxy-temp-path=/var/tmp/nginx_proxy_temp
--http-fastcgi-temp-path=/dev/shm --without-mail_pop3_module
--without-mail_imap_module --without-mail_smtp_module --with-debug
cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
stepping : 10
cpu MHz : 2000.000
cache size : 6144 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc
arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority
bogomips : 6000.17
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
stepping : 10
cpu MHz : 2000.000
cache size : 6144 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 1
initial apicid : 1
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc
arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority
bogomips : 5999.65
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://nginx.org/mailman/listinfo/nginx-devel
