Also, the 20+ lines of "vmstat 1" output on the 2.6.32 kernel are given below:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b  swpd   free   buff    cache  si  so    bi  bo    in   cs us sy id wa st
 0  1     0 259020  49356 31418328   0   0    64  24     0    4  5  0 95  0  0
 1  0     0 248100  49356 31418564   0   0   704   4 35809 3159  0  1 99  0  0
 0  0     0 245248  49364 31419856   0   0  1340  48 35114 3217  0  0 99  0  0
 1  0     0 243884  49364 31421084   0   0   940   4 35176 3106  0  0 99  0  0
 0  0     0 243512  49364 31422152   0   0   812   4 35837 3204  0  0 99  0  0
 0  0     0 241608  49364 31423056   0   0  1304   4 35585 3177  1  1 98  0  0
 1  0     0 241076  49364 31424132   0   0  1004   4 35774 3199  0  0 99  0  0
 0  0     0 241332  49372 31424644   0   0   724  76 35526 3203  0  0 99  0  0
 0  0     0 240464  49372 31425376   0   0   776   4 35968 3162  0  0 99  0  0
 0  1     0 238236  49372 31426244   0   0   652   4 35705 3131  0  0 99  0  0
 0  0     0 234632  49372 31426924   0   0  1088   4 36220 3309  0  1 99  0  0
 0  0     0 233640  49372 31428492   0   0   872   4 35663 3235  0  1 99  0  0
 0  0     0 232896  49376 31429016   0   0  1272  44 35403 3179  0  0 99  0  0
 1  0     0 231024  49376 31430064   0   0   528   4 34713 3238  0  0 99  0  0
 0  0     0 239644  49376 31430564   0   0   808   4 35493 3143  0  1 99  0  0
 3  0     0 241704  49376 31431372   0   0   612   4 35610 3400  1  1 97  0  0
 1  0     0 244092  49376 31432028   0   0   280   4 35787 3333  1  1 99  0  0
 2  0     0 244348  49376 31433232   0   0  1260   8 34700 3072  0  0 99  0  0
 0  0     0 243908  49384 31433728   0   0   512  32 35019 3145  0  1 99  0  0
 1  0     0 241104  49384 31435004   0   0  1440   4 35586 3211  0  1 99  0  0
 0  0     0 234600  49384 31435476   0   0   868   4 35240 3235  0  1 99  0  0
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b  swpd   free   buff    cache  si  so    bi  bo    in   cs us sy id wa st
 1  0     0 233656  49384 31436376   0   0   704   4 35297 3126  0  1 99  0  0
 0  0     0 233284  49384 31437176   0   0   192   4 35022 3202  0  0 99  0  0
 0  0     0 228952  49392 31437336   0   0   868  32 34986 3211  0  1 99  0  0
 0  0     0 232176  49392 31438124   0   0   448   4 35785 3294  0  1 99  0  0
 0  0     0 230076  49392 31438664   0   0  1052   4 35532 3297  1  1 98  0  0
 1  0     0 231184  49392 31439608   0   0   436   4 34967 3177  0  1 99  0  0
 1  0     0 224300  49392 31440044   0   0   624   4 34577 3216  0  1 99  0  0
 0  0     0 223748  49396 31440664   0   0   460  44 34415 3155  0  0 99  0  0
 1  0     0 223260  49396 31441612   0   0   768   4 35287 3194  0  1 99  0  0
 0  0     0 230464  49396 31441996   0   0   772   4 35140 3208  0  0 99  0  0
 1  0     0 225504  49396 31442668   0   0   564   4 35316 3133  0  0 99  0  0
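To summarize the interrupt ("in"), context-switch ("cs"), and idle ("id") columns above, a short script can average them across samples. This is only a sketch (the filename-free SAMPLE string below embeds two illustrative rows from the output above); it assumes the standard 17-column layout of vmstat on this 2.6.32 kernel and skips the repeated header lines:

```python
# Sketch: average the "in", "cs", and "id" columns of saved `vmstat 1` output.
# SAMPLE holds two rows copied from the log above; in practice the text would
# come from a file, e.g. open("vmstat.log").read().
SAMPLE = """\
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b  swpd   free   buff    cache  si  so    bi  bo    in   cs us sy id wa st
 1  0     0 248100  49356 31418564   0   0   704   4 35809 3159  0  1 99  0  0
 0  0     0 245248  49364 31419856   0   0  1340  48 35114 3217  0  0 99  0  0
"""

def summarize(text):
    # Keep only numeric data rows; header lines start with "procs" or "r".
    rows = [list(map(int, line.split())) for line in text.splitlines()
            if line.split() and line.split()[0].isdigit()]
    n = len(rows)
    # Column indices in the 17-field layout: in=10, cs=11, id=14.
    return {
        "samples": n,
        "avg_in": sum(r[10] for r in rows) / n,
        "avg_cs": sum(r[11] for r in rows) / n,
        "avg_id": sum(r[14] for r in rows) / n,
    }

print(summarize(SAMPLE))
```

Run against the full log, this makes the pattern above explicit: roughly 35k interrupts/s and 3.2k context switches/s while the CPUs sit ~99% idle.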
On Thu, Jan 24, 2013 at 12:00 AM, shahzaib shahzaib
<shahzaib.cb@gmail.com> wrote:
> Following is the iostat output at 2200+ concurrent connections; the kernel
> version is 2.6.32:
>
>
> Linux 2.6.32-279.19.1.el6.x86_64 (DNTX005.local) 01/23/2013 _x86_64_ (16 CPU)
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 1.75 3.01 0.49 0.13 0.00 94.63
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 23.27 2008.64 747.29 538482374 200334422
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.97 0.00 1.10 0.19 0.00 97.74
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 30.00 2384.00 112.00 2384 112
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.13 0.00 0.52 0.13 0.00 99.22
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 21.00 1600.00 8.00 1600 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.19 0.00 0.45 0.26 0.00 99.10
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 37.00 2176.00 8.00 2176 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.45 0.00 0.58 0.19 0.00 98.77
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 24.00 1192.00 8.00 1192 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.32 0.00 0.45 0.19 0.00 99.03
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 29.00 2560.00 8.00 2560 8
>
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.32 0.00 0.65 0.19 0.00 98.84
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 35.00 2584.00 152.00 2584 152
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.26 0.00 0.39 0.39 0.00 98.96
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 25.00 1976.00 8.00 1976 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.32 0.00 0.52 0.39 0.00 98.77
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 33.00 1352.00 8.00 1352 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.26 0.00 0.58 0.26 0.00 98.90
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 28.00 2408.00 8.00 2408 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.45 0.00 0.65 0.06 0.00 98.84
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 37.00 1896.00 8.00 1896 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.71 0.00 0.97 0.13 0.00 98.19
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 33.00 2600.00 64.00 2600 64
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.32 0.00 0.65 0.26 0.00 98.77
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 20.00 1520.00 8.00 1520 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.19 0.00 0.39 0.19 0.00 99.22
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 49.00 3088.00 80.00 3088 80
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.26 0.00 0.91 0.26 0.00 98.58
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 48.00 1328.00 8.00 1328 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.32 0.00 0.32 0.26 0.00 99.09
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 32.00 1528.00 8.00 1528 8
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.45 0.00 0.58 0.39 0.00 98.58
>
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 35.00 1624.00 72.00 1624 72
>
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.39 0.00 0.58 0.19 0.00 98.84
>
>
>
> On Wed, Jan 23, 2013 at 11:07 PM, Lukas Tribus <luky-37@hotmail.com> wrote:
>
>>
>> Can you send us 20+ lines of output from "vmstat 1" under this load?
>> Also, what exact linux kernel are you running ("cat /proc/version")?
>>
>>
>> ________________________________
>> > Date: Wed, 23 Jan 2013 21:51:43 +0500
>> > Subject: Re: Nginx flv stream gets too slow on 2000 concurrent connections
>> > From: shahzaib.cb@gmail.com
>> > To: nginx@nginx.org
>> >
>> > Following is the "iostat 1" output at 3000+ concurrent connections:
>> >
>> > avg-cpu: %user %nice %system %iowait %steal %idle
>> > 1.72 2.96 0.47 0.12 0.00 94.73
>> >
>> > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
>> > sda 22.47 1988.92 733.04 518332350 191037238
>> >
>> > avg-cpu: %user %nice %system %iowait %steal %idle
>> > 0.39 0.00 0.91 0.20 0.00 98.50
>> >
>> > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
>> > sda 22.00 2272.00 0.00 2272 0
>> >
>> > avg-cpu: %user %nice %system %iowait %steal %idle
>> > 0.46 0.00 0.91 0.07 0.00 98.57
>> >
>> > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
>> > sda 23.00 864.00 48.00 864 48
>> >
>> > avg-cpu: %user %nice %system %iowait %steal %idle
>> > 0.39 0.00 0.72 0.33 0.00 98.56
>> >
>> > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
>> > sda 60.00 3368.00 104.00 3368 104
>> >
>> > avg-cpu: %user %nice %system %iowait %steal %idle
>> > 0.20 0.00 0.65 0.20 0.00 98.95
>> >
>> >
>> >
>> > On Wed, Jan 23, 2013 at 8:30 PM, shahzaib shahzaib
>> > <shahzaib.cb@gmail.com> wrote:
>> > Skechboy, I sent you the output of only 1000 concurrent connections
>> > because it wasn't peak traffic hours. I'll send you the output of
>> > iostat 1 when concurrent connections hit 2000+ in the next hour.
>> > Please keep in touch because I need to resolve this issue :(
>> >
>> >
>> > On Wed, Jan 23, 2013 at 8:21 PM, skechboy <nginx-forum@nginx.us> wrote:
>> > From your output I can see that it isn't an IO issue; I wish I could
>> > help you more.
>> >
>> > Posted at Nginx Forum:
>> > http://forum.nginx.org/read.php?2,235447,235476#msg-235476
>> >
>> > _______________________________________________
>> > nginx mailing list
>> > nginx@nginx.org
>> > http://mailman.nginx.org/mailman/listinfo/nginx
>> >
>> >
>> >
>>
>>
>
>