Re: slow streaming due to high %utilization !!

B.R.
June 05, 2013 02:48AM
Hello,

On Wed, Jun 5, 2013 at 1:56 AM, shahzaib shahzaib <shahzaib.cb@gmail.com> wrote:

> Hello,
>
> We're using nginx-1.2.8 to stream large files (around 1 GB each), but we are
> seeing slow streaming along with high hard-drive utilization, as reported by
> "iostat -x -d 3":
>
> Device:  rrqm/s  wrqm/s    r/s   w/s    rsec/s   wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
> sdc        0.00  444.00  78.67  7.00  18336.00  3608.00    256.16      3.28  39.17  11.02  94.40
> sdb        0.00    0.00   0.00  0.00      0.00     0.00      0.00      0.00   0.00   0.00   0.00
> sda        0.00    0.00   0.00  0.00      0.00     0.00      0.00      0.00   0.00   0.00   0.00
>
> 16 GB RAM
> 8x CPU E31240
> SATA HDD
>
The best way to serve large files is to use asynchronous transfers, which avoid
blocking on disk requests. To do that, have a look at the aio directive
<http://nginx.org/en/docs/http/ngx_http_core_module.html#aio>. The default
behavior uses sendfile, which sends files synchronously. A configuration sketch
follows after the two notes below.

Note 1: On Linux, aio activated on its own only works for write requests. To get
asynchronous read requests (which matches your use case, since you stream files
out, i.e. you read them), you also need to activate the directio directive
<http://nginx.org/en/docs/http/ngx_http_core_module.html#directio>.
Refer to the aio documentation (link above) for details.

Note 2: Using directio automatically disables sendfile. If you don't use
directio, you will need to deactivate sendfile manually.
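
To illustrate, here is a minimal sketch of how those directives could fit
together; the location path and the 4m directio threshold are assumptions for
illustration, not values taken from your setup:

    # hypothetical location serving the large files; adjust path and sizes to your case
    location /videos/ {
        sendfile  off;   # deactivate synchronous sendfile explicitly
        aio       on;    # asynchronous file I/O
        directio  4m;    # reads of files >= 4m bypass the page cache (and disable sendfile anyway)
    }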

You might also want to have a look at output_buffers (once again, see the aio
directive documentation) to improve buffering of large files.
I have read that reasonably large buffers (>= 4 MiB) are usually advised for
big files, since they split the file into fewer parts. You may want to run some
tests for your specific use case to find the most appropriate values.
The number of buffers also has an impact: too many buffers consume RAM for
nothing, whereas too few will slow down requests by creating an artificial
bottleneck.
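
As a starting point only (the 2 x 4 MiB values below are an assumption to
experiment with, not a recommendation tuned to your workload):

    output_buffers 2 4m;   # 2 buffers of 4 MiB each; benchmark and adjust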

> Following is the main nginx configuration:
>
> http {
>     include mime.types;
>     default_type application/octet-stream;
>     #tcp_nopush on;
>     client_body_buffer_size 128k;
>     client_header_buffer_size 128k;
>     client_max_body_size 1200M;
>     keepalive_timeout 10;
>     ignore_invalid_headers on;
>     client_header_timeout 3m;
>     client_body_timeout 3m;
>     send_timeout 3m;
>     reset_timedout_connection on;
>     #gzip on;
>     access_log off;
>     include /usr/local/nginx/conf/vhosts/*;
> }
>

By default, nginx sets gzip to off. Depending on the content you stream, and
since you seem to have decent CPU headroom (you didn't provide information on
CPU usage, so check whether there is room for gzip compression), you might
want to activate gzip to reduce the amount of data transferred, thus lowering
the transmission time at the cost of extra CPU work... provided your content
actually compresses well (once again, it depends on your specific use case).
No need for it if the benefit is 1% ;o)

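If you do decide to test it, a minimal sketch, assuming you also serve some
compressible text assets next to the videos (already-compressed video
containers usually gain nothing from gzip):

    gzip            on;
    gzip_comp_level 2;                             # low level to keep the CPU cost down
    gzip_types      text/plain application/json;   # only types that actually compress
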
>
>
> Would be very thankful if someone could help me with a better nginx
> configuration to reduce HDD utilization.
>
> Best Regards..
>
>

No other ideas from me.

Hope I helped,
---
*B. R.*
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx