Maxim Dounin
October 04, 2013 11:12AM
Hello!

On Fri, Oct 04, 2013 at 09:43:05AM -0400, ddutra wrote:

> Hello Maxim,
> Thanks again for your considerations and help.
>
> My first siege tests against the EC2 m1.small production server were done
> using a Dell T410 with 4 CPUs x 2.4 GHz (Xeon E5620). It was after your
> considerations about 127.0.0.1 that I did the siege from the same server that
> is running nginx (production).
>
> The Debian machine I am using for the tests has 4 vCPUs and runs nothing
> else. Other virtual machines run on this server, but nothing too heavy. So I
> am "sieging" from a server that has way more power than the one running
> nginx. And I am sieging a static HTML file on the production server that
> is 44.2 KB.
>
> Let's run the tests again. This time I'll keep an eye on the siege CPU usage
> and overall server load using htop and the VMware vSphere client.
>
> siege -c40 -b -t120s -i 'http://177.71.188.137/test.html' (against
> production)
>
> Transactions: 2010 hits
> Availability: 100.00 %
> Elapsed time: 119.95 secs
> Data transferred: 28.12 MB
> Response time: 2.36 secs
> Transaction rate: 16.76 trans/sec
> Throughput: 0.23 MB/sec
> Concurrency: 39.59
> Successful transactions: 2010
> Failed transactions: 0
> Longest transaction: 5.81
> Shortest transaction: 0.01

If this was a 44k file, this likely means you have the gzip filter
enabled, as 28.12 MB / 2010 hits is roughly 14 KB per response.

Having gzip enabled might indeed result in relatively high CPU
usage, and may produce numbers like these in CPU-constrained cases.

For static HTML files, consider using gzip_static; see
http://nginx.org/r/gzip_static. Also consider setting
gzip_comp_level back to a lower value if you've changed it from
the default (1).
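
A minimal sketch of what that might look like in your http{} block
(untested, assuming nginx is built with the gzip_static module and
that you keep a pre-compressed test.html.gz next to test.html for
it to pick up):

    # serve a pre-compressed .gz copy instead of compressing on every request
    gzip_static on;

    # keep on-the-fly compression cheap for everything else
    gzip on;
    gzip_comp_level 1;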

And, BTW, I also tried to grab your exact test file from the
above link, and it asks for a password. Please note that checking
passwords is an expensive operation, and can be very expensive
depending on the password hash algorithm you use. If you are
testing against a password-protected file, that may be another
source of slowness.
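
If the protection is basic auth and only the benchmark needs to
bypass it, a hypothetical location along these lines would take the
password check out of the measurement (the exact path is just an
example, adjust to your setup):

    # auth_basic off overrides basic auth inherited from an outer block,
    # so the benchmark does not pay for password hashing on every request
    location = /test.html {
        auth_basic off;
    }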

Just for reference, here are the results from my virtual machine
for a 45k file, with gzip enabled:

Transactions: 107105 hits
Availability: 100.00 %
Elapsed time: 119.30 secs
Data transferred: 1254.22 MB
Response time: 0.04 secs
Transaction rate: 897.80 trans/sec
Throughput: 10.51 MB/sec
Concurrency: 39.91
Successful transactions: 107105
Failed transactions: 0
Longest transaction: 0.08
Shortest transaction: 0.01

> Siege CPU usage was around 1-2% during the entire 120s.

Please note that the CPU percentage printed by siege might be
incorrect and/or confusing for various reasons. Make sure to look
at _idle_ time on the server instead.

> On the other hand, the EC2 m1.small (production nginx) was at 100% the entire
> time. All nginx.

Ok, so you are CPU-bound, which is good. And see above
for possible reasons.

[...]

> I believe the machine I just ran this test on is more powerful than our
> notebooks. Average CPU during the tests was 75%, 99% of it consumed by nginx.
> So it can only be something in the nginx config file.
>
> Here is my nginx.conf
> http://ddutra.s3.amazonaws.com/nginx/nginx.conf
>
> And here is the virtual host file I am fetching this test.html page from; it
> is the default virtual host and the same one I use for status consoles, etc.
> http://ddutra.s3.amazonaws.com/nginx/default
>
>
> If you could please take a look. There is a huge difference between your
> results and mine. I am sure I am doing something wrong here.

The "gzip_comp_level 6;" in your config mostly explains things.
With gzip_compl_level set to 6 I get something about 450 r/s on my
notebook, which is a bit closer to your results. There is no need to
compress pages that hard - there is almost no difference in the
resulting document size, but there is huge difference in CPU time
required for compression.

Pagespeed is also likely to consume lots of CPU power, and
switching it off should be helpful.
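
Assuming the module in question is ngx_pagespeed, disabling it for
the benchmark is roughly a one-line change in the server{} block:

    # skip on-the-fly page optimization while measuring raw nginx throughput
    pagespeed off;

Together with a lower gzip_comp_level that should bring the numbers
noticeably closer to what a static file can do on that instance.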

--
Maxim Dounin
http://nginx.org/en/donation.html
