I just looked into this a bit more and I believe it is not possible. I would have to put something in front of nginx (another nginx, or Varnish) - but that is a shame, since nginx's fastcgi_cache works so well. Best regards.
by ddutra - Nginx Mailing List - English
Hi guys, First of all, I am aware that this is not the place to get ngx_pagespeed support. I am only coming here because I am this close to achieving the performance I need, and this is where I have had the most success with my questions so far. Forgive me if it is out of place. Second, I would like to know if there is an NGINX workaround for this problem, not an ngx_pagespeed solution.
Maxim, Thanks for your time. It really works. Thanks a lot!
Hello guys, I would like to know if it is possible to have multiple fastcgi_cache_path / keys_zone definitions. If I host multiple websites and they all share the same keys_zone, it becomes a problem when I have to purge the cache: I cannot purge it for a single website, only for all of them. This is more out of curiosity than a real problem. Best regards.
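For what it's worth, nginx does accept several fastcgi_cache_path directives, each declaring its own keys_zone, which a server block then selects with fastcgi_cache. A minimal sketch (the paths, zone names, hostnames, and socket path below are made up for illustration):

```nginx
# One cache zone per site, so each site's cache can be purged
# independently by clearing its own directory.
fastcgi_cache_path /var/cache/nginx/site_a levels=1:2 keys_zone=site_a:10m inactive=60m;
fastcgi_cache_path /var/cache/nginx/site_b levels=1:2 keys_zone=site_b:10m inactive=60m;

server {
    server_name a.example.com;
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;
        fastcgi_cache site_a;   # this site's own zone
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;
    }
}

server {
    server_name b.example.com;
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;
        fastcgi_cache site_b;   # separate zone for the second site
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;
    }
}
```

With a layout like this, purging a single site amounts to removing the files under that site's cache path only, leaving the other zone untouched.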
As promised, here are my stats (VMware, 4 vCPUs):

siege -c50 -b -t240s -i 'http://127.0.0.1/test.html' (gzip off, pagespeed off)

Transactions: 898633 hits
Availability: 100.00 %
Elapsed time: 239.55 secs
Data transferred: 39087.92 MB
Response time: 0.01 secs
Transaction rate: 3751.34 trans/sec
Throughput:
Maxim, Thank you again. About my tests: FYI, I had HTTP auth turned off for them. I think you nailed the problem. This is new information for me. In production I have a standard website, which is PHP cached by fastcgi_cache. All static assets are served by nginx, so gzip_static will do the trick if I pre-compress them, and it will save a bunch of CPU. What about the ca
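The pre-compression step mentioned above can be sketched as a small shell snippet (the temp directory stands in for a real docroot; file names are illustrative). nginx with `gzip_static on;` will then serve the ready-made `.gz` files to clients that accept gzip, instead of compressing on every request:

```shell
# Demo in a temp dir, standing in for a real docroot such as /var/www/site.
DOCROOT=$(mktemp -d)
echo 'body { color: red; }' > "$DOCROOT/style.css"

# Pre-compress assets; -k keeps the original file so nginx can still serve
# the uncompressed copy to clients without Accept-Encoding: gzip,
# and -9 trades CPU now for the smallest file on disk.
find "$DOCROOT" -type f \( -name '*.css' -o -name '*.js' \) -exec gzip -k -9 {} \;

ls "$DOCROOT"
```

The corresponding nginx side is just `gzip_static on;` in the location serving these files; the compression cost is paid once at deploy time rather than per request.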
Well, I just looked at the results again, and it seems my throughput (MB per second) is not very far from yours. My bad. So the results are not that bad, right? What do you think? Best regards.
Hello Maxim, Thanks again for your considerations and help. My first siege tests against the EC2 m1.small production server were done from a Dell T410 with 4 CPUs x 2.4 GHz (Xeon E5620). It was after your considerations about 127.0.0.1 that I ran siege from the same server that runs nginx (production). The Debian machine I am using for the tests has 4 vCPUs and runs nothing else. Other v
Maxim Dounin Wrote:
-------------------------------------------------------
> The 15 requests per second for a static file looks utterly slow,
> and first of all you may want to find out what's a limiting factor
> in this case. This will likely help to answer the question "why
> the difference".
>
> From what was previously reported here - communication wi
Hello guys, First of all, thanks for nginx. It is very good and easy to set up, and it is kind of a joy to learn about. Two warnings: this performance thing is addictive - every bit you squeeze, you want more. And English is my second language, so pardon any mistakes. Anyway, I am comparing nginx performance for WordPress websites in different scenarios and something seems weird. S