You can get all that, and a lot more, if you build a debug-enabled version of nginx.

> On Jan 19, 2017, at 11:49 AM, Nikolaos Milas <nmilas@noa.gr> wrote:
>
> Hello,
>
> I am running nginx 1.10.2 on CentOS 6.
>
> I am trying to configure a new (virtual) website and I am having problems. I would like to be able to log details of the evaluation […]

by pbooth - Nginx Mailing List - English
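For reference, a debug build requires the `--with-debug` configure flag, and the debug log level is then enabled via `error_log`. A minimal sketch (file paths and the subnet are illustrative):

```nginx
# Build step (shell, outside nginx.conf):
#   ./configure --with-debug && make && make install

# nginx.conf: log location/rewrite evaluation at debug level
error_log /var/log/nginx/error.log debug;

events {
    # optionally restrict verbose debug logging to specific clients
    debug_connection 192.168.1.0/24;   # example subnet
}
```

Without `--with-debug` compiled in, the `debug` log level is silently unavailable, which is a common gotcha.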
I'm curious: why are you using tmpfs for your cache store? With fast local storage being so cheap, why not devote a few TB to your cache? When I look at the TechEmpower benchmarks I see that OpenResty (an nginx build that comes with lots of Lua value-add) can serve 440,000 JSON responses per second with 3 ms latency. That's on five-year-old E7-4850 Westmere hardware at 2.0 GHz, with 10G NICs. […]
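A disk-backed cache of the size suggested above is configured with `proxy_cache_path`; a sketch, with the zone name, paths, sizes, and upstream all illustrative:

```nginx
# http context -- names and sizes are made up for illustration
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=big_cache:512m
                 max_size=2000g inactive=7d use_temp_path=off;

server {
    location / {
        proxy_cache big_cache;
        proxy_pass  http://backend;   # hypothetical upstream
    }
}
```

`max_size` bounds on-disk usage and `inactive` evicts entries not hit within the window, so a multi-TB cache manages itself without tmpfs RAM pressure.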
All hosts have characteristic stalls and blips, but the scale of this issue can vary 100x depending on the host's configuration. You can get some data about these stalls using Solarflare's sysjitter utility or Gil Tene's jHiccup.

> On Jan 10, 2017, at 12:46 PM, Руслан Закиров <ruz@sports.ru> wrote:
>
> The "upstream timeout ... while connecting […]
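The idea behind those tools can be sketched in a few lines of shell: repeatedly read the clock and record the worst gap between samples. This is only a crude illustration, not a substitute for the real tools, which time a spinning thread at far finer resolution; here every sample also pays the cost of forking `date`.

```shell
# Crude scheduling-hiccup probe in the spirit of sysjitter/jHiccup (sketch only)
prev=$(date +%s%N)
end=$(( prev + 200000000 ))        # sample for roughly 200 ms
max_gap=0
while :; do
  now=$(date +%s%N)
  gap=$(( now - prev ))
  [ "$gap" -gt "$max_gap" ] && max_gap=$gap
  prev=$now
  [ "$now" -ge "$end" ] && break
done
echo "worst observed gap: ${max_gap} ns"
```

On a quiet host the worst gap stays small; on a badly tuned one (power management, IRQ storms, noisy neighbors) it can jump by orders of magnitude, which is exactly the 100x spread mentioned above.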
> Both NIC supports the speed of 1000Mb/s

How do you know? Your kernel or NIC config might be limiting you. iperf, sfnettest, or Etherate will show you the maximum possible bandwidth at the TCP or IP layer. If it's under 700 Mbit/s then you know to focus on the NIC and OS. If it's above 900 Mbit/s then the problem is in your nginx or your test workload.
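The bisecting rule of thumb above can be written down as a tiny helper; the thresholds are the ones from the post, and the function name is made up:

```shell
# Classify a measured TCP throughput (in Mbit/s) on a 1G link,
# using the rough 700/900 thresholds from the post above.
diagnose_1g() {
  mbps=$1
  if [ "$mbps" -lt 700 ]; then
    echo "focus on the NIC and OS"
  elif [ "$mbps" -gt 900 ]; then
    echo "focus on nginx config or the test workload"
  else
    echo "inconclusive -- retest"
  fi
}

diagnose_1g 600   # the reported peak from the earlier thread
```

Feed it the number iperf reports; a 600 Mbit/s reading points at the NIC/OS side first.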
You said that your test case peaks at 600 Mbit/s. Your first step should be to bisect the problem, to see if you're limited by your hardware+OS or by your test + nginx configuration. The easiest way is to install Solarflare's free network test utility from the support section of their website. After that, to dig further into web-specific factors it can be worth installing the (large) TechEmpower web fr[…]
>> On Saturday 18 June 2016 14:12:31 B.R. wrote:
>> There is no downside on the server application I suppose, especially since,
>> as you recalled, nginx got no trouble for it.
>>
>> One big problem is, there might be socket exhaustion on the TCP stack of
>> your front-end machine(s). Remember a socket is defined by a triple […]
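The exhaustion ceiling implied by that triple is easy to put a number on: concurrent outbound connections from one front-end to one upstream (fixed source IP, destination IP and port) are bounded by the ephemeral port range. The 32768–60999 values below are the common Linux defaults, an assumption here; check `net.ipv4.ip_local_port_range` on your own box:

```shell
# Concurrent outbound sockets to a single (dst-ip, dst-port) are bounded
# by the ephemeral source-port range; typical Linux defaults shown.
low=32768
high=60999
ports=$(( high - low + 1 ))
echo "max concurrent sockets to one upstream: ${ports}"
```

Once sockets also linger in TIME_WAIT after close, the usable headroom is smaller still, which is how a busy proxy runs out well before "thousands of connections" sounds alarming.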
I'm wondering if someone can help with the following. I have a Java app where I'm using nginx as a caching reverse proxy. I have a location defined for each of five distinct JSPs, with different cache configurations and custom keys for each. Some locations are using:

proxy_ignore_headers Set-Cookie
proxy_pass_header off
proxy_hide_header Set-Cookie

to ensure that responses that set cookies can be safely ca[…]
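For reference, a location along the lines described might look like the sketch below (location path, zone name, and upstream are illustrative). Note that `proxy_pass_header` takes a header field name as its argument, so `proxy_pass_header off` as quoted is not a valid directive; the usual pair for cookie-safe caching is `proxy_ignore_headers` plus `proxy_hide_header`:

```nginx
# Illustrative location -- upstream name and cache zone are made up
location /report.jsp {
    proxy_cache       app_cache;
    proxy_cache_key   "$scheme$host$uri$is_args$args";
    proxy_cache_valid 200 5m;

    # cache responses even when the backend sets a cookie,
    # and strip the cookie so it is not replayed to other clients
    proxy_ignore_headers Set-Cookie;
    proxy_hide_header    Set-Cookie;

    proxy_pass http://java_backend;
}
```

Without `proxy_ignore_headers Set-Cookie`, nginx refuses to cache any response carrying a Set-Cookie header at all.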