I think that part of the power and challenge of using nginx’s caching is that there are many different ways of achieving the same or similar results, but some of the approaches will be more awkward than others. I think it might help if you could express the issue you are trying to solve, as opposed to the mechanism you want to use to solve it, for exam… — by pbooth - Nginx Mailing List - English
The TechEmpower web framework benchmark is a set of six micro-benchmarks implemented with over 100 different web frameworks. It’s free, easy to set up, and comes as prebuilt Docker containers. Sent from my iPhone > On Jan 26, 2018, at 2:27 PM, leeand00 <nginx-forum@forum.nginx.org> wrote: > > Does anyone have a suggestion about a simple, free, open source web app, > with a d…
So some questions: What hardware is this? Are they 16 “real” cores or hyper-threaded cores? Do you have a test case set up so you can readily measure the impact of a change? Many tunings that involve NUMA will only show substantial results in specific apps. What does cat /proc/cpuinfo | tail -28 return? When you say maxed out, do you literally mean that cores 6,7 show 100% CPU utilization? > O…
Perhaps you should use pidstat to validate which processes are running on the two busy cores? > On Jan 11, 2018, at 6:25 AM, Vlad K. <nginx-ml@acheronmedia.hr> wrote: > > On 2018-01-11 11:59, Lucas Rolff wrote: >> Now, in your case with php-fpm in the mix as well, controlling that >> can be hard ( not sure if you can pin php-fpm processes to cores ) – >> but fo…
Wade, This reminds me of something I once saw with an application that was making web service requests to FedEx. So are you saying that the response times are bimodal? That you either get a remote response within a few seconds or the request takes more than 60 seconds, and that you have no 20sec, 30sec, 40sec requests? And, if so, do those 60+ sec requests ever get a healthy response? …
Wade, I think that you are asking “hey, why isn’t nginx behaving identically on macOS and Linux when you create a servlet that invokes Thread.sleep(300000) before it returns a response?” Am I reading you correctly? A flippant response would be to say “because OS X and Linux are different OSes that behave differently.” It would probably help us if you explained a little more about your te…
Are you running Apache Bench on the same or a different host? How big is the JavaScript file? What is your ab command line? If your site is to be statically published (which is a great idea), why are you using SSL anyway? > On 4 Jan 2018, at 6:12 PM, eFX News Development <dev@efxnews.com> wrote: > > Hello! Thanks for your response. I'm using apache bench for the tests, simply hitting…
Take a look at the stream directive in the nginx docs. I’ve used that to proxy an https connection to a backend when I needed to make use of preexisting SSO. > On Dec 6, 2017, at 5:47 PM, Nicolas Legroux <nuco2005@gmail.com> wrote: > > Hi, > > I'm wondering if it's possible to do what's described in the mail subject ? > I've had a look through Inte…
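The stream-module approach described above can be sketched roughly like this (a minimal fragment; the backend address and listen port are placeholders, not from the original post):

```nginx
# TCP passthrough: nginx forwards the raw TLS byte stream to the backend,
# so the backend terminates SSL itself and any pre-existing SSO keeps working.
stream {
    upstream secure_backend {
        server 10.0.0.5:443;   # placeholder backend address
    }

    server {
        listen 8443;           # placeholder listen port
        proxy_pass secure_backend;
    }
}
```

Note that because nginx never decrypts the traffic in this mode, it cannot inspect or rewrite HTTP headers on the way through.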
First step: use something like http://www.kloth.net/services/nslookup.php to check the IP addresses returned for all six names (with and without www for the three domains). Do these look correct? > On Dec 6, 2017, at 5:27 PM, qwazi <nginx-forum@forum.nginx.org> wrote: > > I'm new to nginx but needed a solution like this. It's very cool but I'm a > newbie…
I’ve used the equivalent of nodelay with a rate of 2000 req/sec per IP when a retail website was being attacked by hackers. This was in combination with microcaching and a CDN to protect the back end and ensure the site could continue to function normally. > On Dec 4, 2017, at 1:11 AM, Peter Booth <peter_booth@me.com> wrote: > > I’m a situation where you are…
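The microcaching mentioned above can be sketched like this (zone names, paths, and the 1-second validity are illustrative, not taken from the original post):

```nginx
# Cache successful upstream responses for just 1 second ("microcaching").
# During an attack, thousands of identical requests per second collapse
# into roughly one backend request per URL per second.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=micro:10m max_size=1g;

server {
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;
        proxy_cache_use_stale updating timeout error;
        proxy_cache_lock on;         # only one request repopulates a missing entry
        proxy_pass http://backend;   # placeholder upstream
    }
}
```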
I mean a situation where you are confident that the workload is coming from a DDOS attack and not a real user. For this example the limit is very low, and nodelay wouldn’t seem appropriate. If you look at the TechEmpower benchmark results you can see that a single VM should be able to serve over 10,000 requests per sec. > On Dec 3, 2017, at 4:08 PM, Gary <lists@l…
So what exactly are you trying to protect against? Against “bad people” or “my website is busier than I think I can handle”? > On Nov 30, 2017, at 6:52 AM, "tongshushan@migu.cn" <tongshushan@migu.cn> wrote: > > a limit of two connections per address is just an example. > What does 2000 requests mean? Is that per second? yes, it's QPS.
There are many things that *could* cause what you’re seeing - say at least eight. You might be lucky and guess the right one - but it's probably smarter to see exactly what the issue is. Presumably you changed your upstream webservers to do this work, replacing SSL with unencrypted connections? Do you have sar data showing the number of TCP connections before and after the change? Perhaps every request is negotiating…
Can you count the number of files that are in your cache and whether or not it's changing with time? Then compare that with the number of unique cache keys (from your web server log). When the server starts returning a MISS, does it only do this for newer objects that haven’t been requested before? Does it happen for any objects that had previously been returned as a HIT? > On Nov 28, 2017, at…
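One way to compare cache keys against HIT/MISS behaviour over time is to log nginx's cache verdict with every request (a sketch; the log format name and file path are arbitrary):

```nginx
# Log the cache verdict (HIT, MISS, EXPIRED, BYPASS, ...) for every request,
# so the access log can later be grepped to see which objects flip from
# HIT back to MISS and when.
log_format cache_log '$remote_addr [$time_local] "$request" '
                     '$status $upstream_cache_status';

access_log /var/log/nginx/cache.log cache_log;
```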
FWIW - I have found rate limiting very useful (with hardware LB as well as nginx) but, because of the inherent burstiness of web traffic, I typically set my threshold to 10x or 20x my expected “reasonable peak rate.” The rationale is that this is a very crude tool, just one of many that need to work together to protect the backend from both reasonable variations in workload and malicious use.
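That “10x or 20x the expected peak” policy might look like this in nginx (all numbers and the zone name are illustrative):

```nginx
# Suppose a reasonable per-IP peak is ~10 req/s; set the limit at 20x that.
limit_req_zone $binary_remote_addr zone=perip:10m rate=200r/s;

server {
    location / {
        # burst absorbs the normal spikiness of web traffic; nodelay serves
        # the burst immediately instead of queueing it.
        limit_req zone=perip burst=400 nodelay;
    }
}
```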
You need to understand, step-by-step, exactly what is happening. Here is one (of many) ways to do this:
1. Open the Chrome browser
2. Right click on the background and select Inspect; this will open the developer tools pane
3. Select the “Network” tab, which shows you the HTTP requests issued for the current page
4. Select the check-box “Preserve log”, which means that prior pages will still be…
This is true in general, but with a single exception that I know of. It’s common for nginx to proxy requests to a Rails app or Java app on an app server, and for the app server to implement the session logic. This is an OpenResty session implementation that sits within the nginx process: https://github.com/bungle/lua-resty-session > On Nov 10,…
I think that this discussion touches on another question: are millisecond timings still sufficient when monitoring web applications? I think that in 2017, with the astounding increases in processing power we have seen in the last decade, millisecond timings are too imprecise. The cost of capturing a timestamp in Linux on recent hardware is about 30 nanos, and the precision of such a timestamp…
There are a few approaches to this, but they depend upon what you’re trying to achieve. Are your requests POSTs or GETs? Why do you have the mirroring configured? If the root cause is that your mirror site cannot support the same workload as your primary site, what do you want to happen when your mirror site is overloaded? One approach, using nginx, is to use rate limiting and connection limi…
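Assuming the mirroring in question uses nginx's own ngx_http_mirror_module, rate limiting the mirror traffic could be sketched like this (upstream names and numbers are placeholders):

```nginx
# Mirror each request to a secondary backend, but rate-limit the mirror
# subrequests so an overloaded mirror site simply drops the excess copies
# without affecting responses from the primary.
limit_req_zone $binary_remote_addr zone=mirror_limit:10m rate=50r/s;

server {
    location / {
        mirror /mirror;
        proxy_pass http://primary;                    # placeholder
    }

    location = /mirror {
        internal;
        limit_req zone=mirror_limit burst=20;
        limit_req_status 429;                         # excess mirror copies fail quietly
        proxy_pass http://mirror_site$request_uri;    # placeholder
    }
}
```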
Agree. I work as a performance architect, specializing in improving the performance of trading applications and high-traffic web sites. When I first began tuning Apache (and then nginx) I realized that the internet was full of “helpful suggestions” about why you should set configuration X to this number. What took me more than ten years to learn was that 95% of these tips are useless, beca…
So this message can be interpreted: > NOTICE: child 25826 exited with code 0 after 864.048588 seconds > from start The code 0 means that the child exited normally, 864 seconds after it had started. In other words, it chose to die (probably after serving 800 or 2500 requests). Now if your access.log indicates *which* php process served a request, then you should be able to work out whether…
Agree. Can you email me offline? I might have a few ideas on how to assist. Peter peter _ booth @ me.com > On Oct 16, 2017, at 3:55 PM, agriz <nginx-forum@forum.nginx.org> wrote: > > Sir, > > Thank you for your reply. > > This is a live server. > It is an NPO (non-profit organisation). > I pay for the server and maintaining it. We can't afford an admin. …
You said this: > On Oct 16, 2017, at 3:30 PM, Peter Booth <peter_booth@me.com> wrote: > > If I change the values, it hangs with 3k or 5k visitors. > This one handles 5k to 8k What hangs? The host, the nginx worker processes, the PHP, or the MySQL? You need to capture some diagnostic information over time to see what's going on here, e.g. ps, netstat, sar -A, pidstat -h -r…
Advice - instead of tweaking values, first work out what is happening and locate the bottleneck, then try adjusting things when you have a theory. First question you need to answer: for your test, is your system as a whole overloaded? As in, for the duration of the test, is the #req/sec supported constant? Is the request time shown in the nginx log increasing? If you capture the output of netstat -ant | grep…
Sounds like the problem is that you don’t have nginx configured to enforce canonical URLs. What do I mean by this? Imagine that every page on the site has one and only one “correct URL”. So someone might type http://www.mydomain.com http://mydomain.com http://www.mydomain.com/index.html and expect to see the same page. A site that enforces canonical URLs would do a redirect from the no…
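Enforcing a canonical URL in nginx is typically a pair of server blocks like the following (domain names are the example ones from the post; the choice of www as canonical is an assumption):

```nginx
# Redirect every non-canonical host (here, the bare domain) to the one
# canonical form, so each page has exactly one URL.
server {
    listen 80;
    server_name mydomain.com;          # non-canonical host
    return 301 http://www.mydomain.com$request_uri;
}

server {
    listen 80;
    server_name www.mydomain.com;      # the canonical host
    # ... actual site configuration goes here ...
}
```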
Why do you want to "realize a distributed caching layer based on disk-speed and storage”? Providing that you are running nginx on a healthy host running Linux, your HDD cache will be as fast (or nearly the same speed) as your SSD cache. This is because the cached file will be written through the Linux page cache, just as reads will return the file from the Linux page cache and not touch either of the d…
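In practice this means a plain proxy_cache_path on the spinning disk is usually sufficient (a sketch; the path, sizes, and zone name are illustrative):

```nginx
# The cache lives on an HDD, but hot objects are served from the Linux
# page cache in RAM, so HDD vs SSD rarely matters for repeated reads.
proxy_cache_path /hdd/nginx-cache levels=1:2
                 keys_zone=disk_cache:50m
                 max_size=20g inactive=7d;
```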
I can say that Maxim's idea of using TCP proxying with the stream module is very simple to configure - just a couple of lines - and tremendously useful. > On Oct 4, 2017, at 3:24 PM, pankaj@releasemanager.in <nginx-forum@forum.nginx.org> wrote: > > Maxim, > > totally agree on your statement and options. > > But still I was wondering if there's…
I found it useful to define a dropCache location that will delete the cache on request. I did this with a shell script that I invoked with Lua (via OpenResty), but I imagine there are multiple ways to do this. > On Oct 4, 2017, at 11:39 AM, Maxim Dounin <mdounin@mdounin.ru> wrote: > > Hello! > >> On Wed, Oct 04, 2017 at 06:01:35AM -0400, Dingo wrote:…
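A Lua-free alternative (a different technique from the shell-script approach above, using only stock nginx) is to let a secret header force a refresh of a single cache entry rather than deleting it; the header name here is made up for this sketch:

```nginx
# Requests carrying the secret header skip the cache lookup but still
# store the fresh response, effectively refreshing that one entry.
location / {
    proxy_cache my_cache;                        # placeholder zone name
    proxy_cache_bypass $http_x_cache_refresh;    # hypothetical header
    proxy_pass http://backend;                   # placeholder upstream
}
```

A refresh would then be something like: curl -H 'X-Cache-Refresh: 1' http://example.com/page.html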
Pankaj, I can’t understand exactly what you are saying. But I’m confident that there will be a way for nginx to work for you, provided you ask the question in a clear, unambiguous fashion. Is your application behind nginx, such that nginx is POSTing to the app? Or is your application making the request to nginx, which is in front of another back-end? If so, what is the back-end? How much data…
Lots of questions: What are the upstream requests? Are you logging hits and misses for the cache, and what's the hit ratio? What size are the objects that you are serving? How many files are there in your cache? What OS and what hardware are you using? If it's Linux, can you show the results of the following: cat /proc/cpuinfo | tail -30 cat /proc/meminfo > On Sep 20, 2017, at…