From a shell on your nginx host you can run something like netstat -ant | egrep "ESTAB" to see all the open TCP connections. If you run that command under watch you will see it update every two seconds. FWIW, a long time ago I did a bunch of experiments with different load balancer strategies using both F5 LTM and nginx. They suggested that the simplest strategy, round-robin, was optimal…
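If it helps, a minimal sketch of that (the -n flag just makes watch's two-second default explicit):

    # refresh the list of established TCP connections every 2 seconds
    watch -n 2 'netstat -ant | egrep ESTAB'

    # or just track the count over time
    watch -n 2 'netstat -ant | egrep -c ESTAB'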
Gary,

This was interesting to read. There was one thing that wasn't obvious to me, however: what was the high-level problem that you were solving with this specific configuration?

Curiously,
Peter

> On Oct 30, 2020, at 3:16 PM, garycnew@yahoo.com <nginx-forum@forum.nginx.org> wrote:
>
> All:
>
> After reviewing the iptables chains workflow, I d…
I agree with the advice already given. It can also be useful to track the User-Agent header of web requests - both to understand who is trying to do what to your website, and then to start blocking on the basis of user agent. There may be some bots and spiders that are helpful or even necessary for your business.

Peter

> On Aug 24, 2020, at 2:54 PM, lists <lists@lazygranch.com> wrote…
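As a rough sketch of what user-agent blocking can look like in nginx (the patterns and variable name here are placeholders, not a recommended blocklist):

    # http context: flag suspicious user agents based on what your
    # own access logs justify blocking
    map $http_user_agent $block_ua {
        default    0;
        ~*badbot   1;
        ~*scraper  1;
    }

    server {
        # returning a status code is one of the safe uses of "if"
        if ($block_ua) {
            return 403;
        }
    }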
Why are you doing an nginx POC? To be blunt, nginx is the most powerful, flexible web server/reverse proxy/application delivery software product that exists. If it has an obvious competitor, it's the F5 BigIP LTM/WAF device - and F5 owns nginx. So what does this mean? It means that if you don't have hands-on experience with these two products then you can't really appreciate what is possible…
Why do you want to do this at all? What is the real underlying problem that you are attempting to solve?

> On Nov 11, 2019, at 8:29 AM, Kostya Velychkovsky <velychkovsky@gmail.com> wrote:
>
> I use Linux, and had a bad experience with Linux shaper (native kernel QoS mechanism - tc), it consumed a lot of CPU and worked unstable. So I rejected the idea to keep using it.
> …
Is your web server on the internet? If so, see what redbot shows. It's an amazing tool for debugging nuanced HTTP issues.

> On Oct 9, 2019, at 1:52 AM, Ken Wright <wizard@bnnorth.net> wrote:
>
> Sorry to be taking up so much bandwidth lately, but I'm seeing some
> weird behavior from nginx.
>
> When I enter my domain name in Firefox, I get a 404 s…
Sure is. Look at stale-if-error, stale-while-revalidate, proxy_cache_use_stale, proxy_cache_lock, etc.

Can you describe the use case a bit more? Why don't you want to cache this particular content? Is it that it's dynamic and a fresher version is always preferable, but stale is good enough in the event of an error? Or is there more to it than that? Sometimes people build sites that are "mor…
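For reference, a hedged sketch of the proxy-side pieces (the cache path, zone name, and upstream are placeholders):

    proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m;

    location / {
        proxy_pass  http://backend;
        proxy_cache my_cache;
        # serve a stale copy if the upstream errors out or the entry
        # is currently being refreshed
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        # collapse concurrent misses into a single upstream fetch
        proxy_cache_lock on;
    }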
How large is a large POST payload? Are the nginx and upstream systems physical hosts in the same data center? What are the approximate best-case / typical / worst-case latencies for the POST to reach the upstream?

> On Jun 22, 2018, at 2:40 PM, scott.oaks@oracle.com wrote:
>
> I have an nginx proxy through which clients pass a large POST payload to the upstream server. Sometimes, the u…
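These are the directives that usually matter for that scenario - a sketch only, with placeholder sizes and upstream name:

    location /upload {
        proxy_pass http://backend;
        client_max_body_size    100m;   # cap on accepted POST bodies
        client_body_buffer_size 1m;     # memory buffer before spooling to disk
        proxy_request_buffering off;    # stream the body upstream as it arrives
    }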
Your question raises so many other questions:

1. The static content - jpg, png, tiff, etc. It looks as though you are serving them from your backend and caching them. Are they also being built on demand dynamically? If not, then why cache them? Why not deploy them to nginx and serve them directly (see the sketch below)?
2. The text content - is this fragments of HTML that don't have names ending in .html?
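A minimal sketch of direct serving, with placeholder paths:

    # serve images straight off the filesystem instead of proxying them
    location ~* \.(jpg|jpeg|png|tiff)$ {
        root       /var/www/static;
        expires    30d;              # let browsers cache aggressively
        access_log off;
    }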
Sounds weird.

1. It doesn't make sense for your cache to be on a tmpfs share. Better to use a physical disk and allow Linux's page cache to do its job.
2. How big are the files in the larger cache? Min/median/max? (One way to check is sketched below.)

> On Jun 20, 2018, at 7:38 AM, rihad <nginx-forum@forum.nginx.org> wrote:
>
> Have you been able to solve the issue? We're having the same prob…
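A quick one-liner for that size question (the cache path is a placeholder; -printf needs GNU find):

    find /var/cache/nginx -type f -printf '%s\n' | sort -n |
        awk '{ a[NR] = $1 } END { print "min=" a[1], "median=" a[int((NR+1)/2)], "max=" a[NR] }'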
Is your client running on a different host than your server?

> On 8 Jun 2018, at 5:35 AM, prabhat <nginx-forum@forum.nginx.org> wrote:
>
> I am taking performance data on nginx.
> The client I used is h2load
>
> Requests per second using h2 is much higher than h2c. But I think it should
> not be, as h2 has the overhead of SSL.
> I have used the command
> ./h…
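For anyone reproducing this, a typical pair of h2load runs for the comparison might look like the following (host and counts are placeholders; an http:// URI makes h2load speak cleartext h2c):

    # 100,000 requests over 100 connections, h2 over TLS
    h2load -n 100000 -c 100 https://www.example.com/

    # the same load as cleartext h2c
    h2load -n 100000 -c 100 http://www.example.com/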
Don't. You should let every tier do its job. Just because nginx has GeoIP functionality doesn't mean that you should use it. If you are lucky enough to have a hardware load balancer in front of nginx then do the blocking there, so you reduce the load on your nginx. The Golden Rule of keeping websites up is "Protect the back-end." The best way to do that is to handle the request as soon as…
If you can dump your HTTP traffic you will probably see headers with names like:

X-Real-IP
X-Forwarded-For

> On May 23, 2018, at 11:25 PM, Frank Liu <gfrankliu@gmail.com> wrote:
>
> Since only the load balancer sees the client IP, it has to pass that information to nginx. You need to talk to your LB engineer and, depending on the type of LB, there are different w…
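Once you know which header your LB sets, a minimal sketch of consuming it (the trusted address range is a placeholder; this needs ngx_http_realip_module):

    # treat the LB's addresses as trusted and recover the real client IP
    set_real_ip_from  10.0.0.0/8;
    real_ip_header    X-Forwarded-For;
    real_ip_recursive on;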
5. Do you use keepalive?

> On May 20, 2018, at 2:45 PM, Peter Booth <peter_booth@me.com> wrote:
>
> Rate limiting is a useful but crude tool that should only be one of four or five different things you do to protect your backend:
>
> 1. browser caching
> 2. CDN
> 3. rate limiting
> 4. nginx caching reverse proxy
>
> What are your requests?…
Rate limiting is a useful but crude tool that should only be one of four or five different things you do to protect your backend:

1. browser caching
2. CDN
3. rate limiting
4. nginx caching reverse proxy

What are your requests? Are they static content or proxied to a back end? Do users log in? Is it valid for dynamic content built for one user to be returned to another?
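For item 3, the standard nginx building block looks roughly like this (zone name, size, and rates are placeholders to tune):

    # http context: one counter per client IP, 10 req/s steady rate
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            # absorb short bursts of up to 20 requests without queuing delay
            limit_req zone=perip burst=20 nodelay;
        }
    }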
Quintin,

I don't know anything about your context, but your setup looks overly simplistic. Here are some things that I learned painfully over a few years of supporting a high-traffic retail website:

1. Is this a website that's on the internet, and thus exposed to random queries from bots and scrapers that you can't control?
2. For your cache misses, how long best case, typical, and worst case doe…
I'm guessing that you have a script that keeps executing curl. What you can do is use curl -K ./fileWithListOfUrls.txt, and the one curl process will visit each URL in turn, reusing the socket (aka HTTP keep-alive). That said, curl isn't a great workload simulator and, in the long run, you can get better results from something like wrk2.

> On 27 Apr 2018, at 11:32 AM, mohan prakash via nginx…
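A hedged sketch of what that file can contain (curl's config-file syntax; the URLs are placeholders):

    # fileWithListOfUrls.txt - one entry per request; a single curl
    # process fetches them all over a kept-alive connection
    url = "https://www.example.com/page1"
    output = "/dev/null"
    url = "https://www.example.com/page2"
    output = "/dev/null"

    # then run:
    #   curl -K ./fileWithListOfUrls.txt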
Does this imply that different behavior *could* be achieved by first defining virtual IP addresses (additional private IPs defined at the OS) which were bound to the same physical NIC, and then defining virtual hosts that reference the different VIPs, in a similar fashion to how someone might configure a hardware load balancer?

> On Apr 16, 2018, at 9:32 AM, Maxim Douni…
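Spelled out, the idea would look something like this (addresses, device, and hostnames are placeholders):

    # add two private VIPs to the same NIC at the OS level
    ip addr add 192.0.2.10/24 dev eth0
    ip addr add 192.0.2.11/24 dev eth0

    # then bind one server block to each VIP in nginx.conf
    server {
        listen      192.0.2.10:80;
        server_name www.example.com;
    }
    server {
        listen      192.0.2.11:80;
        server_name app.example.com;
    }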
So under the covers things are rarely as pretty as one hopes. In the example quoted, the influxdb instance was actually a pool of different pre-1.0 instances, each of which had different bugs or fixes. The log script actually pushed 15:30 worth of data to intentionally overlap. The most surprising observation was that substantially more than 50% of the web traffic was from bots, scrapers, test too…
Just to be clear, I'm not contrasting active synthetic testing with monitoring resource consumption. I think that the highest-value variable is $, or those variables that have the highest correlation to profit. The real customer experience is probably #2 after sales. Monitoring things like active connections, cache hit ratios, etc. is important for understanding "what is normal?" It's easy for our…
Jeff,

There are some very good reasons for doing things in what sounds like a heavily inefficient manner. The first point is that there are some big differences between application code/business logic and monitoring code: business logic, or what your nginx instance is doing, is what makes you money, and maximizing its uptime is critical. Monitoring code typically has a different release cycle; often it wi…
John,

I think that you need to understand what is happening on your host throughout the duration of the test - specifically, what is happening with the TCP connections. If you run netstat, grep for tcp, and do this in a loop every, say, five seconds, then you'll see how many connections get created at peak. If the thing you are testing exists in production then you are lucky. You can do the same in…
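A minimal version of that loop:

    # sample TCP connection states every five seconds with a timestamp
    while true; do
        date
        netstat -ant | awk 'NR > 2 { print $6 }' | sort | uniq -c
        sleep 5
    done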
You're correct that this is the DDoS throttling. The real question is: what do you want to do? JMeter with zero think time is an imperfect load generator - and this is only one complication. The bigger one is the open/closed model issue. With your design you have back pressure from your system under test to your load generator. A JMeter virtual user will only ever issue a request when the prior one…
I'd use wrk2 or httperf to recreate a spike that hits an http endpoint. If you don't see a spike with http but see one with https, then you know SSL is one factor. It's also interesting that this happens at around 23,000 connections. If you reduce the worker count to one or two and still see max connections around 23,000, then it looks like another factor is TCP resources.

> On Mar 19, 20…
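For example, with wrk2 (its -R flag holds a constant arrival rate, so the comparison isn't distorted by back pressure; host and numbers are placeholders):

    # 2 threads, 100 connections, constant 1000 req/s for 60s, plain http
    wrk -t2 -c100 -d60s -R1000 http://www.example.com/

    # the same run over https; a spike only here points at TLS
    wrk -t2 -c100 -d60s -R1000 https://www.example.com/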
Two questions:

1. How are you measuring memory consumption?
2. How much physical memory do you have on your host?

Assuming that you are running on Linux, can you use pidstat -r -t -u -v -w -C "nginx" to confirm the processes' memory consumption, and cat /proc/meminfo to view a detailed description of how memory is being used on the entire host?

> On Mar 14, 2018, at 1:05 PM, Matthew S…
Suggestion: define two more locations - one that proxies www.example.com and another that proxies staging.example.com. If both locations work then your problem is probably mirroring. If one doesn't work then the issue is your configuration and not mirroring. Either way you have reduced the size of your problem space.

Peter

> On Mar 13, 2018, at 5:58 PM, Kenny Meyer <…
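Something like this (the location names are arbitrary placeholders):

    # sanity-check locations, defined alongside the mirrored one
    location /check-prod/ {
        proxy_pass https://www.example.com/;
    }
    location /check-staging/ {
        proxy_pass https://staging.example.com/;
    }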
This is the point where I would jump to using the debug log. You need to build your nginx binary with the --with-debug switch and change the log level to debug in nginx.conf. Debug generates a *huge* amount of logs but it really is invaluable. I would also want to double-check what is actually happening, and use ss or tcpdump to confirm that no request is sent to your staging destination. I'm as…
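In outline (add whatever other configure options your current build uses - nginx -V lists them):

    # rebuild with the debug module compiled in
    ./configure --with-debug
    make && make install

    # nginx.conf: raise the error log level to debug
    error_log /var/log/nginx/error.log debug;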
I agree that avoiding if is a good thing. But avoiding duplication isn't always good. Have you considered a model where your configuration file is generated with a templating engine? The input file that you modify to add/remove/change configurations could be free of duplication, but the conf file that nginx reads could be concrete and verbose.
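Even without a full templating engine, a crude shell sketch of the idea (hostnames and paths are placeholders; nginx then reads the generated file via an include):

    # stamp out one concrete, verbose server block per hostname;
    # the input list stays free of duplication
    for host in www.example.com api.example.com; do
        printf 'server {\n    listen 80;\n    server_name %s;\n    root /var/www/%s;\n}\n' "$host" "$host"
    done > servers.generated.conf

    # nginx.conf:
    #   include servers.generated.conf;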
This discussion is interesting, educational, and thought-provoking. Web architects only learn "the right way" by first doing things "the wrong way" and seeing what happens. Attila and Valery asked questions that sound logical, and I think there's value in exploring what would happen if their suggestions were implemented. First caveat - nginx is deployed in all manner of different scenarios…
100GB of cached files sounds enormous. What kinds of files are you caching? How large are they? How many do you have? If you look at your access log, what hit rate is your cache seeing?

> On Feb 16, 2018, at 3:16 AM, Andrzej Walas <nginx-forum@forum.nginx.org> wrote:
>
> After these inactive logs I have other logs:
> 11371#0: worker process 24870 exited on si…
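To answer the hit-rate question, one approach is to log $upstream_cache_status and tally it (the log format name and path are placeholders):

    # http context: record the cache verdict for each request
    log_format cachelog '$upstream_cache_status "$request"';
    access_log /var/log/nginx/cache.log cachelog;

    # then summarize HIT/MISS/EXPIRED/... counts from the log
    awk '{ print $1 }' /var/log/nginx/cache.log | sort | uniq -c | sort -rn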