What is your ultimate goal? You say that you want to replay 0.05% of traffic into a test environment. Do you want to capture real-world data on a one-off or an ongoing basis? You say that this particular proxy is very busy. How busy? Is it hosted on a physical host or a virtual machine? If physical, do you own the physical environment? If you do, then you can capture the (entire) content with
by pbooth - Nginx Mailing List - English
During a busier part of the day, what are your minimum, median, 99%, and max requests per sec? > On Jul 30, 2017, at 9:31 AM, Vlad K. <nginx-ml@acheronmedia.hr> wrote: > > >> If you open the status page in a browser do the numbers report match >> what you see with netstat? > > Waiting does: > > # netstat -n | grep -E "tcp4|tcp6" | grep ESTABLISHED |
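One way to answer the min/median/99%/max question is to bucket an access log by second and rank the per-second counts. This is only a sketch: the canned timestamps below stand in for a real log, where you would instead feed in `awk '{print $4}' access.log` (assuming the default combined log format).

```shell
# Derive per-second request rates from nginx access-log timestamps, then
# report min / median / 99th percentile / max. Canned sample data below;
# with live data, pipe in: awk '{print $4}' access.log
log_sample='[30/Jul/2017:09:00:01
[30/Jul/2017:09:00:01
[30/Jul/2017:09:00:01
[30/Jul/2017:09:00:02
[30/Jul/2017:09:00:02
[30/Jul/2017:09:00:03'
printf '%s\n' "$log_sample" |
  sort | uniq -c |                  # requests per distinct second
  awk '{print $1}' | sort -n |      # sorted list of per-second rates
  awk '{ r[NR] = $1 }
       END { printf "min=%d median=%d p99=%d max=%d\n",
             r[1], r[int(NR/2)+1], r[int(NR*0.99)+1], r[NR] }'
# → min=1 median=2 p99=3 max=3
```

With only a handful of seconds the p99 and max coincide; on an hour of real traffic they usually diverge sharply, which is exactly what the question is probing.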
See below > On Jul 30, 2017, at 6:12 AM, Vlad K. <nginx-ml@acheronmedia.hr> wrote: > > On 2017-07-30 11:26, Peter Booth wrote: >> I just reread the thread and realize that you answered q2, and that >> makes the graph even more >> surprising. You say that it's on FreeBSD - does this mean that you >> don’t have /proc available to you? >> Is there a pr
I just reread the thread and realize that you answered q2, and that makes the graph even more surprising. You say that it's on FreeBSD - does this mean that you don’t have /proc available to you? Is there a procstat or other way to see the equivalent of /proc/<pid>/fd - a list of all open file descriptors for a specific pid? > On Jul 30, 2017, at 5:15 AM, Peter Booth <peter_booth@
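For reference, both systems can answer the question above; a rough, portable sketch (using this shell's own pid so it is self-contained, and assuming a stock FreeBSD with `procstat` or a Linux with `/proc` mounted):

```shell
# List/count open file descriptors for a pid on FreeBSD or Linux.
pid=$$
if command -v procstat >/dev/null 2>&1; then
  procstat -f "$pid"                 # FreeBSD: one row per open descriptor
else
  ls /proc/"$pid"/fd | wc -l         # Linux: count of open descriptors
fi
```

Run against the nginx worker pids, a steadily growing count here would corroborate a descriptor/connection leak.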
Vlad, You might not need to replicate it - you have it happening in production in front of you. Some questions: 1. When is the last time that your production nginx was restarted? 2. Do you have regular restarts? 3. Is there an obstacle to restarting at some point? 4. Is this a single instance or do you have multiple nginx hosts? 5. What 3rd party modules are you using? 6. Is the website in questio
Vlad, I'd suggest beginning by seeing whether or not this is real. If you create a cron job that invokes netstat -ant every hour, then summarize the connections and either view them manually or write them into InfluxDB and graph with Grafana, you will see whether or not the number of TCP connections really is growing and, if so, which connections are growing. That would seem like a useful first step.
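The hourly summarization step could be as small as one awk pipeline grouping by TCP state. A sketch, with a canned sample standing in for live `netstat -ant` output:

```shell
# Summarize TCP connections by state; replace the canned sample with the
# real thing: netstat -ant | tail -n +3 | <this awk pipeline>
netstat_sample='tcp 0 0 10.0.0.1:80 192.168.1.5:51000 ESTABLISHED
tcp 0 0 10.0.0.1:80 192.168.1.6:51001 ESTABLISHED
tcp 0 0 10.0.0.1:80 192.168.1.5:51002 TIME_WAIT'
printf '%s\n' "$netstat_sample" |
  awk '{ count[$6]++ } END { for (s in count) print count[s], s }' |
  sort -rn
# → 2 ESTABLISHED
#   1 TIME_WAIT
```

Grouping on the remote address (field $5) instead of the state would show *which* peers account for any growth.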
Ryan, Just to be pedantic, can you spell out exactly what you meant when you said "and deliver future responses as 304 to clients even without the If-Modified-Since header?" What requests were triggering the 304 response? Were you observing what a browser was seeing or were you using curl or wget to trigger the response? Peter Sent from my iPhone > On Jul 26, 2017, at 5:10 AM, Ry
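The distinction being probed here is that a 304 is only meaningful in reply to a *conditional* request. A sketch of the check, using canned request headers in place of a real capture (which you might take from `curl -v` output or a debug log):

```shell
# A 304 should only follow a request carrying a cache validator.
# Canned headers below; substitute a real captured request.
request_headers='GET / HTTP/1.1
Host: example.com
Accept: */*'
if printf '%s\n' "$request_headers" |
     grep -qiE '^(If-Modified-Since|If-None-Match):'; then
  echo "conditional request: a 304 response is expected"
else
  echo "unconditional request: a 304 response would be wrong"
fi
# → unconditional request: a 304 response would be wrong
```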
I can’t see an obvious issue, but I can say that there is no such thing as a simple web server setup where caching is involved. I have gray hairs that appeared after working with a high traffic retail website that had seven levels of caching (browser cache, CDN, hardware load balancer, nginx reverse proxy, servlets that write content, tangosol /oracle coherence, endeca caching) I’m hoping tha
Phillip, Right now this Rails website is almost too slow to tune, and so you will need to make some radical changes that you might later choose to undo. You should run the Rails app in production mode, which, by default, will cache everything. That should give you the breathing room needed to run other tools. Peter > On Jul 21, 2017, at 6:58 AM, Peter Booth <peter_booth@me.com> wrote
It looks as if the static content is being served by the Rails asset pipeline rather than directly by nginx and the impact is enormous. It took 25s for the base page - but it also took another 25s for the http://cryonics.org.au/assets/application.js resource and another 20s for http://cryonics.org.au/assets/bg.gif
stale-while-revalidate is awesome, but it might not be the optimal tool here. It came out of Yahoo!, the sixth largest website in the world, who used a small number of caching proxies. In their context most content is served hot from cache. A cloud deployment typically means a larger number of VMs that are each a fraction of a physical server. Great for fine grained control but a problem for
Perhaps it would help if, rather than focus on the specific solution that you want, you instead explained your specific problem and business context? What is driving your architecture? Is it about protecting a backend that doesn't scale or more about reducing latencies? How many different requests are there that might be cached? What are the backend calls doing? How do cached objects expire
You could do that, but it would be bad. Nginx's great performance is based on serving files from a local disk and the behavior of the Linux page cache. If you serve from a shared (NFS) filesystem then every request is slower. You shouldn't slow down the common case just to increase cache hit rate. Sent from my iPhone > On Jul 7, 2017, at 9:24 AM, Frank Dias <frank.dias@prodea.com> wrote: >
Depends on your definition of pretty and what you want to achieve. Are you looking for pretty for a human reader or for a browser? Google's PageSpeed module comes in both Apache and nginx flavors and applies a bunch of page optimization transformations to the page and embedded resources. I've seen it reduce download times for an untuned site from 6 seconds to 1.2. But the HTML that's returned has
What is your ultimate goal here? What do you want to prevent? Sent from my iPhone > On Jul 4, 2017, at 4:01 AM, guruprasads <nginx-forum@forum.nginx.org> wrote: > > Hi, > > I am trying to tune nginx server. > I want to restrict number of client connection per server and restrict > bandwidth. > I tried > worker_connections 2; > for max connections in ng
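Worth noting for readers of this thread: `worker_connections` caps a worker process's *total* connection capacity (including upstream connections), not connections per client. The per-client tools are `limit_conn` and `limit_rate`. A sketch, printed as a heredoc so it stays one shell block; the zone name and figures are illustrative, not recommendations:

```shell
# Per-client connection and bandwidth limits - the directives the poster
# probably wants instead of worker_connections 2.
cat <<'EOF'
http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    server {
        limit_conn perip 2;    # at most 2 concurrent connections per client IP
        limit_rate 100k;       # cap each response at ~100 KB/s
    }
}
EOF
```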
What happens if you simplify the match string to only contain characters? Something like >> sub_filter 'xxx' 'yyy'; Can it ever do a substitute? Sent from my iPad > On Jul 2, 2017, at 8:26 AM, Valentin V. Bartenev <vbart@nginx.com> wrote: > >> On Friday 30 June 2017 22:14:55 ptcell wrote: >> I've built with the sub filter enabled and I'm finding it hangs request
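One common reason sub_filter never matches, worth ruling out alongside the simplified-string test: the sub module only operates on uncompressed response bodies, so a gzip-encoded upstream response passes through untouched. A heredoc sketch of a test location (the upstream name `backend` is hypothetical):

```shell
# Minimal sub_filter test config; the Accept-Encoding override stops the
# upstream from gzipping, which would otherwise defeat sub_filter entirely.
cat <<'EOF'
location / {
    proxy_pass http://backend;
    proxy_set_header Accept-Encoding "";  # force an uncompressed upstream body
    sub_filter 'xxx' 'yyy';
    sub_filter_once off;                  # replace every occurrence, not just the first
}
EOF
```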
I had best caching experience when I started using the openresty nginx bundle. It's a build of nginx that contains a bunch of Lua modules that make it a lean application server. With that I could create cache keys that exactly matched my (complex) business requirements Sent from my iPhone > On Jun 27, 2017, at 5:43 PM, deivid__ <nginx-forum@forum.nginx.org> wrote: > > Hi. >
David, Are the backend resources actually dynamic / created on demand, or are they "real" files that exist on a slow file system? Peter Sent from my iPhone > On Jun 27, 2017, at 12:56 PM, deivid__ <nginx-forum@forum.nginx.org> wrote: > > I mistakenly typed redirect to /nfs because it redirects to /converted which > has an alias to /nfs. > > The files are del
I've found that the easiest, most accurate way of diagnosing cache-related issues is to use the incredible redbot.org service. If you can point redbot at your nginx, and also at your back end, it will identify anything that prevents the resource being cacheable. If your website isn't visible from the internet you can either install your own copy of redbot or use an ssh tunnel to make it visible to
From experience this stuff is a lot harder and more nuanced than it might seem. Google's agents are well behaved and obey robots.txt. The last high traffic website I worked on had over 250 different web spiders/bots scraping it. That's 250 different user agents that didn't map to a "real" browser. Identifying them required multiple different techniques, looking at request patterns. It's
This might not be a bug at all. Remember that when nginx logs request time it's doing so with millisecond precision. This is very, very coarse-grained when you consider what modern hardware is capable of. The TechEmpower benchmarks show that an (openresty) nginx on a quad-socket host can serve more than 800,000 dynamic Lua requests per second. We should expect that static resources served fr
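The arithmetic behind that claim makes the resolution problem concrete:

```shell
# At 800,000 req/s the mean service time is far below the 1 ms resolution
# of $request_time, so most logged times round down to 0.
awk 'BEGIN {
    rps = 800000
    printf "mean service time: %.2f us\n", 1e6 / rps
    printf "requests completing per 1 ms log bucket: %d\n", rps / 1000
}'
# → mean service time: 1.25 us
#   requests completing per 1 ms log bucket: 800
```

In other words, a 0 ms entry isn't evidence of a bug; the logged precision simply cannot distinguish anything faster than a millisecond.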
FWIW, I have never understood the desire to have nginx configuration spread across multiple files. It just seems to invite error and make it harder to see what is going on. Perhaps if I worked for a hosting company I’d feel differently, but on the sites that I have worked on, even with quite complicated, subtle caching logic, the entire nginx.conf has been under 600 lines - not that different from
Wow - I really like the sound of naxsi. In the past I've used F5's ASM, the WAF built on their BIG-IP platform. It was powerful though prone to false positives. I don't believe there are any real shortcuts that allow you to build an effective WAF without understanding the details of your own website. These simply aren't build, deploy and forget devices. It sounds as if the creator of naxsi understan
Ryan, What is the topology of the system that you are describing? You mention kong/nginx, an upstream host, a load balancer, clients ... Are the load balancers hardware or software devices? Is kong nginx simply forwarding to a load-balancer VIP that fronts multiple upstream systems? Are there any firewalls or intrusion detection systems also in the mix? Are your clients remo
There's "can you?" and there's "should you?" My attitude is that life is short, so I want to avoid building in opportunities for things to break. Imagine that you deploy your N web apps. There can be real value in being able to access a web app directly when debugging, bypassing the web server layer (for example, if your web server is also a caching reverse proxy). That means tha
Seth, It's actually very easy to reproduce this issue - from a browser request http://musikandfilm.com/?a=b and you will see it. There are a couple of low level tools that expose some possible issues. If you email me directly I can talk about this in more detail. Try peter underscore booth at me dot com. Sent from my iPhone > On May 8, 2017, at 7:17 PM, seth2958 <nginx-forum@forum.ngin
So I have a few different thoughts: 1. Yes, nginx does support SSL pass-through. You can configure nginx to stream your request to your SSL backend. I do this when I don't have control of the backend and it has to be SSL. I don't think that's your situation. 2. I suspect that there's something wrong with your SSL configuration and/or your nginx VMs are underpowered. Can you test the throughput
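Point 1 is done with the stream module: nginx relays raw TCP, so the TLS handshake terminates at the backend rather than at nginx. A heredoc sketch (the backend address is hypothetical):

```shell
# SSL pass-through sketch: note there is no ssl_certificate at the proxy,
# because the encrypted bytes flow through untouched.
cat <<'EOF'
stream {
    upstream ssl_backend {
        server 10.0.0.10:443;
    }
    server {
        listen 443;
        proxy_pass ssl_backend;   # raw TCP relay; TLS terminates upstream
    }
}
EOF
```

The trade-off: with pass-through, nginx cannot inspect, cache, or rewrite the traffic, which is why terminating TLS at nginx is the more common arrangement when you control the backend.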
Just to be pedantic. It’s counterintuitive but, in general, tmpfs is not faster than local storage, for the use case of caching static content for web servers. Sounds weird? Here’s why: tmpfs is a file system view of all of the system’s virtual memory - that is both physical memory and swap space. If you use local storage for your cache store then every time a file is requested for the fi
Yes you can. For some subtle custom cache logic I needed to use openresty, which is an nginx bundle that adds a lot of customization points. Sent from my iPhone > On Feb 8, 2017, at 5:47 PM, Chad Hansen via nginx <nginx@nginx.org> wrote: > > I use nginx as a reverse proxy, and upstream clients have a need for my service to cache differently than downstream servers. > > Is
I've always had to configure and build debug versions myself - and usually I want them to coexist in parallel with an existing production nginx install. But this link suggests otherwise: http://nginx.org/en/docs/debugging_log.html You'll be overwhelmed by the volume of output. It gave me a real appreciation for the subtlety, power, and necessary complexity of nginx and the technical skills of th