Ok thanks, my problem isn't with the matching order (bad explanation on my part) but more with the matching procedure, e.g. location ~ to_cache { proxy_cache ....; proxy_pass ....; } location ~ not_to_cache { proxy_cache off; proxy_pass ....; } / If the URL /abc-not_to_cache-def/ is passed to nginx, does the server execute both the matches (to_cache, then not_to_cache with not_by splitice - Nginx Mailing List - English
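For reference, nginx selects exactly one location per request: regex locations are tested in the order they appear in the configuration, and the first match handles the request, so for a URL like /abc-not_to_cache-def/ only one of the two blocks runs. A minimal sketch, with placeholder location regexes, cache zone, and backend address:

```nginx
# Regex locations are evaluated in order of appearance; the FIRST
# match wins and the later block is never consulted.
location ~ not_to_cache {          # put the more specific rule first
    proxy_cache off;
    proxy_pass http://backend;     # placeholder upstream
}
location ~ to_cache {
    proxy_cache my_zone;           # placeholder cache zone
    proxy_pass http://backend;
}
```

Because matching stops at the first regex hit, ordering the "never cache" rule before the "cache" rule is what keeps /abc-not_to_cache-def/ out of the cache.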
Ok, I've been working on a project for a long time. It involves automated config file building for the nginx web server :) Now I'm adding caching. The user of the software can, using the interface, select regexes to apply caching logic to. I've divided the cache types into 3 types: 1. CACHE: Cache if possible 2. FORCE-CACHE: Cache always, ignore Cache-Control headers 3. NEVER CACHE: all other trafficby splitice - Nginx Mailing List - English
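A minimal sketch of how those three cache types might map onto nginx directives (the zone name, URI patterns, backend address and validity period are all placeholders):

```nginx
# 1. CACHE: honour the backend's Cache-Control headers (default behaviour)
location ~ /cacheable/ {
    proxy_cache my_zone;
    proxy_pass http://backend;
}

# 2. FORCE-CACHE: cache regardless of what the backend says
location ~ /force-cache/ {
    proxy_cache my_zone;
    proxy_ignore_headers Cache-Control Expires;  # override backend hints
    proxy_cache_valid 200 10m;                   # e.g. cache 200s for 10 minutes
    proxy_pass http://backend;
}

# 3. NEVER CACHE: everything else
location / {
    proxy_cache off;
    proxy_pass http://backend;
}
```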
try_files refers to local files, so that won't work. You need a failover in the upstream. On Tue, Oct 11, 2011 at 8:39 PM, keith <keith@scott-land.net> wrote: > We have nginx setup as a reverse proxy and on one of the backend servers we > have a Sharepoint website that we want to failover to a Wordpress website > hosted on another server if there's a problem with Sharepoint website.by splitice - Nginx Mailing List - English
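Upstream failover of this kind can be sketched with a backup server in the upstream block; the hostnames, ports and thresholds below are placeholders:

```nginx
upstream sharepoint_with_failover {
    # primary: marked down after 2 failures within 30s
    server sharepoint.internal:80 max_fails=2 fail_timeout=30s;
    # used only while the primary is considered down
    server wordpress.internal:80 backup;
}

server {
    location / {
        proxy_pass http://sharepoint_with_failover;
        # also retry the next server within a single request on errors
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```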
This is an interesting feature; it would be awesome if this got included in the core. Can it be made to work based off DELETE (method) requests, I wonder? On Fri, Oct 7, 2011 at 1:11 AM, ahu <nginx-forum@nginx.us> wrote: > I need some help on "FASTCGI(PHP) CACHE PURGE". > > I successfully configured my nginx to support purge fastcgi cache with > Ctrl+F5,while I found that Iby splitice - Nginx Mailing List - English
Sounds brilliant, but where is the patch? On Tue, Sep 27, 2011 at 10:24 PM, Maxim Dounin <mdounin@mdounin.ru> wrote: > Hello! > > The following patch introduces less intrusive upstream server recheck after > server was considered dead. > > Specifically, only one request is allowed once fail_timeout passes, and > upstream server is only marked "live" once thisby splitice - Nginx Development
Sorry, I should have rephrased my question: does it include it by default (or do we still need a configure statement)? On Tue, Sep 20, 2011 at 10:37 PM, Sirsiwal, Umesh <usirsiwal@verivue.com>wrote: > From the announcement: > >> *) Feature: the ngx_http_upstream_keepalive module. > > -Umesh > > ________________________________________ > From: nginx-devel-bounces@by splitice - Nginx Development
Does this release include the upstream keepalive module or is it just the patch? On Tue, Sep 20, 2011 at 10:29 PM, Sirsiwal, Umesh <usirsiwal@verivue.com>wrote: > Thanks to both of you. Great work. > > If I understand correctly, the only way to use upstream_keepalive module is > with upstream module. This implies I cannot use upstream_keepalive where the > upstream is identiby splitice - Nginx Development
Yay, 1.1.4 will be an awesome release :) On Fri, Sep 16, 2011 at 11:14 PM, Maxim Dounin <mdounin@mdounin.ru> wrote: > Hello! > > On Fri, Sep 16, 2011 at 01:25:47PM +0100, António P. P. Almeida wrote: > > > I've tried building 1.1.3 using a bunch of patches provided by Maxim. > > > > They're here: https://github.com/perusio/nginx-mdounin-patches > > >by splitice - Nginx Development
Wow, this plus Last-Modified and ETags would really cut into the reason people use Varnish in their nginx + Varnish setup. On Fri, Sep 16, 2011 at 3:51 AM, MagicBear <magicbearmo@gmail.com> wrote: > Hello, > I have wrote a module to make nginx support 304 to decrease bandwidth > usage. > note: I have a newbie for nginx module development, so the above module may > have some proby splitice - Nginx Mailing List - English
Wow, makes me want to rebuild all my server nodes to include this patch. On Fri, Sep 9, 2011 at 8:32 PM, Maxim Dounin <mdounin@mdounin.ru> wrote: > # HG changeset patch > # User Maxim Dounin <mdounin@mdounin.ru> > # Date 1315564269 -14400 > # Node ID b667ed67c0b9046b94291fa6f52e850006011718 > # Parent 014764a85840606c90317e9f44f2b9fa139cbc8b > Buffers reuse in chunby splitice - Nginx Development
:) Hoping this gets included in 1.1.3 On Tue, Sep 6, 2011 at 3:54 AM, Maxim Dounin <mdounin@mdounin.ru> wrote: > Hello! > > On Sun, Sep 04, 2011 at 03:33:47PM +0400, Maxim Dounin wrote: > > > Hello! > > > > Here is the keepalive patch queue, posting it here for further > > review and testing. Note this series is for nginx 1.1.1, first 2 > > patcheby splitice - Nginx Development
a) Not nginx related. b) Not even php-fpm related; it's a slow MySQL query. 2011/9/5 LiuXin <sy.meteor@msn.com> > Hi all, > After upgrade from php 5.3.6 to php 5.3.8, now php-fpm is very very slow. > Even I tried to roll back to php 5.3.6, it still doen't work. > And there are lots of errors in php-fpm.log, > > [04-Sep-2011 12:28:27] WARNING: child 18793, script > 'by splitice - Nginx Mailing List - English
Edit the PHP script to do something like this before the rest of the script: $_GET['g2_path'] = urldecode($_GET['g2_path']), perhaps? On Sat, Sep 3, 2011 at 11:21 AM, signe <nginx-forum@nginx.us> wrote: > Ubuntu Natty > nginx/0.8.54 > PHP 5.3.5 / FPM / FastCGI > > I'm just beginning to work with nginx for the first time. Converting my > home server (very few hits) as an experimby splitice - Nginx Mailing List - English
I haven't had any crashes on 3x10mbit servers (90% utilisation) or 2x100mbit servers (20% utilisation). Seems very stable. On Thu, Sep 1, 2011 at 6:04 AM, magicbear <nginx-forum@nginx.us> wrote: > OK, It works for 3 days with 40 million request haven't coredump. > Thanks. > > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! >by splitice - Nginx Mailing List - English
If you were to create a web service (JSON?) for ClamAV checking, then you could use asynchronous Lua requests to do the virus checking. On Mon, Aug 29, 2011 at 4:45 PM, Calin Don <calin.don@gmail.com> wrote: > Take note that while you are processing the content using embedded Perl or > lua, the nginx worker which is processing is blocked, thus not serving > anything. It migth be aby splitice - Nginx Mailing List - English
Oh, and I haven't been able to reproduce the crash; I tried for a while but gave up. If it happens again I'll build with debugging and restart; however, so far it's been 36 hours without issues (under a significant amount of traffic) On Mon, Aug 8, 2011 at 7:35 PM, SplitIce <mat999@gmail.com> wrote: > 50ms per HTTP request (taken from firebug and chrome resource panel) as the > time it takby splitice - Nginx Mailing List - English
50ms per HTTP request (taken from Firebug and the Chrome resource panel) as the time it takes the HTML to load from request to arrival. 200ms is the time saved by when the HTTP response starts transferring to me (allowing other resources to begin downloading before the HTML completes); previously the HTML only started transferring after the full request was downloaded to the proxy server (due to buffering). HTTPby splitice - Nginx Mailing List - English
Been testing this on my servers for 2 days now, handling approximately 100mbit of constant traffic (3x20mbit, 1x40mbit). Haven't noticed any large bugs; there was an initial crash on one of the servers, however I haven't been able to replicate it. The servers are a mixture of OpenVZ, Xen and one VMware virtualised container, running Debian Lenny or Squeeze. Speed increases from this module are decent, appby splitice - Nginx Mailing List - English
I've been testing this on my localhost and one of my live servers (HTTP backend) for a good week now; I haven't had any issues that I have noticed as of yet. Servers are Debian Lenny and Debian Squeeze (oldstable, stable). Hoping it will make it into the development (1.1.x) branch soon :) On Wed, Aug 3, 2011 at 1:57 PM, liseen <liseen.wan@gmail.com> wrote: > Hi > > Could nginx keepaby splitice - Nginx Mailing List - English
Look up proxy_cache_path; the cache is then stored in a folder structure. As it's file-based I presume rsync would work, although I am not sure of the exact implementation in nginx; it may not be picked up by the cache loader unless you reload. On Tue, Aug 2, 2011 at 12:43 PM, lennydizzy <nginx-forum@nginx.us> wrote: > Hi, > > I am a newbie Nginx user, so bear with me here ;-) > >by splitice - Nginx Mailing List - English
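The proxy_cache_path directive declares that on-disk cache; a sketch with example path, sizes and zone name:

```nginx
# On-disk cache under /var/cache/nginx, hashed into a two-level
# directory tree. The zone name, path and sizes are examples only.
proxy_cache_path /var/cache/nginx levels=1:2
                 keys_zone=my_zone:10m     # shared-memory key zone
                 max_size=1g               # disk cap, evicted LRU
                 inactive=60m;             # drop entries unused for 60m
```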
sFlow would be great if it were open source and had an easily customisable server (Perl/Python/Bash or PHP) On Tue, Aug 2, 2011 at 5:08 AM, Harold Sinclair <haroldsinclair@gmail.com>wrote: > I cobbled something like this together with open source tools and have been > using it on hundreds of servers.. pls contact me offline if you'd like a > copy :) > > -Harold > > >by splitice - Nginx Mailing List - English
Great work, Igor and all contributors. Looking forward to using all the new features in the 1.1.x branch. :) On Tue, Aug 2, 2011 at 1:13 AM, Igor Sysoev <igor@sysoev.ru> wrote: > Changes with nginx 1.1.0 01 Aug > 2011 > > *) Feature: cache loader run time decrease. > > *) Feature: "loader_files", "loader_sleep"by splitice - Nginx Mailing List - English
I believe the only way to do this is to custom-write a script that reads the log files and then issues a reload signal to nginx (something like logrotate). If anyone does know of a script I'd be interested to learn too, as I do this on my servers using a hacked-together PHP script and logrotate (on crontab every minute) On Mon, Aug 1, 2011 at 11:53 PM, John Macleod <jcdmacleod@gmail.com> wrby splitice - Nginx Mailing List - English
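As an aside, a full reload isn't needed for rotation: the nginx master process reopens its log files on USR1. A minimal sketch of the rotate-then-signal step (paths are hypothetical; adjust for your installation):

```python
import os
import signal

def rotate_nginx_log(log_path, pid_file, suffix=".1"):
    """Move the access log aside, then signal nginx to reopen its logs.

    USR1 tells the nginx master process to reopen log files at their
    configured paths, so no configuration reload is required.
    """
    rotated = log_path + suffix
    os.rename(log_path, rotated)       # nginx keeps writing to the old inode
    with open(pid_file) as f:
        pid = int(f.read().strip())
    os.kill(pid, signal.SIGUSR1)       # nginx reopens logs at log_path
    return rotated
```

In practice this would run from cron or a logrotate postrotate hook, with the pid file path taken from the nginx configuration.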
The proxy_cache_key needs to include the Accept-Encoding header, as PHP does the gzip compression for phpBB3. On Mon, Aug 1, 2011 at 12:42 PM, Nicholas Sherlock <n.sherlock@gmail.com>wrote: > Hi everyone, > > I'm using gzip and proxy_cache together, proxying to an Apache backend. > Some of my clients are complaining that they are getting gzipped content > which their browser is displaying without un-gziby splitice - Nginx Mailing List - English
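A sketch of such a key (the exact variable mix is up to the site; the point is the trailing $http_accept_encoding):

```nginx
# Separate cache entries for gzipped and plain responses, so a
# client that can't decompress never receives a gzipped body.
proxy_cache_key "$scheme$host$request_uri$http_accept_encoding";
```

One caveat: browsers send many slightly different Accept-Encoding values, so keying on the raw header fragments the cache; normalising the header first (e.g. via a map block to "gzip" or "") keeps the number of variants down.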
I personally use rsync for this purpose. On Sat, Jul 30, 2011 at 5:15 AM, John Macleod <JMacleod@alentus.com> wrote: > This is more of a best practice question. We need to reliably sync our > nginx config between multiple front end servers. Just curious what other do > in this regard. > > Rsync does what it says on the tin but would prefer something a little more > realby splitice - Nginx Mailing List - English
Ah thanks, in which case it probably is wise to use the dynamic spawning method with a guaranteed amount of 32 processes (a max of 2-3x that, or something along those lines), I guess. By the way, I've tested this on HTTP backends a little without any obvious errors; I'm planning on loading it on a live server later this week for some real testing. Will this be packaged as a module in the future or do yby splitice - Nginx Development
Correct me if I'm wrong, but wouldn't the correct value to use for keepalive be 31 (/workers) in this case? On Sat, Jul 30, 2011 at 1:36 AM, Maxim Dounin <mdounin@mdounin.ru> wrote: > Hello! > > On Fri, Jul 29, 2011 at 03:43:56PM +0200, Thomas Love wrote: > > > > > > > > On 26 July 2011 13:57, Maxim Dounin <mdounin@mdounin.ru> wrote: > > > > &by splitice - Nginx Development
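For context, the keepalive directive being discussed caps idle connections cached per worker, so the effective total scales with worker_processes. A sketch in the syntax the module ended up with (addresses and numbers are placeholders; the thread itself concerns a pre-release patch, whose syntax may have differed):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    # idle connections cached PER WORKER; with 4 workers this allows
    # up to 4 x 8 = 32 idle connections to the backend in total
    keepalive 8;
}

server {
    location / {
        proxy_pass http://backend;
        # keepalive to the upstream requires HTTP/1.1 and an empty
        # Connection header on proxied requests
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```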