by anomalizer - Nginx Mailing List - English

On Mar 31, rok wrote:
>>sudo ./configure --sbin-path=/usr/local/sbin
>
>So if I reconfigure, should I back up the config first? And what configure
>options do I use now?

First off, there is no need to run ./configure as sudo. It is a part of the
install process (usually ./configure, make, sudo make install). Secondly,
./configure has nothing to do with config files; it is just a step in the GNU…
On Mar 29, Alexander Economou wrote:
>I am not really sure if this is the right place to ask something like
>that but seeing all those available mods doing almost everything, here
>it goes.
>
>Is there any way (php-cgi mod?) that I can use to measure the time it
>takes for a (sql) query to complete?

nginx does not execute PHP scripts. In fact, it has no idea that the proxy/fas…
Is there some module that tries to log messages (access log) over the
network by plugging into the core event loop? I have looked at Boost.ASIO,
GIO from glib, etc., but they all start their own event loop.

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx
On Feb 05, Vicente Aguilar wrote:
>Hi
>
>>> I believe a better approach would be to build a separate module able to
>>> calculate the required hash and then use proxy_set_header with the
>>> appropriate variable.
>>
>> For whatever it's worth, I agree that we should make things as easy to
>> combine as possible, rather than making monolithic modules that cove…
On Nov 21, Maxim Dounin wrote:
>Hello!
>
>On Sat, Nov 20, 2010 at 09:48:15PM +0530, Arvind Jayaprakash wrote:
>
>> Hello,
>> From the way I understand, if there are 'n' servers listed as part of
>> an upstream block & upstream returns failure (as defined by
>> proxy_next_upstream), then nginx retries the same request on another
>> upstream server and…
Hello,
From the way I understand it, if there are 'n' servers listed as part of an
upstream block and an upstream returns a failure (as defined by
proxy_next_upstream), then nginx retries the same request on another
upstream server, and this repeats till every upstream is tried once
(excluding those that are considered to be down). If an upstream block is
defined with exactly one active server, I see t…
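The retry behaviour described above can be sketched in configuration; this is a minimal illustration, with hypothetical server addresses and an illustrative choice of retry conditions:

```nginx
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;
    location / {
        # On a connection error, timeout, or one of these 5xx responses,
        # nginx passes the same request to the next server in the block.
        proxy_next_upstream error timeout http_500 http_502 http_503;
        proxy_pass http://backend;
    }
}
```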
On Sep 29, Piotr Sikora wrote:
>> Igor said that worker_rlimit_nofile is not enough:
>> http://nginx.org/pipermail/nginx/2008-April/004596.html
>
>It works for me just fine.

It works as long as your master process is running as root. Given that nginx
is usually listening on port 80, the master process is usually running as
root. My practical observation is that the limits seem t…
On Apr 12, Maxim Dounin wrote:
>Hello!
>
>On Mon, Apr 12, 2010 at 10:18:33PM +0530, Arvind Jayaprakash wrote:
>
>> This module provides a handler called upstream_status that can be used
>> as follows:
>>
>> location /foo {
>>     upstream_status;
>> }
>>
>> It reports all the upstream blocks configured for this server. For…
On Apr 12, Leonardo Crespo wrote:
>I just found out that I can do that on linode. Thanks for bringing this
>idea up!
>
>Here's a solution:
>Using NFS, mount on the Dynamic server the Static server's directory
>for static files (/images /user-uploads etc). Have nginx on Dynamic
>with proxy_pass to Static for all static content. All uploads go to
>the mounted drive on Dynamic…
On Apr 12, Leonardo Crespo wrote:
>> (2) Use nginx with caching proxy on the static machine
>> In this setup, a cache miss will result in a request for the static
>> resource from the dynamic server (which I assume is also capable of
>> serving static content). This should work as long as you expect to have
>> a high cache hit ratio
>
>So user uploads a fi…
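Option (2), the caching proxy on the static machine, might look roughly like the sketch below; the cache path, zone name, sizes, and validity times are all hypothetical, and the host names are the ones used elsewhere in the thread:

```nginx
# On static.domain.com: serve from cache; on a miss, fetch the
# resource from the dynamic server and cache the response.
proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m max_size=1g;

server {
    server_name static.domain.com;
    location / {
        proxy_cache static_cache;
        proxy_cache_valid 200 10m;
        proxy_pass http://www.domain.com;
    }
}
```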
This module provides a handler called upstream_status that can be used as
follows:

location /foo {
    upstream_status;
}

It reports all the upstream blocks configured for this server. For upstreams
managed using round robin (the default upstream), it lists all the backends
configured in a block and indicates the current status (up/down).

http://github.com/anomalizer/ngx_upstream_status

Improv…
On Apr 12, Leonardo Crespo wrote:
>To make it simpler, here's a scenario:
>
>Server A is static.domain.com, ip 10.10.10.10, serves ALL static
>content (jpegs, gifs, pngs, mp3s etc... including user uploaded
>content like photo albums).
>Server B is www.domain.com, ip 10.10.10.20, serves DYNAMIC content, php files.
>
>If a user submits a jpg to his photo album, it'll get uploaded…
On Mar 22, David Taveras wrote:
>I am running a reverse proxy.. works great. I was differentiating
>objects that were served locally because they have a 0.00 processing time,
>versus the objects that are fetched from the upstream... however I now
>see that this isn't at all true!
>
>Sometimes in my SSL server I fetch files from cache that actually take
>1.10 .. (maybe due to the ssl…
On Oct 26, Igor Sysoev wrote:
>Changes with nginx 0.8.21                                     26 Oct 2009
>    *) Bugfix: socket leak; the bug had appeared in 0.8.11.

I know this is sort of old, but can someone give me more details on this
bug? I shall upgrade my installation, but I want to see if a certain issue
that I am facing (with an older version) is due to this bug.
On Mar 02, Piotr Sikora wrote:
>> The second style is needed in cases where we plan to do some maintenance
>> activity on an upstream server and want to proactively not send traffic
>> to it. A typical example is when you want to push new software, check with
>> a couple of requests, see if your app is behaving well and, if all
>> looks fine, direct traffic to it.
>…
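With stock nginx, the closest built-in to "proactively not send traffic" is the down flag on an upstream server, which takes a configuration reload to toggle; a minimal sketch with hypothetical addresses:

```nginx
upstream app {
    server 10.0.0.1:8080;
    # Marked down for maintenance: nginx will not route requests here
    # until the flag is removed and the configuration is reloaded.
    server 10.0.0.2:8080 down;
}
```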
On Mar 02, Piotr Sikora wrote:
>Yeah, that's probably what Grzegorz meant. You would just need to call
>ngx_supervisord_execute(uscf, NGX_SUPERVISORD_CMD_STOP, backend_number,
>NULL) and then all ngx_supervisord-aware load balancers (upstream_fair,
>round_robin & ip_hash) would automagically stop using the failed backend
>until you would execute NGX_SUPERVISORD_CMD_START.
>…
On Feb 26, Jack Lindamood wrote:
>I've written a plugin that can health check nginx backends, which
>everyone is free to use. This is similar to the healthchecking
>features that varnish and haproxy support. Here's a sample config[1]
>just to give you an idea, that uses the upstream_hash plugin. You can
>get the code here [2] and an example of how to patch upstream_hash here…
On Feb 04, Max wrote:
>I used to use worker_connections, but when I use it, I got a lot of "too
>many open files" errors in the error log.
>
>Now I changed it to worker_rlimit_nofile 10240 and the problem has been
>solved (I googled it and found the solution).
>
>I just want to ask, what's the difference between worker_connections and
>worker_rlimit_nofile? Why worker_connections wil…
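The two directives live in different contexts and control different things; a sketch of how they are typically combined, with illustrative numbers:

```nginx
# Upper bound on open file descriptors per worker process. Without
# this (or a matching ulimit), a busy worker hits "too many open files".
worker_rlimit_nofile 10240;

events {
    # Maximum simultaneous connections per worker; each connection
    # consumes at least one file descriptor (more when proxying).
    worker_connections 4096;
}
```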
AFAIK, nginx does not throttle connections to backends. It will pass on
whatever it receives to the backends at the earliest.
On Oct 31, piavlo wrote:
>anomalizer Wrote:
>-------------------------------------------------------
>
>Now that I think about cookie based limiting again - it's not clear to
>me how new client connections will be handled by the
>connection/request limiting modules before the application assigns
>them a valid cookie?

Excellent point. You can never rate limit the connects. Y…
On Oct 30, piavlo wrote:
>anomalizer Wrote:
>-------------------------------------------------------
>
>> Are you trying to limit genuine or malicious
>> users? A malicious user can
>> always circumvent the limits by creating his own
>> cookies and sending
>> them.
>
>Genuine users of a specific application - this is why I thought that the
>session should be m…
On Oct 29, piavlo wrote:
>Hi,
>I'd like to limit connections and/or requests based on cookies.
>
>Is it possible to do it with something like this:
>
>limit_req_zone $cookie_somename zone=one:10m rate=1r/s;
>
>?
>
>The only thing I've found is
>http://hg.mperillo.ath.cx/nginx/mod_parsed_vars/file/70df16b39e79/README
>but this module has not been updated for 2 year…
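For completeness, the directive from the post would be paired with limit_req in a location block; a sketch (burst value is illustrative). Note that requests where the key variable is empty, i.e. clients without the cookie, are not accounted, which is the limitation discussed later in the thread:

```nginx
http {
    # Key the request-rate zone on a cookie value.
    limit_req_zone $cookie_somename zone=one:10m rate=1r/s;

    server {
        location / {
            limit_req zone=one burst=5;
        }
    }
}
```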
On Oct 09, Akins, Brian wrote:
>FWIW, in our testing (using something slightly more sophisticated than ab),
>we can sustain about 70k requests per second in nginx. (Obviously, perfect
>network, small objects, fast clients, etc.) This is on normal "server"
>hardware.

Could you tell us what you used instead of ab? I've always found the
benchmark tool to be a limitation wh…
> RewriteRule /catalog/(.*) /catalog/$1
> RewriteRule /lib/(.*) /lib/$1
> RewriteRule /skins/(.*) /skins/$1
> RewriteRule /payments/(.*) /payments/$1
> RewriteRule /images/(.*) /images/$1
> RewriteRule /index\.php(.*) /index.php$1
> RewriteRule /admin\.php(.*) /admin.php$1
> RewriteRule ^(.*)$ /index.php?sef_rewrite=1

Do you realize that everything exce…
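Since every rule except the last rewrites a URI to itself, the nginx translation, under that reading, reduces to a single fallback; a sketch:

```nginx
server {
    # Serve existing files and directories as-is; everything else goes
    # to the front controller, mirroring only the final RewriteRule.
    location / {
        try_files $uri $uri/ /index.php?sef_rewrite=1;
    }
}
```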
On Sep 23, Avleen Vig wrote:
> On Sep 21, 2009, at 1:02, François Battail <fb@francois.battail.name>
> wrote:
>
>> On Sunday, September 20, 2009 at 22:47 -0700, Khalid Shaikh wrote:
>>
>>> worker_processes 32;
>>
>> That's way too much; try to keep the number of workers equal to the
>> number of cores (eg: 4).
>
> Surely you shoul…
On Sep 24, Barry Abrahamson wrote:
>
> On Sep 17, 2009, at 5:49 AM, John Moore wrote:
>
>> It certainly does, thanks! Could I trouble you to explain a little more
>> about your use of Wackamole and Spread? I've not used either of them
>> before.
>
> There is a How-to here:
>
> http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-hapro…
On Sep 04, Chang Song wrote:
>
> imcaptor.
> Man, you are a lifesaver.
>
> That changes everything. I built tomcat with apr native, and nginx comes
> back alive and well!!
>
> Nginx beats Apache by at least 10-20%, and the response time is better by
> 10-20%.
>
> I will have to check other workloads as well, but nginx holds up pretty
> well.
>
> This is great. Thank…
I am trying to do some tricks with upstream+proxy and ran into what seems
like a limitation of the proxy_set_header feature. When an upstream's
response triggers resending the request to the next upstream, I was hoping
$upstream_response_time would be available with data on what happened in
the previous upstreams. I'm trying to pass it using the following
directive:

proxy_set_header X-retry1 $up…
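The attempt described above would look roughly like this; the X-retry1 header name and the variable come from the post, while the upstream name and retry conditions are hypothetical. Whether $upstream_response_time is populated at header-send time for a retried request is exactly the open question:

```nginx
location / {
    proxy_next_upstream error timeout;
    # Try to forward the timing of earlier upstream attempts to the
    # next backend; the post reports this does not behave as hoped.
    proxy_set_header X-retry1 $upstream_response_time;
    proxy_pass http://backend;
}
```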
On Aug 23, Igor Sysoev wrote:
>On Fri, Aug 21, 2009 at 08:13:03AM +0530, Arvind Jayaprakash wrote:
>
>> The English wiki says the embedded perl module is highly experimental
>> followed by a lot of warnings. Is this still true?
>
>Currently it's not highly experimental: we and some others use it in
>production. However, you should not try to work with other servers…
The English wiki says the embedded perl module is highly experimental,
followed by a lot of warnings. Is this still true?