You should never let users get into the queue anyway; it is there for unexpected peaks. # Total number of users you can serve = worker_processes * worker_connections http://stackoverflow.com/questions/7325211/tuning-nginx-worker-process-to-obtain-100k-hits-per-min On Wed, Jun 25, 2014 at 10:24 AM, ashishadhav <nginx-forum@nginx.us> wrote: > Hi, > I want to find out how many requests a... by roy.enjoy - Nginx Mailing List - English
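For illustration, a minimal sketch of the two directives behind that formula, assuming a 4-core machine; the numbers are placeholders, not recommendations:

    # nginx.conf (illustrative values only)
    worker_processes  4;               # commonly set to the number of CPU cores

    events {
        worker_connections  1024;      # per-worker limit; roughly 4 * 1024 concurrent connections in total
    }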
A note for the record that is good to know. Thanks Maxim, even though we chose not to use Unicode URLs to keep things less complicated. On Fri, May 9, 2014 at 4:13 AM, Maxim Dounin <mdounin@mdounin.ru> wrote: > Hello! > > On Sun, May 04, 2014 at 06:42:39PM +0300, kirpit wrote: > > > Hi, > > > > I'm doing some sort of downstream cache that I save every entir... by roy.enjoy - Nginx Mailing List - English
Hi, I'm doing some sort of downstream cache where I save every entire page into memcached from the application, with URLs such as: www.example.com/file/path/?query=1 Then I'm fetching them from nginx, if available, with the config: location / { # try to fetch from memcached set $memcached_key "$host$request_uri"; memcached_pass localhost:11... by roy.enjoy - Nginx Mailing List - English
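A minimal sketch of such a memcached-backed location, assuming memcached on its default port and an application listening on 127.0.0.1:8080 as the fallback (both addresses are assumptions for illustration):

    location / {
        set $memcached_key "$host$request_uri";   # must match the key the application stores
        memcached_pass     localhost:11211;       # assumed default memcached port
        default_type       text/html;
        error_page 404 502 504 = @app_fallback;   # cache miss or memcached down -> application
    }

    location @app_fallback {
        proxy_pass http://127.0.0.1:8080;         # assumed application address
    }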
This was quite tricky! # apt-show-versions libssl-dev libssl-dev/wheezy upgradeable from 0.9.8o-4squeeze14 to 1.0.1e-2 Thanks a lot for the help, mate. On Tue, Jul 30, 2013 at 6:15 PM, Valentin V. Bartenev <vbart@nginx.com> wrote: > On Tuesday 30 July 2013 17:51:21 kirpit wrote: > > Hi, > > > > I simply read every article on the net, followed the docs but simply &g... by roy.enjoy - Nginx Mailing List - English
Hi, I have read every article on the net and followed the docs, but I simply can't get SPDY support working. It's running on Debian Squeeze (which I have to keep), so I upgraded OpenSSL to version 1.0.1e from the Wheezy repo. I downloaded the latest stable nginx 1.4.2 and compiled it with the configuration options: ../configure \ --with-ipv6 \ --with-http_ssl_module \ --with-http_spdy_module \ --with-http_gzi... by roy.enjoy - Nginx Mailing List - English
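For reference, a minimal sketch of a server block that enables SPDY once nginx has been built with --with-http_spdy_module and linked against OpenSSL 1.0.1+; the domain and certificate paths are placeholders:

    server {
        listen 443 ssl spdy;                              # SPDY on the TLS listener (nginx 1.4.x syntax)
        server_name example.com;

        ssl_certificate     /etc/ssl/example.com.crt;     # placeholder path
        ssl_certificate_key /etc/ssl/example.com.key;     # placeholder path
    }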
# www fix
server {
    # www.example.com -> example.com
    #server_name www.example.com;
    #rewrite ^ $scheme://example.com$request_uri? permanent;

    # example.com -> www.example.com
    server_name example.com;
    rewrite ^ $scheme://www.example.com$request_uri? permanent;
}
https://github.com/kirpit/webstack/blob/master/sites/_nginx-example.com.conf cheer... by roy.enjoy - Nginx Mailing List - English
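An equivalent sketch of the same non-www to www redirect using return instead of rewrite, which is the form usually preferred for a whole-server redirect; the domain is a placeholder:

    server {
        listen 80;
        server_name example.com;                              # bare domain only
        return 301 $scheme://www.example.com$request_uri;     # permanent redirect to www
    }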
Yeah, your uwsgi instance is not working, or the socket is simply not there. As for the application-agnostic configuration: you basically have to pass some parameters about the current request to your upstreams and then handle those params there. There is a similar example in the nginx documentation: http://wiki.nginx.org/Configuration#Python_via_uWSGI Go dig into the nginx and uwsgi config files... by roy.enjoy - Nginx Mailing List - English
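A minimal sketch of the kind of location block meant here, assuming the uWSGI instance listens on a Unix socket (the socket path and the extra parameter are illustrative):

    location / {
        include      uwsgi_params;              # passes the standard per-request parameters upstream
        uwsgi_pass   unix:/tmp/app.sock;        # assumed socket path; must match uwsgi's --socket
        uwsgi_param  UWSGI_SCHEME $scheme;      # example of an extra parameter the application can read
    }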
+1 for the websocket waiting list. On Tue, Dec 18, 2012 at 8:46 PM, Ian Hobson <ian.hobson@ntlworld.com> wrote: > On 17/12/2012 08:29, Maxim Konovalov wrote: > >> Hi Nick, >> >> On 12/16/12 11:58 AM, Nick Zavaritsky wrote: >> >>> Hi! >>> >>> According to the roadmap at http://trac.nginx.org/nginx/roadmap... by roy.enjoy - Nginx Mailing List - English
Hi, I'm trying to set up a non-blocking server backed by uwsgi/gevent, so I need to turn off uwsgi buffering. However, it seems quite impossible to get it working, even though the debug output says "46382#0: *1 http upstream process non buffered downstream", and I've started to think this might be a bug. I've tried both v1.2.5 and v1.2.4. Here the o... by roy.enjoy - Nginx Mailing List - English
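A minimal sketch of the unbuffered setup being attempted, assuming a uWSGI upstream on a Unix socket (the path is an assumption):

    location / {
        include          uwsgi_params;
        uwsgi_pass       unix:/tmp/app.sock;    # assumed socket path
        uwsgi_buffering  off;                   # stream the upstream response instead of buffering it
    }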