# HG changeset patch
# User Gunnlaugur Thor Briem <gunnlaugur@gmail.com>
# Date 1413305660 0
#      Tue Oct 14 16:54:20 2014 +0000
# Node ID 3674e10a9e22a622998b65badfe01da34579bb65
# Parent  2096ecf6de02bc9e8ae920c45c59bf6a4e2e38fb
Clarify meaning of limit_conn in SPDY connections

Be clear about limit_conn applying to concurrent requests, not connections. The distinction matters for SPDY c…

by gthb - Nginx Development
Hello,

Because uwsgi_cache_key has no default value (or rather, has the empty string as its default value), a configuration with uwsgi_cache set but uwsgi_cache_key not set behaves in a way that is very unlikely to be desired: nginx caches the first publicly cacheable response it gets from upstream (for any request), and serves that cached response to *any* request mapping to the same location.

by gthb - Nginx Mailing List - English
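A minimal sketch of the remedy the post above implies: set uwsgi_cache_key explicitly so that distinct requests map to distinct cache entries. The zone name, paths, and upstream name here are illustrative assumptions, not taken from the original post.

```nginx
http {
    uwsgi_cache_path /var/cache/nginx/uwsgi_cache keys_zone=app_cache:10m;

    server {
        location / {
            include uwsgi_params;
            uwsgi_pass backend;  # assumed upstream name

            uwsgi_cache app_cache;
            # Without this directive the key defaults to the empty string,
            # so every request shares one cache entry -- the surprising
            # behavior described above.
            uwsgi_cache_key $scheme$host$request_uri;
        }
    }
}
```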
Yep, works like a charm, thank you! And two consecutive ifs to strip two cookies works as well:

    set $stripped_cookie $http_cookie;

    if ($http_cookie ~ "(.*)(?:^|;)\s*sessionid=[^;]+(.*)$") {
        set $stripped_cookie $1$2;
    }
    if ($stripped_cookie ~ "(.*)(?:^|;)\s*csrftoken=[^;]+(.*)$") {
        set $stripped_cookie $1$2;
    }

Cheers,

Gulli

by gthb - Nginx Mailing List - English
Hi, is it possible to hide one request cookie (but not all of them, so proxy_set_header Cookie "" is not the way) when proxying to an upstream server? The use case is:

* website foo.com uses a hosted service on a subdomain, e.g. blog.foo.com hosted by Wordpress.com
* horror: MSIE will send all foo.com cookies to the subdomain too, leaking sessions (not just to Wordpress.com but to ev…

by gthb - Nginx Mailing List - English
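A sketch of the approach this thread converges on: copy the Cookie header into a variable, strip just the named cookie with a regex, and forward the rest. The cookie name, server name, and upstream address are assumptions for illustration.

```nginx
server {
    listen 80;
    server_name blog.foo.com;

    location / {
        # Copy the incoming Cookie header, then remove only the
        # sessionid cookie, keeping everything else.
        set $stripped_cookie $http_cookie;
        if ($http_cookie ~ "(.*)(?:^|;)\s*sessionid=[^;]+(.*)$") {
            set $stripped_cookie $1$2;
        }

        # Forward the remaining cookies to the upstream.
        proxy_set_header Cookie $stripped_cookie;
        proxy_pass http://hosted-blog-upstream;  # assumed upstream name
    }
}
```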
Hi, in a single server block listening on both 80 and 443 ssl, currently in production, I want to start redirecting all HTTP GET requests to HTTPS ... but keep serving non-GET requests on HTTP for a little while, so as not to bork form posts and such made by clients from pages loaded on HTTP before the change. This can probably be accomplished by either: (a) using the kludgy multi-conditi…

by gthb - Nginx Mailing List - English
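One way to express this GET-only redirect without a multi-condition if kludge is a map keyed on both scheme and method, so that only plain-HTTP GETs redirect and HTTPS traffic cannot loop. This is a sketch of that idea, not the configuration from the original thread; names and paths are assumed.

```nginx
map "$scheme:$request_method" $https_redirect {
    default    0;
    "http:GET" 1;
}

server {
    listen 80;
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key omitted for brevity
    server_name foo.com;

    # Redirect only GET requests arriving over plain HTTP; other methods
    # keep working on HTTP until clients have moved to HTTPS-served pages.
    if ($https_redirect) {
        return 301 https://$host$request_uri;
    }

    # ... rest of the existing server configuration ...
}
```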
Hi,

> Trivial workaround is to use "uwsgi_buffers 8 64k" instead.
> Or you may try the following patch:

Thank you! I tried the uwsgi_buffers workaround in production, and the patch in my reproduction setup, and indeed both seem to fix this problem; the request runs to completion with no memory growth.

> - your backend app returns data in very small chunks, thus there…

by gthb - Nginx Mailing List - English
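For context, the quoted workaround only changes the buffer geometry (fewer, larger buffers) in the location that proxies to the uwsgi backend. A sketch, with the location path and upstream name borrowed from the reproduction config later in this thread:

```nginx
location /api/ {
    include uwsgi_params;
    uwsgi_pass nginx-test.uwsgi;

    # Workaround from the thread: 8 buffers of 64k
    # instead of the 64 buffers of 8k that triggered the bug.
    uwsgi_buffers 8 64k;
}
```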
Hi, here's a minimal configuration where I can reproduce this:

    error_log debug.log debug;

    events {
        worker_connections 1024;
    }

    http {
        uwsgi_buffers 64 8k;

        upstream nginx-test.uwsgi {
            server 10.0.0.7:13003;
            least_conn;
        }

        server {
            listen 8080;
            server_name nginx-test.com;

            location /api/ {
                include u…

by gthb - Nginx Mailing List - English
Hi, I finally reproduced this, with debug logging enabled --- I found the problematic request in the error log preceding the kill signal, saying it was being buffered to a temporary file:

    2014/07/21 11:39:39 21182#0: *32332838 an upstream response is buffered to a temporary file /var/cache/nginx/uwsgi_temp/9/90/0000186909 while reading upstream, client: x.x.x.x, server: foo.com, request…

by gthb - Nginx Mailing List - English
> How do you track "nginx memory"?

What I was tracking was memory use per process name as reported by New Relic nrsysmond, which I'm pretty sure is RSS from ps output, summed over all nginx processes.

> From what you describe I suspect that disk buffering occurs (see
> http://nginx.org/r/uwsgi_max_temp_file_size), and the number you
> are looking at includes the size…

by gthb - Nginx Mailing List - English
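The directive referenced above controls how much of an upstream response nginx may spool to a temporary file when its in-memory buffers fill up. A sketch of the two common settings; the location path, upstream name, and the 64m cap are illustrative assumptions.

```nginx
location /api/ {
    include uwsgi_params;
    uwsgi_pass nginx-test.uwsgi;

    # 0 disables temp files entirely: once the in-memory buffers are
    # full, nginx reads from upstream only as fast as the client reads.
    uwsgi_max_temp_file_size 0;

    # ...or cap the temp file size instead of disabling buffering:
    # uwsgi_max_temp_file_size 64m;
}
```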
Hi,

Several times recently, we have seen our production nginx memory usage flare up a hundred-fold, from its normal ~42 MB to 3-4 GB, for 20 minutes to an hour or so, and then recover. There is no spike in the number of connections, just in memory use, so whatever causes this, it does not seem to be an increase in concurrency. The obvious thing to suspect is our app's newest change, whi…

by gthb - Nginx Mailing List - English