What is a reasonable value for the upstream zone size? I'm just shooting in the dark with 64k right now, running 64-bit Linux. The official NGINX documentation does not elaborate on it, and I can't find anything useful on Google.

    upstream backends {
        zone example_zone 64k;
        keepalive 8;
        server 10.20.30.2 max_fails=3 fail_timeout=30s;
    }

by justink101 - Nginx Mailing List - English
Thanks for linking, but nginx-module-vts seems like overkill and I'm concerned about performance. Essentially we are building a product that charges by egress bandwidth and are looking for a way to track it at the NGINX level. Digging a bit further, it seems like using https://www.nginx.com/blog/launching-nginscript-and-looking-ahead/ might be a good solution. Anybody tried this? Anybody have a s…

by justink101 - Nginx Mailing List - English
Is there a way to measure and store the amount of egress bandwidth, in GB, that a given server{} block uses over a certain number of days? It needs to be somewhat performant. Using NGINX Unit or Lua is possible, I just have no idea how to implement it.

by justink101 - Nginx Mailing List - English
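One low-overhead way to do this (a sketch, not something from the thread; the log path, server name, and log format name are made up) is to write only $bytes_sent to a dedicated per-server access log and aggregate it offline:

    # $bytes_sent is the total sent to the client, headers included
    # ($body_bytes_sent would count only the response body)
    log_format bandwidth '$time_iso8601 $bytes_sent';

    server {
        server_name app.example.com;
        access_log /var/log/nginx/app.example.com-bandwidth.log bandwidth;
        ...
    }

Summing it for a date range is then a one-liner, e.g. awk '{ s += $2 } END { printf "%.2f GB\n", s / 1e9 }' over the log file.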
Here is the configuration:

    http {
        limit_conn_zone $binary_remote_addr zone=limitapinoauth:16m;
        limit_conn_zone $remote_user zone=limitapi:32m;

        map $remote_user $limit_zone {
            default limitapi;
            ''      limitapinoauth;
        }

        map $remote_user $limit_number {
            default 100;
            ''      200;
        }
    }

    server {
        limit_conn $limi…

by justink101 - Nginx Mailing List - English
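The post is cut off right at the limit_conn line, but as far as I know limit_conn takes a literal zone name rather than a variable. If the goal is a different connection limit for authenticated and unauthenticated clients, one sketch (my assumption, reusing the limits from the maps above) leans on the fact that connections with an empty key are not counted against a zone:

    # Authenticated clients: key is $remote_user (empty => not counted in this zone)
    limit_conn_zone $remote_user zone=limitapi:32m;

    # Unauthenticated clients: give them a key only when $remote_user is empty
    map $remote_user $anon_addr {
        default "";
        ""      $binary_remote_addr;
    }
    limit_conn_zone $anon_addr zone=limitapinoauth:16m;

    server {
        limit_conn limitapi       100;
        limit_conn limitapinoauth 200;
    }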
Arg, sorry for the typos. I really wish this forum allowed edits.

by justink101 - Nginx Mailing List - English
If I want to include all config files within a directory, and all child directories, what is the syntax? Is it still:

    include /etc/nginx/*.conf;

or is it:

    include /etc/nginx/**/*.conf;

by justink101 - Nginx Mailing List - English
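For what it's worth, my understanding (not confirmed in the thread) is that nginx expands include wildcards with glob(), which has no recursive ** pattern, so each directory level needs its own include line (the conf.d layout here is just an example):

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf.d/*/*.conf;    # add one line per level of subdirectories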
If we have multiple server blocks binding on https using SPDY, reuseport, and deferred, nginx fails to start, complaining that the port is already bound:

    server {
        listen 443 deferred ssl spdy reuseport;
        server_name app.foo.com;
        ...
    }

    server {
        listen 443 deferred ssl spdy reuseport;
        server_name frontend.bar.com;
        ...
    }

What is the behavior then if we change to:

    se…

by justink101 - Nginx Mailing List - English
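A sketch of the usual shape of the fix (my guess at what the second config was going to be): the socket-level listen parameters such as deferred, backlog, and reuseport may only be specified once per address:port pair, so keep them on a single listen directive and use a plain listen everywhere else:

    server {
        listen 443 ssl spdy deferred reuseport;
        server_name app.foo.com;
        ...
    }

    server {
        listen 443 ssl spdy;
        server_name frontend.bar.com;
        ...
    }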
Hello, I saw this logged in the error log on our NGINX Plus (nginx/1.7.11, nginx-plus-extras-r6-p1) load balancer. Any ideas?

    2015/09/08 14:31:02 2399#0: *452322 zero size buf in output t:0 r:0 f:1 0000000000000000 0000000000000000-0000000000000000 0000000002F51428 0-0 while sending request to upstream

by justink101 - Nginx Mailing List - English
We use a dynamic value for access logs:

    access_log /var/log/nginx/domains/$host/access.log main;

However, if the $host directory does not exist under /var/log/nginx/domains, nginx fails with an error creating the access log. Is there a way to have nginx create the $host directory automatically instead of failing? It seems like this should be the default behavior.

by justink101 - Nginx Mailing List - English
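As far as I know nginx will not create log directories on its own. One workaround (a sketch with hypothetical host names, and it assumes the listed directories are created ahead of time) is to map $host onto a fixed set of directory names with a catch-all default, so an unexpected Host header can never point at a missing path:

    map $host $log_dir {
        default        catchall;
        app.foo.com    app.foo.com;
        api.foo.com    api.foo.com;
    }

    access_log /var/log/nginx/domains/$log_dir/access.log main;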
According to the documentation, getting the value of $server_addr to set a response header makes a system call, and can impact performance negatively.

    set $ip $server_addr;

    server {
        location /health {
            add_header Backend $server_addr;
            return 200;
        }
    }

Would the following be a better solution, and eliminate the system call on every request?

by justink101 - Nginx Mailing List - English
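For reference, the nginx documentation says the system call behind $server_addr can be avoided when the listen directives specify an explicit address and use the bind parameter; a sketch with a made-up address:

    server {
        listen 10.20.30.2:80 bind;

        location /health {
            add_header Backend $server_addr;   # no per-request system call with an explicitly bound address
            return 200;
        }
    }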
Is it possible to undo a server-level deny all; inside a more specific location block? See the following:

    server {
        allow 1.2.3.4;
        allow 2.3.4.5;
        deny all;

        location / {
            location ~ ^/api/(?<url>.*) {
                # bunch of directives
            }
            location = /actions/foo.php {
                # bunch of directives
            }
            location = /act…

by justink101 - Nginx Mailing List - English
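A sketch of what I believe works here (not confirmed in the thread): allow/deny directives written inside a location replace the set inherited from the server level, so an explicit allow all; in the specific location undoes the server-wide deny:

    server {
        allow 1.2.3.4;
        allow 2.3.4.5;
        deny  all;

        location ~ ^/api/(?<url>.*) {
            allow all;    # replaces the inherited allow/deny rules for this location only
            # bunch of directives
        }
    }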
Thanks Igor. What if one of the servers listed in the upstream block should be reached over https and the other over http? How is this done using:

    upstream proxies {
        server foo.mydomain.io;
        server bar.mydomain.com;
    }

    proxy_pass https://proxies/api/;

Notice the proxy_pass defines only a single scheme (https).

by justink101 - Nginx Mailing List - English
Is it possible to specify multiple proxy_pass destinations from a single location block? Currently we have:

    location ~ ^/v1/?(?<url>.+)? {
        resolver 208.67.222.222 208.67.220.220 valid=300s;
        resolver_timeout 10s;
        proxy_intercept_errors off;
        proxy_hide_header Vary;
        proxy_set_header Host "foo.mydomain.io";
        proxy_set_header X-Real-IP $remote_addr;
        …

by justink101 - Nginx Mailing List - English
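Judging from the follow-up above, the suggestion was to point proxy_pass at an upstream group containing both servers; a sketch of that shape (host names copied from the thread, everything else assumed):

    upstream proxies {
        server foo.mydomain.io;
        server bar.mydomain.com;
    }

    location ~ ^/v1/?(?<url>.+)? {
        proxy_set_header Host "foo.mydomain.io";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass https://proxies/api/$url$is_args$args;
    }

With a named upstream the resolver directives are no longer needed, since the server names are resolved when the configuration is loaded.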
Any plans to support Google QUIC [1] in nginx?

[1] http://en.wikipedia.org/wiki/QUIC

by justink101 - Nginx Mailing List - English
I just started getting 502 Bad Gateway, and in the nginx error log I see:

    upstream sent invalid status "" while reading response header from upstream

When I restarted php-fpm, everything went back to normal and working. Did something with php-fpm just go wrong? I don't see anything in the php-fpm error logs. What typically causes this, and what are some fixes to prevent it from happen…

by justink101 - Php-fpm Mailing List - English
Setting:

    proxy_ssl_verify_depth 2;

fixed the issue. Can somebody explain why this is needed and why the default setting is 1? I am using a standard wildcard SSL certificate from GoDaddy. Thanks

by justink101 - Nginx Mailing List - English
Sorry, the proxy_ssl_ciphers directive got cut off; in full it is:

    proxy_ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";

by justink101 - Nginx Mailing List - English
I am trying to use proxy_ssl_verify on, but I am getting back 502 Bad Gateway. When I look at the logs I see:

    2014/08/12 18:08:03 21007#0: *3 upstream SSL certificate verify error: (20:unable to get local issuer certificate) while SSL handshaking to upstream, client: XX.XXX.XXX.214, server: api.mydomain.io, request: "GET /v1 HTTP/1.1", upstream: "https://XXX.XXX.XXX.150:443/api/…

by justink101 - Nginx Mailing List - English
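Putting the pieces of this thread together, the working setup presumably looked roughly like the following (the trusted CA bundle path is my assumption; the depth of 2 is the value the post above says fixed it, which makes sense for a wildcard certificate issued through an intermediate CA):

    location /v1 {
        proxy_ssl_verify              on;
        proxy_ssl_verify_depth        2;
        proxy_ssl_trusted_certificate /etc/nginx/ssl/ca-bundle.crt;
        proxy_pass                    https://foo.mydomain.io/api/;
    }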
Starting php-fpm:

    [07-Jul-2014 17:52:33] WARNING: listen.backlog(0) was too low for the ondemand process manager. I updated it for you to 128

Well, that is unfortunate; I'm not sure why using ondemand requires a backlog of 128. Essentially this php-fpm pool runs jobs and then the workers automatically exit, so they spawn, run, and die.

    pm = ondemand
    pm.max_children = 100
    pm.process_idle…

by justink101 - Nginx Mailing List - English
Maxim, if I set the php-fpm pool listen.backlog to 0, will this accomplish what I want, i.e. fill up the workers and, once all the workers are in use, fail further requests?

by justink101 - Nginx Mailing List - English
I have a php-fpm pool of 6 workers. There are long-running requests being sent, so I have the following fastcgi directives set:

    fastcgi_connect_timeout 15;
    fastcgi_send_timeout 1200;
    fastcgi_read_timeout 1200;

However, right now, if the php-fpm pool of workers is full, a request waits the full 20 minutes. I'd like requests to fail with a 502 status code if the php-fpm pool of workers…

by justink101 - Nginx Mailing List - English
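The rest of this thread heads toward shrinking the FastCGI accept queue so that requests beyond the busy workers are refused quickly rather than queued for the full timeout; a rough sketch of that idea (the exact values are assumptions, and as the warning quoted above shows, php-fpm may silently raise very small backlog values for some process managers):

    ; php-fpm pool: few workers, tiny accept queue
    pm = static
    pm.max_children = 6
    listen.backlog = 1

On the nginx side, a short fastcgi_connect_timeout (say 5 instead of 15) then surfaces the refused or stuck connection as an error quickly instead of holding the client for 20 minutes.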
How can I read a POST request body which is JSON and get a property? I need to read a property and use it as a variable in proxy_pass. Pseudo code:

    $post_request_body = '{"account": "test.mydomain.com", "more-stuff": "here"}';
    // I want to get $account = "test.mydomain.com";
    proxy_pass $account/rest/of/url/here;

by justink101 - Nginx Mailing List - English
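Plain nginx config can't parse a JSON request body, so this sketch assumes the lua-nginx-module (OpenResty) and its cjson library are available; everything below except the "account" property name is made up:

    location /route-by-account {
        resolver 8.8.8.8;          # required because proxy_pass uses a variable
        set $account "";

        access_by_lua_block {
            ngx.req.read_body()
            local body = ngx.req.get_body_data()
            if body then
                local ok, data = pcall(require("cjson").decode, body)
                if ok and type(data) == "table" and data.account then
                    ngx.var.account = data.account
                end
            end
            if ngx.var.account == "" then
                ngx.exit(ngx.HTTP_BAD_REQUEST)   -- body had no usable "account" property
            end
        }

        proxy_pass https://$account/rest/of/url/here;
    }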
Is it possible, using nginx, to essentially look at the HTTP Referer header and, if it is set to a specific value and the page is index.html or /, redirect to a custom landing page? For example:

    # Pseudo code
    if ($page = "index.html" and $http_referer ~* (www\.)?amazon.com.*) {
        rewrite ^ "our-amazon-landing-page.html" permanent;
    }

by justink101 - Nginx Mailing List - English
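nginx's if can't test two conditions at once, so one sketch (the landing-page path is taken from the post, the rest is assumed) folds the Referer test into a map and hangs the redirect off exact-match locations for / and /index.html:

    map $http_referer $from_amazon {
        default                            0;
        "~*^https?://(www\.)?amazon\.com"  1;
    }

    server {
        ...

        location = / {
            if ($from_amazon) {
                return 301 /our-amazon-landing-page.html;
            }
            # normal handling of / otherwise
        }

        location = /index.html {
            if ($from_amazon) {
                return 301 /our-amazon-landing-page.html;
            }
        }
    }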
I noticed that the proxied request's response headers are being thrown away in our 404 block. Note that the proxied request is returning 404. If I try to fetch a header that I know is being returned by the proxy, it is undefined.

    location @four_o_four {
        internal;
        more_set_headers "X-Host: $sent_http_x_host";
        return 404 '{ "error": { "status_code": …

by justink101 - Nginx Mailing List - English
Thanks for the replies, and sorry about the delay in responding. This is what we ended up using:

    error_page 404 = @four_o_four;

    location @four_o_four {
        internal;
        more_set_headers "X-Host: web4.ourdomain.com";
        more_set_headers "Content-Type: application/json";
        return 404 '{"status":"Not Found"}';
    }

by justink101 - Nginx Mailing List - English
How can I return a custom JSON body on 404, instead of the default HTML of:

    <html>
    <head>
        <title>404 Not Found</title>
    </head>
    <body bgcolor="white">
        <center>
            <h1>404 Not Found</h1>
        </center>
        <hr>
        <center>nginx</center>

by justink101 - Nginx Mailing List - English
Lee, we switched to using memcached for sessions and this helped, but we are still seeing blocking, though for less time. If we open two tabs, fire an ajax request in the first page that takes 20+ seconds to run, and then refresh in the second tab, the page in the second tab still blocks while loading, but now, instead of waiting the entire 20+ seconds for the first tab's ajax request to finish, it only blocks around 8 sec…

by justink101 - Nginx Mailing List - English
Hi Lee. Yes, we are using PHP. Could we simply call session_write_close() immediately after we open and verify the session details? I'd like to avoid adding another piece of infrastructure (Redis) on every web server.

by justink101 - Nginx Mailing List - English
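For what it's worth, a minimal PHP sketch of that idea (the session key is a placeholder): calling session_write_close() right after the session has been read releases the per-session lock, so the long ajax request no longer serializes the page load in the other tab, provided nothing later in the request writes to $_SESSION:

    <?php
    session_start();                    // acquires the session lock

    // read/verify whatever is needed while the lock is held
    $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

    session_write_close();              // release the lock so concurrent requests are not blocked

    // ... long-running work continues here ...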
Maxim, even after disabling SPDY and restarting nginx, we are still seeing the same behavior, with requests blocking if another single request is outstanding in another tab.

by justink101 - Nginx Mailing List - English
I am seeing super strange behavior and I am absolutely stumped. If I open up two tabs in Google Chrome (34) and in the first refresh our application (foo.ourapp.com), it makes an ajax request (via jQuery) that takes 20 or so seconds to complete. Then, if I hit refresh on foo.ourapp.com in the other new tab, the second tab blocks, waiting until the ajax request in the first tab finishes. Inspecting…

by justink101 - Nginx Mailing List - English