Thanks Maxim -- that's actually the page that led me to ask these questions. Since the content of that page is fairly general, I was hoping to get some more specific detail about how all the pieces are connected so that I could tune my NGINX hash map setup as well as possible. Thanks for any additional information you can provide!
by abstein2 - Nginx Mailing List - English
I was hoping someone could clarify how exactly map_hash_bucket_size and map_hash_max_size should be set and what impact they have on memory. For map_hash_bucket_size, the documentation says it should be a multiple of the processor's cache line size. Under what circumstances would it make sense, or be necessary, to move away from the default? For map_hash_max_size, is this just the maximum size…
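For context, here is a minimal sketch of how those two directives sit alongside a map block. The values are illustrative assumptions, not recommendations: map_hash_max_size bounds the number of entries the map's hash table can hold, while map_hash_bucket_size (a multiple of the CPU cache line size, commonly 32, 64, or 128) bounds how long individual keys can be before nginx refuses to build the table.

```nginx
# Example sizing for a map with a few thousand entries and longish
# hostname keys; both values are assumptions for illustration.
map_hash_max_size    4096;
map_hash_bucket_size 128;

map $http_host $backend_pool {
    default          upstream_default;
    example.com      upstream_a;
    www.example.com  upstream_a;
}
```

If nginx fails to start with a "could not build map_hash" error, the usual remedy is to raise one or both values until the table fits.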
Is there any way to limit the maximum size of an individual object in a proxy cache? Looking through the documentation ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html ) I'm not seeing anything directly related to that. I might be misunderstanding the proxy_temp_file_write_size or proxy_max_temp_file_size directives, but outside of limiting the entire cache size with proxy_cache_path, I…
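One workaround sometimes used for this (not a built-in per-object limit) is to skip caching any response whose Content-Length header exceeds a threshold, via a map feeding proxy_no_cache. This is a hedged sketch; the cache zone name, backend, and the ~10 MB cutoff are assumed example values, and responses sent with chunked encoding carry no Content-Length, so they would not be filtered.

```nginx
# Flag responses whose Content-Length has 8+ digits (>= 10,000,000 bytes).
map $upstream_http_content_length $too_big_to_cache {
    default        0;
    "~^[0-9]{8,}$" 1;
}

server {
    location / {
        proxy_pass     http://backend;       # assumed upstream name
        proxy_cache    my_cache;             # assumed cache zone
        proxy_no_cache $too_big_to_cache;    # evaluated per response
    }
}
```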
I'm trying to better wrap my head around the keepalive functionality in the upstream module, as when enabling keepalive I'm seeing little to no performance benefit using the FOSS version of nginx. My upstream block is: upstream upstream_test_1 { server 1.1.1.1 max_fails=0; keepalive 50; } With a proxy block of: proxy_set_header X-Forwarded-For $IP; proxy_set_header Host $http_host;…
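A minimal sketch of what a working keepalive setup like the one above usually needs: without proxy_http_version 1.1 and an emptied Connection header, nginx speaks HTTP/1.0 to the upstream and sends "Connection: close", so the idle pool is never reused and keepalive shows no benefit.

```nginx
upstream upstream_test_1 {
    server 1.1.1.1 max_fails=0;
    keepalive 50;                      # up to 50 idle connections kept open
}

server {
    location / {
        proxy_pass         http://upstream_test_1;
        proxy_http_version 1.1;        # keepalive requires HTTP/1.1 upstream
        proxy_set_header   Connection ""; # clear the default "close"
        proxy_set_header   Host $http_host;
    }
}
```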
Maxim, Thanks so much for clarifying. Just to make sure I'm understanding correctly, if I had something like this pseudo-config: upstream upstream1 { } upstream upstream2 { } upstream upstream3 { } upstream upstream4 { } upstream upstream5 { } server { server_name server1.com; proxy_pass http://upstream1; } server { server_name server2.com; proxy_pass http://upstream2; } server { serve…
I'm somewhat unclear about how the keepalive functionality works within the upstream module. My nginx install currently handles several hundred domains, all of which point to different origin servers. I imagine I can improve performance by enabling keepalive; however, the documentation says "The connections parameter sets the maximum number of idle keepalive connections to upstream server…
Every so often I see a handful of errors in my error log, such as: connect() failed (113: No route to host) upstream timed out (110: Connection timed out) upstream sent too big header while reading response header from upstream etc. In each case, when I log the $status variable in nginx, each just shows as a 502 error. Is there any way to retrieve what the actual error is (via a variable?…
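As far as I know, the low-level errno behind a 502 is not exposed as a variable, only the error_log carries it. What a custom log_format can capture is the upstream-side status, address, and timing next to the 502 the client saw, which at least distinguishes timeouts from refused connections. A sketch, with the log path as an assumed example:

```nginx
# Record upstream-side detail alongside the client-facing status.
log_format upstream_debug '$remote_addr "$request" status=$status '
                          'upstream=$upstream_addr '
                          'upstream_status=$upstream_status '
                          'upstream_time=$upstream_response_time';

access_log /var/log/nginx/upstream_debug.log upstream_debug;
```

A connect timeout typically shows $upstream_status as "-" with $upstream_response_time near the configured proxy_connect_timeout, while a "too big header" error shows a connection that was made but produced no usable status.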
Awesome -- thanks so much for the quick reply!
Is there any negative performance impact to chaining include directives in nginx? For example, are any of these worse than the others from a performance perspective? In nginx.conf: include domain_config_1.conf; include domain_config_2.conf; OR, in nginx.conf: include domain_configs.conf; and in domain_configs.conf: include domain_config_1.conf; include domain_config_2.conf;
The script is an ASPX script and, to my knowledge, it doesn't use sessions. I don't control the script, but I can't duplicate the behavior when running against the proxied server directly. It only occurs when going through the proxy. I don't believe sessions are the issue.
I'm having an issue where I proxy a long-running script and receive a 504 error when it exceeds my proxy_read_timeout setting. All of that's behaving normally -- what isn't behaving normally is that the next several requests I make to the domain via the same proxy code also return 504s after timing out, despite the fact that those requests should complete properly. The first script that runs takes…
Based on your post, I actually dug a little deeper, because there was nowhere in my Perl code that returned 1. After disabling most of the Perl, I was getting 499 errors, which made sense: the client was closing the connection. It looks like part of the issue is that, attached to the location, I have a post_action that runs another Perl method. This Perl method doesn't have a ret…
Sorry for the delay. I do think part of the issue is tied to the load balancer, since that connection is timing out (we set the timeout very low for testing purposes), but the load balancer terminating the connection doesn't explain why nginx returns a 001 status code, since the connection from the nginx box to the origin box shouldn't be affected by that. nginx -V: nginx version: nginx/1.2.3 bu…
I can't find documentation anywhere on what it means when nginx shows 001 as the value of $status in the access_log. I currently use nginx as a reverse proxy, and I get this error when uploading large files (2+ MB, though my client_max_body_size is 4 MB). Also worth noting, my log files show the following values: $upstream_status: - $upstream_response_time: - $request_completion: Right now…
I have e-mailed Yao Weibin, but wanted to give an update here regarding my findings. The issue appears to be linked to long strings of text unbroken by a newline. With my settings, if a line contains roughly 40k bytes without being broken by a newline, something occurs, whether gzip is on or off, that causes the gibberish to appear on the page. Thank you all for your input so far.
It looks like there's an issue with the newest revision of the module and nginx 1.2.3. When installed, whether gzip is on or off, the portion of the page that was previously missing/not transmitted now gets transmitted, but it isn't the actual page content. Instead it's gibberish with some of the raw nginx configuration mixed in. An example of the output: Xv±t™Xv±t™ ý ý
As the title says, I'm having an issue with gzipping proxied web pages after using subs_filter. It doesn't always happen, but it looks like whenever a page exceeds the size of one of my proxy_buffers I get an error in the error log saying: 18544#0: *490084 deflate() failed: 2, -5 while sending to client, client: xxx.xxx.xxx.xxx, server: www.test.com, request: "GET / HTTP/1.1", upstre…
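Since the failure coincides with pages outgrowing a single proxy buffer, one assumed mitigation worth testing is simply enlarging the proxy buffers so a filtered page fits without spilling to temp files mid-filter. This is a sketch under that assumption, not a confirmed fix; the buffer sizes, backend name, and subs_filter pattern are illustrative placeholders.

```nginx
location / {
    proxy_pass        http://backend;      # assumed upstream name
    proxy_buffer_size 64k;                 # example: raised from default
    proxy_buffers     16 64k;              # example: more/larger buffers
    subs_filter       'http://origin.example.com' 'https://www.test.com';
    gzip              on;
}
```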