We're using Nginx as an ingress controller in a Kubernetes environment. Nginx uses a DNS service inside k8s with a single service IP and multiple DNS pods behind it. The translation from the service IP to any of the DNS pods is done via DNAT rules. The problem: Nginx establishes a UDP 'connection' from, for example, local IP 1.1.1.1 source port 12345 towards service IP 1.2.3.4; the DNAT tran…
by jeanpaul - Nginx Mailing List - English
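A minimal sketch of the resolver setup being described, with placeholder addresses and names (the cluster DNS service IP and backend hostname are assumptions, not from the post); the `valid=` parameter caps how long answers are reused, so nginx re-queries the DNS service regularly instead of relying on one long-lived UDP flow:

```nginx
# 10.96.0.10 is a placeholder for the kube-dns/CoreDNS service IP.
resolver 10.96.0.10 valid=10s ipv6=off;

server {
    listen 80;

    location / {
        # Using a variable in proxy_pass forces re-resolution at request
        # time instead of only once at startup.
        set $backend "app.default.svc.cluster.local";
        proxy_pass http://$backend;
    }
}
```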
Hi! I'm using Nginx as a proxy to Apache. I noticed some messages in my error.log that I cannot explain: 27463#0: *125209 no live upstreams while connecting to upstream, client: x.x.x.x, server: www.xxx.com, request: "GET /xxx/ HTTP/1.1", upstream: "http://backend/xxx/", host: "www.xxx.com" The errors appear after Apache returned some 502 errors; however, in the co…
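"no live upstreams" is logged when every server in the upstream group is currently marked unavailable. A minimal sketch of the parameters that control this behavior (addresses and the choice of failure conditions are assumptions, not from the post):

```nginx
upstream backend {
    # After max_fails failed attempts within fail_timeout, a server is
    # considered down for the next fail_timeout. If every server trips
    # this at once, nginx logs "no live upstreams".
    server 10.0.0.1:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # What counts as a failed attempt is set here; a 502 from Apache
        # only marks the server as failed if http_502 is listed.
        proxy_next_upstream error timeout http_502;
    }
}
```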
Hi, With help from the Naxsi mailing list I found that my idea is indeed not possible. Naxsi doesn't process subrequests, so that's why it didn't work as I expected. Changing this behavior seems to be on the roadmap. My workaround for now is to move the two rulesets into different server blocks in Nginx: server block 1, listening on port 8080, makes the decision to send the request to the strict o…
Hi, I have updated the config to use 'map' instead of the if-statements. That's indeed a better way. The problem however remains: the Naxsi main rules are in the http block. Config similar to:

map $geoip_country_code $ruleSetCC {
    default "strict";
    CC1     "relaxed";
    CC2     "relaxed";
}

location /strict/ {
    include /usr/local/nginx/…
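A sketch of the overall pattern being discussed, under the assumption (rule file names, upstream, and the internal rewrite are placeholders, not from the posts) that each ruleset lives in its own location and the map output picks which one handles the request:

```nginx
# Map the GeoIP country code to a ruleset name (GeoIP module assumed loaded).
map $geoip_country_code $ruleSetCC {
    default "strict";
    CC1     "relaxed";
    CC2     "relaxed";
}

server {
    listen 80;

    # Internally route each request to the location for its ruleset.
    location / {
        rewrite ^ /$ruleSetCC$uri last;
    }

    location /strict/ {
        include /usr/local/nginx/naxsi.rules.strict;   # placeholder path
        proxy_pass http://app-server/;
    }

    location /relaxed/ {
        include /usr/local/nginx/naxsi.rules.relaxed;  # placeholder path
        proxy_pass http://app-server/;
    }
}
```

Because proxy_pass ends with a URI part ("/"), the /strict/ or /relaxed/ prefix is stripped again before the request reaches the backend.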
Hi Aziz, True; this got lost during my copy-anonymize-paste process. The real config doesn't have this. Thanks so far, JP

On Sun, Nov 12, 2017 at 2:34 PM, Aziz Rozyev <arozyev@nginx.com> wrote:
> at least you're missing an or (|) operator between
> TRUSTED_CC_2 and TRUSTED_CC_3
>
> br,
> Aziz.
>
> On 12 Nov 2017, at 14:03…
Hi! I'm using Nginx together with Naxsi, so I'm not sure if this is the correct place for this post, but I'll give it a try. I want to configure two detection thresholds: a strict detection threshold for 'far away' countries, and a less strict set for local countries. I'm using a setup like:

location /strict/ {
    include /usr/local/nginx/naxsi.rules.strict;
    proxy_pass http://app-server/;
…
I think this solves the issue: http://hg.nginx.org/nginx/rev/9552758a786e Thanks, JP

On Wed, Mar 15, 2017 at 11:05 AM, Jean-Paul Hemelaar <hemelaar@desikkel.nl> wrote:
> Hi,
>
> I noticed a delay of approx. 200ms when the proxy_cache_background_update
> is used and Nginx sends stale content to the client.
>
> Current setup:
> - Apache webserver as backend sending a s…
Hi, I have a similar issue: http://mailman.nginx.org/pipermail/nginx/2017-March/053198.html I noticed (using tcpdump) that all data except the last packet is sent immediately. Can you verify if that's happening in your case as well? JP

On Wed, Apr 5, 2017 at 1:32 PM, IgorR <nginx-forum@forum.nginx.org> wrote:
> Hello,
>
> I'm trying to configure nginx to use proxy_cache_backgr…
Hi, I noticed a delay of approx. 200ms when proxy_cache_background_update is used and Nginx sends stale content to the client. Current setup:
- Apache webserver as backend sending a slow response: delay.php simply waits for 1 second: <?php usleep(1000000); ?>
- Nginx in front to cache the response, and to send stale content if the cache needs to be refreshed.
- wget sending a request…
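A minimal caching config for this kind of test, with placeholder names for the cache path, zone, and backend; the combination of proxy_cache_use_stale updating and proxy_cache_background_update is what makes nginx serve the stale copy while revalidating in a background subrequest:

```nginx
# Placeholder cache path and zone name.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;          # short TTL so entries go stale quickly
        proxy_cache_use_stale updating;    # serve stale while a refresh runs
        proxy_cache_background_update on;  # refresh in a background subrequest
        proxy_pass http://127.0.0.1:8080;  # placeholder backend (Apache)
    }
}
```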
Hi Maxim, I verified the patch and it seems to work! Thanks for your prompt solution on this. JP
Hi Maxim, I stripped down my configuration and removed 'unneeded' parts to reproduce. I'm able to reproduce it with the following settings:

location / {
    # Added to mitigate the issue. Removed for testing
    #rewrite ^/index.html$ / break;

    proxy_pass http://backends;
    proxy_next_upstream error timeout invalid_header;
    proxy_buffering on;
    proxy_connect_t…
Hi, The proxy_cache_key uses request parameters by default. As stated in http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key it uses $scheme$proxy_host$request_uri by default. $request_uri does contain the request parameters: http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_uri ($request_uri: full original request URI, with arguments). So a way…
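For example, a sketch of a cache key that ignores the query arguments by swapping $request_uri for $uri, which excludes the query string (the zone and upstream names are placeholders):

```nginx
location / {
    proxy_cache app_cache;                 # assumed cache zone name
    # The default key is $scheme$proxy_host$request_uri; using $uri
    # instead drops the "?foo=bar" arguments from the key.
    proxy_cache_key $scheme$proxy_host$uri;
    proxy_pass http://backend;             # assumed upstream name
}
```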