This is the entry in nginx.conf that uses proxy_cache. I don't see any option here to configure the hashing algorithm: location /nginx-picture { internal; proxy_buffering on; proxy_cache media; proxy_cache_key $uri$args; proxy_cache_valid 200 43200s; proxy_ignore_headers Expires; proxy_ignore_headers Cache-Control; add_header X-Cache-Status $upstream_cache_s… - by kirti maindargikar - Nginx Mailing List - English
Hi, We are using nginx 1.10.3 in FIPS mode. As discussed above, we already have FIPS enabled on RHEL and we have recompiled nginx with OpenSSL FIPS. However, we still see that nginx uses the MD5 algorithm (which is not allowed in FIPS mode) when we use proxy_cache to cache pictures. It looks like nginx uses an MD5 hash to create the name of the cached image file. As given in this link http… - by kirti maindargikar
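The MD5 naming mentioned above can be illustrated with a short sketch. This is not nginx's code, only a Python approximation of the documented on-disk layout: the cache file name is the hex MD5 digest of the cache key ($uri$args in the config above), and with levels=1:2 the two subdirectory names are taken from the tail of that digest. The root path and key below are made-up examples.

```python
import hashlib

def cache_file_path(root: str, key: str, levels=(1, 2)) -> str:
    """Approximate nginx's cache file layout for a given proxy_cache_key."""
    digest = hashlib.md5(key.encode()).hexdigest()  # nginx hashes keys with MD5
    parts, pos = [], len(digest)
    for n in levels:
        # levels=1:2 -> first dir is the last hex char, second dir the
        # two chars before it (subdirectories come from the digest's tail)
        parts.append(digest[pos - n:pos])
        pos -= n
    return "/".join([root, *parts, digest])

# Hypothetical cache zone path and request URI:
print(cache_file_path("/var/cache/nginx/media", "/pictures/logo.png"))
```

This also shows why MD5 surfaces in FIPS audits even when no TLS cipher uses it: the digest is only a file-naming scheme, not a security primitive.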
Hello! On Wed, Jun 19, 2019 at 10:39:45AM -0700, Roger Fischer wrote: > I am using NGINX (1.17.0) as a reverse proxy with cache. I want > the cache to be updated even when the client closes the > connection before the response is delivered to the client. > > Will setting proxy_ignore_client_abort to on do this? When caching is enabled, nginx will ignore connection close by th… - by Maxim Dounin
Hi Patrick, This is the nginx config; do you think I should use another method, like auth? user www; worker_processes auto; pid /var/run/nginx.pid; worker_rlimit_nofile 1048576; events { worker_connections 1024; } http { include mime.types; default_type text/html; log_format custom_cache_log '$remote_addr - $remote_user [$time_local] ' '… - by Andrew Andonopoulos
On 2019-05-15 07:10, flierps wrote: > Yes, upstream behaves as you would expect. > > Right now nginx proxy_cache_valid is set to 1 second. After that second nginx > revalidates with upstream, and upstream will respond with 304 if applicable. > > I just do not want nginx to serve from cache during that second. It always > needs to revalidate. Provided that: a) the servers are time-sy… - by Patrick
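The setup being discussed can be sketched as follows; the zone and upstream names are placeholders, not taken from the thread. With a short validity plus proxy_cache_revalidate, stale entries are refreshed with conditional requests, and a 304 from upstream avoids re-downloading the body:

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache media;
    proxy_cache_valid 200 1s;    # entries go stale after one second
    proxy_cache_revalidate on;   # stale entries are revalidated upstream with
                                 # If-Modified-Since / If-None-Match; a 304
                                 # refreshes them without a full re-download
}
```

Note this still serves from cache during the one-second window, which is exactly the behaviour the poster wants to avoid; the reply's point about time synchronization matters because the expiry window is computed from server clocks.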
nginx version: nginx/1.10.3 uname: Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux VPS: Linode $5 I set up about 900 subsites like this: proxy_cache_path /var/cache/nginx/abc.com/aaa levels=1:2 use_temp_path=off keys_zone=aaa.abc.com:64k inactive=8h max_size=128m; proxy_cache_path /var/cache/nginx/abc.com/bbb levels=1:2 use_temp_path=off keys_zone=bbb.abc.com:… - by 村长
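With roughly 900 near-identical proxy_cache_path lines, a frequently used alternative (sketched here with placeholder names and sizes) is a single shared cache zone with $host in the cache key, so entries from different subsites cannot collide:

```nginx
proxy_cache_path /var/cache/nginx/abc.com levels=1:2 use_temp_path=off
                 keys_zone=abc.com:64m inactive=8h max_size=100g;

server {
    listen 80;
    server_name *.abc.com;

    location / {
        proxy_cache abc.com;
        proxy_cache_key $host$uri$is_args$args;  # $host keeps subsites separate
        proxy_pass http://backend;
    }
}
```

The trade-off is that max_size and inactive then apply to the pool as a whole rather than per subsite.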
Hello, nginx writes the response from a proxy to disk, e.g. [...] Server: nginx Date: Mon, 11 Mar 2019 23:23:28 GMT Content-Type: image/png Content-Length: 45360 Connection: close Expect-CT: max-age=0, report-uri="https://openstreetmap.report-uri.com/r/d/ct/reportOnly" ETag: "314b65190a8968893c6c400f29b13369" Cache-Control: max-age=126195 Expires: Wed, 13 Mar 2019 10:26:43 GMT… - by Manuel
I am wondering if nginx will ever support caching without buffering responses? Buffering the full response before sending the data out to the client increases the first-byte latency (aka TTFB). In a perfect world, if nginx could stream the data to the cache file and to the client simultaneously, that would solve the TTFB issues. From experience I know that Squid follows this methodology. I am curious why… - by loopback_proxy
Hi all, I would like to set up nginx as a caching reverse proxy, but with explicit requests in the URL and rewriting of all subsequent requests. Don't know if it really counts as a reverse proxy or whether this is understandable, so an example ;) For an original URL like https://org.url.baz/user/repo/foo I would like to be able to cache all requests through nginx running at my.domain.foo but with an ex… - by Thomas Hartmann
Hello! Thanks again for the pointers. I have caching enabled, and the purpose of this is to set different expiry times based on the request (if it's cacheable). So I have 3 locations: 1 for the frontpage, 1 for dynamic pages and another for static content. I can't use your example though, because it will ignore those headers even for requests which shouldn't be cached, hence the $skip_cache variable c… - by Andrei
Hello! On Tue, Jan 08, 2019 at 09:55:30AM +0200, Andrei wrote: > Is there a way to conditionally use proxy_ignore_headers? I'm trying to > only ignore headers for requests which have $skip_cache = 0, for example If you want different proxy_ignore_headers settings for different requests, you have to use different location{} blocks for these requests. You can do so either by using distinc… - by Maxim Dounin
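The advice above can be sketched as two location{} blocks, one per policy; the paths, zone name, and upstream below are assumptions for illustration:

```nginx
# Cacheable requests: the origin's Expires/Cache-Control are ignored here.
location /static/ {
    proxy_pass http://backend;
    proxy_cache app;
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_valid 200 30d;
}

# Everything else: origin headers are honored, nothing is ignored.
location / {
    proxy_pass http://backend;
    proxy_cache app;
}
```

proxy_ignore_headers does not take variables, so splitting by location is the supported way to make it conditional.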
Hi, On Thu, Dec 06, 2018 at 04:01:36PM +0300, Roman Arutyunyan wrote: [..] > This should solve the issue: > > location ~ /test/($<name>regular|expression)$ { > proxy_pass http://127.0.0.1:8010/test/$name; Sorry, the right syntax is of course this: location ~ /test/(?<name>regular|expression)$ { proxy_pass http://127… - by Roman Arutyunyan
Hello Richard, On Tue, Dec 04, 2018 at 06:57:15PM +0100, Richard Stanway via nginx wrote: > Hello, > I'm running into an issue where a proxied location with a regular > expression match does not correctly update the cache when using > proxy_cache_background_update. The update request to the backend seems > to be missing the captured parameters from the regex. I've created a > sm… - by Roman Arutyunyan
Hello, I'm running into an issue where a proxied location with a regular expression match does not correctly update the cache when using proxy_cache_background_update. The update request to the backend seems to be missing the captured parameters from the regex. I've created a small test case that demonstrates this in nginx 1.15.7. Hopefully I'm not missing anything; I checked the docs and didn't s… - by Richard Stanway via nginx
Hi Francis, I sent the wrong snippet; the correct one uses $upstream_http_content_type, as can be seen below. Basically, whenever I use "proxy_cache_bypass $no_cache;" it impacts the value of "map $upstream_http_content_type $no_cache"... I don't understand the reason. Thanks for any suggestions. http { include /etc/nginx/mime.types; default_type applica… - by Jorge Pereira
On Thu, Nov 29, 2018 at 09:51:13PM -0200, Jorge Pereira wrote: Hi there, > I am using nginx/1.12.0 and I am trying to use the below config, > but the below "map" by "$upstream_http_content_type" always > matches the default value "1". But if I remove "proxy_cache_bypass" > then the map works; therefore, I need the "proxy_cach… - by Francis Daly
Hi, I am using nginx/1.12.0 and I am trying to use the below config, but the below "map" by "$upstream_http_content_type" always matches the default value "1". But if I remove "proxy_cache_bypass" then the map works; therefore, I need the "proxy_cache_bypass" capability. http { include /etc/nginx/mime.types; default_type… - by Jorge Pereira
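A likely explanation (offered here as a reading of the thread, which is truncated): proxy_cache_bypass is evaluated before the request is sent upstream, when $upstream_http_content_type is still empty, so the map falls through to its default of 1. proxy_no_cache, by contrast, is evaluated when the response is about to be saved, after the upstream headers exist. A sketch with placeholder names:

```nginx
map $upstream_http_content_type $no_cache {
    default    1;
    ~^image/   0;   # cache only image responses (illustrative rule)
}

server {
    location / {
        proxy_pass http://backend;
        proxy_cache app;
        # Evaluated after the upstream response arrives, so the
        # $upstream_* variables are populated at this point.
        proxy_no_cache $no_cache;
    }
}
```

If bypassing cached entries at request time is also needed, the bypass condition has to be based on request-time data (URI, headers, cookies), not on $upstream_* variables.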
Hi Lucas, On Wed, Nov 14, 2018 at 06:50:23PM +0000, Lucas Rolff wrote: > Hi Roman, > > I can confirm that indeed does fix the problem, thanks! > > I do wonder though, why not let nginx make the decision instead of relying on what the origin sends or does not send? nginx tries to be transparent and not introduce any changes in the response and behavior of the origin unless exp… - by Roman Arutyunyan
Hi Roman, I can confirm that indeed does fix the problem, thanks! I do wonder though, why not let nginx make the decision instead of relying on what the origin sends or does not send? Thanks! On 14/11/2018, 17.36, "nginx on behalf of Roman Arutyunyan" <nginx-bounces@nginx.org on behalf of arut@nginx.com> wrote: Hi, On Wed, Nov 14, 2018 at 02:36:10PM… - by Lucas Rolff
Hi, On Wed, Nov 14, 2018 at 02:36:10PM +0000, Lucas Rolff wrote: > Hi guys, > > I've been investigating why byte-range requests didn't work for files that are cached in nginx with proxy_cache. I'd simply do something like: > > $ curl -r 0-1023 https://cdn.domain.com/mymovie.mp4 > > What would happen was that the full length of the file would be returned, despite it being in the… - by Roman Arutyunyan
Hi guys, I've been investigating why byte-range requests didn't work for files that are cached in nginx with proxy_cache. I'd simply do something like: $ curl -r 0-1023 https://cdn.domain.com/mymovie.mp4 What would happen was that the full length of the file would be returned, despite it being in the cache already (I know that on the initial request you can't seek into the file). Now, after inv… - by Lucas Rolff
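One directive commonly raised in this situation (whether it was the exact fix here is cut off above) is proxy_force_ranges, which makes nginx honor Range requests even when the cached upstream response carried no Accept-Ranges header. A minimal sketch with placeholder names:

```nginx
location / {
    proxy_pass http://origin;
    proxy_cache media;
    proxy_force_ranges on;   # serve byte ranges from cached responses even
                             # if the origin never sent Accept-Ranges
}
```

Without it, nginx only serves ranges when the stored response indicates the origin supports them, which is why a fully cached file can still come back whole.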
Hello, On Wed, Oct 17, 2018 at 05:13:15AM -0400, drookie wrote: > Hello, > > I didn't find the answer in the documentation, but am I right, assuming from my > observation, that when proxy_cache is enabled for a location and the > client requests a file that isn't in the cache yet, nginx starts > transmitting this file only after it's fully received from the upstream? > B… - by Roman Arutyunyan
Hello, I didn't find the answer in the documentation, but am I right, assuming from my observation, that when proxy_cache is enabled for a location and the client requests a file that isn't in the cache yet, nginx starts transmitting this file only after it's fully received from the upstream? Because I'm seeing lags equal to the request_time from the upstream. If I'm right, is there… - by drookie
Hi, On 2018-09-25 20:32, Lahiru Prasad wrote: > What is the best way to cache POST requests in nginx? I'm familiar > with using the redis module to cache GET requests. But is it possible to > use the same for POST? It's possible to cache POST requests, but it's generally not something you want to do. POST data is in most cases more sensitive than GET data, for instance, l… - by Daniël Mostertman via nginx
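For completeness, and keeping the reply's caveat in mind, caching POST is possible via proxy_cache_methods. The sketch below uses placeholder names; note that the cache key must include the body, or different POST bodies sent to the same URL would collide on one cache entry:

```nginx
location /api/search {
    proxy_pass http://backend;
    proxy_cache app;
    proxy_cache_methods POST;              # POST is not cached by default
    proxy_cache_key "$uri|$request_body";  # distinguish requests by body
    client_max_body_size 8k;               # keep cache keys bounded
    proxy_cache_valid 200 10m;
}
```

GET and HEAD are always implicitly in the proxy_cache_methods list, so this adds POST rather than replacing them.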
On 09/18/2018 02:55 AM, Maxim Dounin wrote: > Hello! > > On Tue, Sep 18, 2018 at 12:10:22AM +0200, Pierre Couderc wrote: > >> I wrongly used a 301 redirect.... >> >> I have corrected it now, but the redirect remains. >> >> In no particular order: >> >> - There are no log lines in the access log corresponding to the >> request… - by Pierre Couderc
Hello! On Tue, Sep 18, 2018 at 12:10:22AM +0200, Pierre Couderc wrote: > I wrongly used a 301 redirect.... > > I have corrected it now, but the redirect remains. > > I use wget: > > nous@pcouderc:~$ wget https://www.ppp.fr > --2018-09-17 23:52:44-- https://www.ppp.fr/ > Resolving www.ppp.fr (www.ppp.fr)... 2a01:e34:eeaf:c5f0::fee6:854e, > 78.234.252.95 > C… - by Maxim Dounin
Hi Lucas, Thank you for this. Gems all over. I didn't know curl had --resolve. This is more of a generic question: how does one ensure cache consistency on all edges? Do people resort to a combination of expiry + background update + stale responding? What if one edge and the origin were updated to the latest, and I now want all the other 1000 edges updated within a minute, but the content expi… - by Quintin Par
> The cache is pretty big and I want to limit unnecessary requests if I can. 30 GB of cache and ~400k hits isn't a lot. > Cloudflare is in front of my machines and I pay for load balancing, firewall, Argo among others. So there is a cost per request. It doesn't matter if you pay for load balancing, firewall, Argo etc; implementing a secondary caching layer won't increase your… - by Lucas Rolff
Hi Peter, Here are my stats for this week: https://imgur.com/a/JloZ37h . The Bypass portion is only because I was experimenting with some cache-warmer scripts. This is primarily a static website. Here's my URL hit distribution: https://imgur.com/a/DRJUjPc If three people make the same request, they get identical content. No personalization. The pages are cached for 200 days and inactive in pr… - by Quintin Par
Quintin, Are most of your requests for dynamic or static content? Are the requests clustered such that there are a lot of requests for a few (between 5 and 200, say) URLs? If three different people make the same request, do they get personalized or identical content returned? How long are the cached resources valid for? I have seen layered caches deliver enormous benefit both in terms of performance a… - by Peter Booth via nginx