Hello! On Sun, Apr 07, 2024 at 01:36:21PM +0200, Kirill A. Korinsky wrote: > Greetings, > > Let's assume that I would like to control behavior on the LB from the backend and force it to > cache only responses that have an X-No-Cache header with value NO. > > Nginx should cache a response with any code, if it has such a header. > > This works well until the backend is unavailable and ngin… - by Maxim Dounin - Nginx Mailing List - English
Greetings, let's assume that I would like to control behavior on the LB from the backend and force it to cache only responses that have an X-No-Cache header with value NO. Nginx should cache a response with any code, if it has such a header. This works well until the backend is unavailable and nginx returns a hardcoded 502 that doesn't have the control header, but such a response is cached anyway. Here is the conf… - by Kirill A. Korinsky
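A minimal sketch of the kind of backend-driven cache control described above. The header name X-No-Cache and its value NO come from the message; the map, zone name, and upstream name are assumptions:

```nginx
# Illustrative sketch: cache only responses carrying "X-No-Cache: NO".
# $upstream_http_x_no_cache is set from the backend response header.
map $upstream_http_x_no_cache $skip_cache {
    default 1;   # no (or different) header: do not store the response
    "NO"    0;   # backend explicitly allows caching
}

server {
    location / {
        proxy_pass        http://backend;   # assumed upstream name
        proxy_cache       my_zone;          # assumed zone name
        proxy_no_cache    $skip_cache;
        # Needed so responses with arbitrary status codes are cacheable:
        proxy_cache_valid any 10m;
    }
}
```

Note the pitfall the thread is about: a 502 generated by nginx itself (backend down) has no upstream headers at all, so cacheability then depends on how the empty header value is mapped and on proxy_cache_valid.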
Hi NGINX-users, I am running nginx version: nginx/1.25.3 (nginx-plus-r31-p1) on Rocky 9.3 in a lab, trying to get OIDC authentication working against Keycloak 23.0.7. Attached are the relevant files /etc/nginx.conf and the included /etc/nginx/conf.d files, most of which are from the nginx-openid-connect GitHub repo (https://github.com/nginxinc/nginx-openid-connect). Keycloak and nginx are running on t… - by Christopher Paul
I think I've found a bug (or I've just been staring at code too long) regarding proxy_cache_bypass and possibly proxy_no_cache. Essentially my config is something like the one below. At the top server level, caching only occurs for guests (no userid cookie set), while connections that have a userid cookie set are bypassed (and those requests are not saved to the cache either). I have a specific folder… - by Jas0n
Hello! On Fri, Nov 04, 2022 at 04:01:22PM +0100, basti wrote: > we have a website with some embedded content from YT. So the idea is to > set up a GDPR proxy. > > Setup: > > User Client -> example.com (embedded content media.example.com) -> YT > > So YT can only see the IP of media.example.com. > > What about cookies? > Can YT track the 'User Client'? … - by Maxim Dounin
Hello, we have a website with some embedded content from YT. So the idea is to set up a GDPR proxy. Setup: User Client -> example.com (embedded content media.example.com) -> YT So YT can only see the IP of media.example.com. What about cookies? Can YT track the 'User Client'? Something like this should be enough, I think: location /media/(.*)$ { proxy_pass https://media.example.co… - by basti
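A hedged sketch of the proxy location above with cookies stripped in both directions, so the upstream cannot set or read tracking cookies through the proxy. Note that a location matched by a pattern like (.*)$ needs the ~ regex modifier; all directive choices here are illustrative, not the thread's confirmed answer:

```nginx
# Sketch under the setup described above; values are assumptions.
location ~ ^/media/(.*)$ {
    proxy_pass https://media.example.com/$1;

    # Do not forward the client's cookies upstream:
    proxy_set_header  Cookie "";
    # Do not pass upstream cookies back to the client:
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers Set-Cookie;
}
```

Cookies are only one tracking channel; the upstream can still see the proxy's IP, the requested URLs, and any identifiers embedded in them.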
Hi there, my front end is nginx working as a reverse proxy for my backend. I was trying to cache image files only, but it doesn't seem to work at all: the $no_cache variable always outputs the default value "proxy", when it should be "0" when I visit image files. Here is my config: map $upstream_http_content_type $no_cache { > default proxy; > "~*image" 0; > } > proxy_… - by Drweb Mike
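A likely explanation, sketched with the map from the message: $upstream_http_* variables are only populated after a response has arrived from upstream, so a map over them works for proxy_no_cache (evaluated when the response is being saved) but not for proxy_cache_bypass (evaluated before the request is sent, when the variable is still empty and the map yields its default). The zone and upstream names below are assumptions:

```nginx
# Map from the message: any non-empty value other than "0" is
# "true" for proxy_no_cache, so default "proxy" means "don't store".
map $upstream_http_content_type $no_cache {
    default    proxy;   # non-image responses are not stored
    "~*image"  0;       # image/* responses are stored
}

location / {
    proxy_pass     http://backend;   # assumed
    proxy_cache    img_cache;        # assumed
    # Works here: checked after the upstream response arrives.
    proxy_no_cache $no_cache;
    # Would NOT work as intended: checked before the request is sent,
    # when $upstream_http_content_type is empty.
    # proxy_cache_bypass $no_cache;
}
```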
I should also add that currently this works really well for every distinct $request_uri via: location / { … error_page 500 502 503 504 /maintenance.html; … proxy_cache_valid 500 502 503 504 3m; } location = /maintenance.html { root /etc/nginx/static; } … - by Serban Teodorescu
Hello, I'd like to configure nginx to put up a standard maintenance page depending on the status code received from the upstream (currently for 502, 503 and 504, cached for 3 minutes). This works pretty simply and well, but I would like to ensure the maintenance page is served directly on all URLs. Currently the cache key contains $request_uri at the end, of course, to make sure t… - by Serban Teodorescu
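One common shape for this that sidesteps the cache key entirely: intercept upstream 5xx responses and serve a static page, so no per-URL error entry needs to be cached at all. This is a sketch, not the thread's confirmed resolution; the paths follow the messages above:

```nginx
# Serve a static maintenance page for upstream 5xx on every URL.
location / {
    proxy_pass http://backend;          # assumed upstream name
    # Required so error_page applies to upstream responses too:
    proxy_intercept_errors on;
    error_page 502 503 504 /maintenance.html;
}

location = /maintenance.html {
    root /etc/nginx/static;
    internal;   # not directly requestable by clients
}
```

The trade-off versus proxy_cache_valid on the error codes: every request during the outage still hits the (failing) upstream before the page is served, so the 3-minute error caching may still be worth keeping alongside this.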
Hello! On Wed, Jul 13, 2022 at 08:12:06PM +0400, Roman Arutyunyan wrote: > Hi, > > On Sun, Jul 10, 2022 at 11:35:48AM +0300, Maxim Dounin wrote: > > Hello! > > > > On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wrote: > > > > > I'm having an nginx instance where I utilise the nginx slice > > > module to slice upstream mp… - by Maxim Dounin
Hi, On Sun, Jul 10, 2022 at 11:35:48AM +0300, Maxim Dounin wrote: > Hello! > > On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wrote: > > > I'm having an nginx instance where I utilise the nginx slice > > module to slice upstream mp4 files when using proxy_cache. > > > > However, I have an interesting origin where if sending a range… - by Roman Arutyunyan
You're truly awesome! I'll give the patch a try tomorrow - and thanks for the other bits and pieces of information, especially regarding the expectations as well. I wish you an awesome Sunday! Best Regards, Lucas Rolff > On 10 Jul 2022, at 10:35, Maxim Dounin <mdounin@mdounin.ru> wrote: > > Hello! > > On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wr… - by Lucas Rolff
Hello! On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wrote: > I'm having an nginx instance where I utilise the nginx slice > module to slice upstream mp4 files when using proxy_cache. > > However, I have an interesting origin where if sending a range > request (which happens when the slice module is enabled) to a > file that's less than the slice range… - by Maxim Dounin
Hi guys, I'm having an nginx instance where I utilise the nginx slice module to slice upstream mp4 files when using proxy_cache. However, I have an interesting origin where, when sending a range request (which happens when the slice module is enabled) to a file that's smaller than the slice range, the origin returns a 200 OK but with range-related headers such as Content-Range, but obvio… - by Lucas Rolff
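For context, a typical slice-module setup like the one the thread discusses looks roughly like this. The slice size, zone name, and upstream name are illustrative; the origin behaviour described above (200 OK instead of 206 for files smaller than a slice) is exactly the kind of response this configuration does not expect:

```nginx
# Typical ngx_http_slice_module setup (sizes and names assumed).
location / {
    slice              1m;                         # slice size
    proxy_cache        mp4_cache;                  # assumed zone
    # $slice_range must be part of the key so each slice is cached
    # as its own entry:
    proxy_cache_key    $uri$is_args$args$slice_range;
    # Forward the slice's byte range to the origin:
    proxy_set_header   Range $slice_range;
    proxy_http_version 1.1;
    proxy_cache_valid  200 206 1h;
    proxy_pass         http://origin;              # assumed upstream
}
```

The slice module relies on the origin answering range requests with 206 Partial Content; an origin that replies 200 to a Range request is the edge case being debugged here.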
Triggering proxy_cache_bypass on a request that was previously cached will serve a fresh response from upstream. But if that response now returns a Cache-Control: no-cache header, the old cached response is neither replaced nor cleared, which means that subsequent requests that do not trigger proxy_cache_bypass will keep serving an old response. I guess this is intended behavior, because prox… - by Gabriel Finkelstein
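A sketch of the mechanics being described, with assumed names throughout: proxy_cache_bypass skips the cache lookup but does not imply proxy_no_cache, so a bypassed response normally overwrites the cached entry; it is only when the new response itself is uncacheable (e.g. Cache-Control: no-cache) that the stale entry survives:

```nginx
# Illustrative only; $http_x_refresh, zone and upstream are assumptions.
location / {
    proxy_pass          http://backend;
    proxy_cache         app_cache;
    # A request with an X-Refresh header skips the cache lookup but
    # the fresh response is still eligible to be stored:
    proxy_cache_bypass  $http_x_refresh;

    # One blunt workaround for the behaviour described above: ignore
    # upstream Cache-Control so the bypassed response always
    # overwrites the old entry. This changes caching semantics for
    # ALL responses, so it is shown commented out:
    # proxy_ignore_headers Cache-Control;
}
```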
On Thu, Nov 11, 2021 at 08:54:21AM +0000, Francis Daly wrote: > On Wed, Nov 10, 2021 at 08:58:54PM +0200, Reinis Rozitis wrote: Typo/thinko fixes... > You could possibly also take advantage of case #2, and do > > rewrite /something/(.*\.xml)$ /$1 break; rewrite /something/(.*\.xml)$ /data/$1 break; Along with "proxy_pass http://origin;". > but that feels a bit too… - by Francis Daly
On Wed, Nov 10, 2021 at 08:58:54PM +0200, Reinis Rozitis wrote: Hi there, > > And I can't make a location block for a mimetype, or use another specifier than regexes to filter out requests for certain 'file types'. Is there any other 'good' solution apart from adding rewrites on my origin from /something/data1/ to /data1/? Assuming that the other proxy criteria are the same, I think that th… - by Francis Daly
> And I can't make a location block for a mimetype, or use another specifier than regexes to filter out requests for certain 'file types'. Is there any other 'good' solution apart from adding rewrites on my origin from /something/data1/ to /data1/? Why not just separate the locations rather than making them nested? Something like: location /something/ { proxy_cache disk_cache; proxy_pass h… - by Reinis Rozitis
Hi there, I use nginx as a caching reverse proxy. I have requests coming in as /something/data1/request123.xml and other requests coming in as /something/data1/bigfile424.bin. I want to forward all requests to an origin on /data1/. Currently I use a location block: location /something/ { proxy_cache disk_cache; proxy_pass http://origin/data1/; } This works great! Except the disk is rather… - by Michiel Beijen
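One commonly suggested shape for caching only the .xml requests while proxying everything under the same prefix: a nested regex location. The zone name, origin, and paths follow the messages; the exact layout is a sketch, not a confirmed resolution of the thread:

```nginx
# Cache only .xml under /something/; proxy everything to /data1/.
location /something/ {
    # Default: proxy without caching (e.g. the large .bin files).
    proxy_pass http://origin/data1/;

    # Nested regex location takes precedence for .xml requests:
    location ~ ^/something/(.*\.xml)$ {
        proxy_cache disk_cache;
        proxy_pass  http://origin/data1/$1;
    }
}
```

In a regex location, proxy_pass with a URI part needs the captured group ($1) to rebuild the upstream path, which is why the two proxy_pass directives are written differently.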
On Sun, Jun 06, 2021 at 02:14:33PM +0530, Amila Gunathilaka wrote: Hi there, > > The simplest-to-understand fix, assuming that this is a test system where > you are happy to start again, is probably to stop nginx, remove the > /var/lib/nginx/proxy/ directory and all of its > contents, create the directory again as the user that nginx runs as, > and then start nginx. … - by Francis Daly
Dear Mr Francis, issue 1.) > That is almost certainly because you also have "proxy_cache" (http://nginx.org/r/proxy_cache) and "proxy_cache_path" defined, but configured to use a part of the filesystem that the nginx user is not allowed to use -- maybe it was created or first run as one user, and now this user cannot write there? > The simplest-to-understand fix… - by amiladevops
Hi, I have a question: I want to use nginx and ffmpeg to serve chunks to clients without using or sending an .m3u file to the client. How can I do this, please? * ffmpeg copies streams locally (in /home/STREAMS/channel/stream%d.ts ==> /home/STREAMS/channel/stream1.ts, /home/STREAMS/channel/stream2.ts, /home/STREAMS/channel/stream3.ts ...) * I want nginx to serve clients chunk by chunk in a continuous… - by Fatma MAZARI
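Serving the ffmpeg-produced segments as plain static files, while refusing to expose any playlist, can be sketched as follows. Only the paths come from the message; the MIME mapping and the idea of blocking playlists are assumptions about what is wanted, and note that standard HLS clients do need a playlist to play segments "continuously":

```nginx
# Serve /home/STREAMS/channel/stream1.ts etc. as static files.
location /channel/ {
    root /home/STREAMS;                # maps to /home/STREAMS/channel/...
    types { video/mp2t ts; }           # correct MIME type for .ts segments

    # Do not expose playlist files, per the requirement above:
    location ~ \.m3u8?$ { return 403; }
}
```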
On Tue, Jun 01, 2021 at 07:40:27PM +0530, Amila Gunathilaka wrote: Hi there, > Hope you are doing well? Thanks for your quick responses to my emails > again. I have 02 questions for you today, I will keep them brief for your > ease. You're welcome. In general, if the questions are unrelated to the first one, it's best to start a new mail. That'll help someone search for questions… - by Francis Daly
> If you nevertheless observe 500 being returned in practice, this might be the actual thing to focus on. Even with sub-100 requests and 4 workers, I've experienced it multiple times, where simply because the number of cache keys was exceeded, it was throwing 500 internal server errors for new uncached requests for hours on end (in the particular instance, I have about 300 expired keys per 5 mi… - by Lucas Rolff
Hi All, any update on my issue, guys? 2. Help: Using Nginx Reverse Proxy to bypass traffic into an application running in a container (Amila Gunathilaka) Thanks On Tue, May 18, 2021 at 4:44 PM <nginx-request@nginx.org> wrote: > Send nginx mailing list submissions to > nginx@nginx.org > > To subscribe or unsubscribe via the World Wide Web, visit > http… - by Amila Gunathilaka
Hello! On Mon, May 17, 2021 at 07:33:43PM +0000, Lucas Rolff wrote: > Hi Maxim! > > > - The attack you are considering is not about "poisoning". At > > most, it can be used to make the cache less efficient. > > Poisoning is probably the wrong word indeed, and since nginx > doesn't really handle reaching the limit of keys_zone, it simply > starts to r… - by Maxim Dounin
Hi Maxim! > - The attack you are considering is not about "poisoning". At most, it can be used to make the cache less efficient. Poisoning is probably the wrong word indeed, and since nginx doesn't really handle reaching the limit of keys_zone, it simply starts to return a 500 internal server error. So I don't think it's making the cache less efficient (other than you won't be… - by Lucas Rolff
Hi Maxim, thanks a lot for your reply! I'm indeed aware of the ~8k keys per MB of memory; I was just wondering if it was handled differently when min_uses is in use, but it does indeed make sense that nginx has to keep track of it somehow, and the keys zone makes the most sense! > Much like any cache item, such keys are removed from the keys_zone if no matching requests are seen… - by Lucas Rolff
Hello! On Sun, May 16, 2021 at 04:46:17PM +0000, Lucas Rolff wrote: > Hi everyone, > > I have a few questions regarding proxy_cache and the use of > proxy_cache_min_uses in nginx: > > Let's assume you have an nginx server with proxy_cache enabled, > and you've set proxy_cache_min_uses to 5; > > Q1: How does nginx internally keep track of the count for > mi… - by Maxim Dounin
Hi everyone, I have a few questions regarding proxy_cache and the use of proxy_cache_min_uses in nginx: Let's assume you have an nginx server with proxy_cache enabled, and you've set proxy_cache_min_uses to 5; Q1: How does nginx internally keep track of the count for min_uses? Is it using SHM to do it (and does it count towards the keys_zone limit?), or something else? Q2: How long do… - by Lucas Rolff
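A minimal sketch of the setup these questions refer to, with illustrative names and sizes. Per the figure mentioned later in the thread (roughly 8k keys per megabyte of keys_zone), a 10m zone tracks on the order of 80,000 keys, and the request counting for min_uses happens in those same shared-memory key slots before anything is written to disk:

```nginx
# Illustrative proxy_cache + min_uses setup; all names/sizes assumed.
proxy_cache_path /var/cache/nginx
                 keys_zone=app_cache:10m   # ~80k keys at ~8k keys/MB
                 max_size=10g
                 inactive=60m;

server {
    location / {
        proxy_pass  http://backend;
        proxy_cache app_cache;
        # Store a response on disk only after the same cache key
        # has been requested 5 times:
        proxy_cache_min_uses 5;
    }
}
```

Under this model, keys that never reach 5 uses still occupy keys_zone entries until they are evicted for inactivity, which is the crux of Q1/Q2 above.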