Hello, I am caching the result of auth_request. This is the simplified code: proxy_cache_path /var/cache/nginx levels=1 keys_zone=token_cache:1m max_size=2m inactive=60m use_temp_path=off; location /devices { auth_request /auth; auth_request_set $token $upstream_http_x_token; proxy_set_header Authorization $token; …
by bouvierh - Nginx Mailing List - English
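A minimal sketch of how the /auth subrequest itself could be cached under the token_cache zone shown above; the auth backend address, cache key, and validity time are assumptions, not from the post:

    location = /auth {
        internal;
        proxy_pass              http://127.0.0.1:9000;   # assumed auth backend
        proxy_pass_request_body off;                     # auth_request sends no body
        proxy_set_header        Content-Length "";
        proxy_set_header        X-Original-URI $request_uri;

        proxy_cache             token_cache;
        proxy_cache_key         $http_authorization;     # one cache entry per client credential
        proxy_cache_valid       200 5m;                  # only cache successful auth replies
    }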
Hi, I'm using Nginx as a reverse-proxy to cache my POST request and wrote the following config: http { gzip on; gzip_proxied any; gzip_types text/plain application/json; gzip_min_length 1000; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=FLOWS:100m inactive=24h max_size=2g use_temp_path=off; server { listen 3200; location /api/flows-pag…
by kkobylyanskiy - Nginx Mailing List - English
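For reference, a sketch of the extra directives POST caching usually needs inside such a location; the location prefix, backend address, and key layout are assumptions, since the original config is cut off:

    location /api/flows {
        proxy_pass           http://127.0.0.1:3000;           # assumed backend
        proxy_cache          FLOWS;
        proxy_cache_methods  POST;                            # POST is not cached by default
        proxy_cache_key      "$request_uri|$request_body";    # the body must be part of the key
        proxy_cache_valid    200 24h;
        client_max_body_size 1m;    # keep bodies small enough to be buffered in memory
    }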
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_convert_head On 04.03.2021 at 15:03, Señor J Onion wrote: > I use nginx as a forward proxy, with content caching. > My app first performs a HEAD request to a Google Cloud Storage object. Then it may perform a GET request to the same object. > The HEAD request (which comes first) causes a cache MISS. The …
by Grzegorz Kulewski - Nginx Mailing List - English
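A sketch of what using that directive could look like; by default nginx converts HEAD to GET before caching, so disabling the conversion means the request method has to become part of the cache key:

    proxy_cache_convert_head off;    # pass HEAD through to the upstream as HEAD
    proxy_cache_key "$scheme$request_method$proxy_host$request_uri";    # default key plus the method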
I use nginx as a forward proxy, with content caching. My app first performs a HEAD request to a Google Cloud Storage object. Then it may perform a GET request to the same object. The HEAD request (which comes first) causes a cache MISS. The content body length returned to the client is 0 (which is obviously correct). However, I think that the actual object is still included in the body from the …
by Señor J Onion - Nginx Mailing List - English
Dear Experts, I am trying to set up a simple limited caching proxy; I have got proxying to work, but I can't get it to cache. I'm a software developer, working from my home office. I have a fast home network, a slow connection to the internet, and fast cloud servers (e.g. AWS). I'd like to be able to cache content from some specific domains locally, to make it faster. This is simple http sta…
by Phil Endecott - Nginx Mailing List - English
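A minimal sketch of the kind of per-domain caching proxy being described; the zone name, listen port, and origin host are assumptions:

    proxy_cache_path /var/cache/nginx/dev levels=1:2 keys_zone=dev_cache:10m
                     max_size=10g inactive=7d use_temp_path=off;

    server {
        listen 8080;

        location / {
            proxy_pass           https://downloads.example.com;    # assumed origin to mirror
            proxy_set_header     Host downloads.example.com;
            proxy_cache          dev_cache;
            proxy_cache_valid    200 7d;
            proxy_ignore_headers Cache-Control Expires;    # cache even if the origin says not to
        }
    }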
Hi, I'm wondering if there is a way to instruct nginx to cache a backend response or not based on trailing headers? The use case is that the backend does some heavy, longer-running streaming work that in some edge cases may fail midway. As the response is already streaming, I need to tell nginx not to cache that response, as it has failed. The status code has already been sent and I don't think I can change it l…
by Claudiu - Nginx Mailing List - English
Oh. Ok, good to know about the default temp file and buffers. Just checked, and I think the 'large' file we are downloading is 800 MB. We don't have proxy_cache or proxy_store set. We do have proxy_temp_file_write_size 250m; We ended up doing a test where 9 of those large files were all on server1, and it continued to round-robin requests. Is that temp_file_size essentially per connection? If so, …
by kenneth.s.brooks - Nginx Mailing List - English
Hello! On Wed, Dec 23, 2020 at 04:42:49PM -0500, Kenneth Brooks wrote: > We did think that perhaps it was buffering. > However, in our case, the "large" request is gigs in size, so there is no > way that it is buffering that whole thing. I think our buffers are pretty > small. > Unless there is some absolute black magic that will buffer what it can, > close the upstre…
by Maxim Dounin - Nginx Mailing List - English
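For context, a sketch of the directives that control how much of an upstream response nginx buffers in memory and in a per-request temp file; the values are only illustrative, apart from the 250m write size already mentioned above:

    proxy_buffering            on;      # nginx reads ahead from the upstream
    proxy_buffers              8 8k;    # in-memory buffers per connection
    proxy_max_temp_file_size   1024m;   # cap on the temp file for one request (1024m is the default)
    proxy_temp_file_write_size 250m;    # as set in the setup described above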
On Thu, Nov 12, 2020 at 04:58:31AM -0500, unoobee wrote: Hi there, > My configuration looks like this: Thanks for this. It looks like you are setting "proxy_cache" to always try to read from "hdd_cache"; but you want it to sometimes write to "ssd_cache" instead. And you are reporting that it does not ever write to "ssd_cache". Is that correct? If s…
by Francis Daly - Nginx Mailing List - English
My configuration looks like this: proxy_cache_path /cache/ssd keys_zone=ssd_cache:10m levels=1:2 inactive=600s max_size=100m; proxy_cache_path /cache/hdd keys_zone=hdd_cache:10m levels=1:2 inactive=600s max_size=100m; upstream backend { server www.test.com:443; } server { listen 80; server_name test.com; location / { …
by unoobee - Nginx Mailing List - English
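A sketch of how a single location could pick between the two zones defined above with a map, assuming the choice can be made from something known at request time (the URI prefix here is made up); proxy_cache has accepted variables since 1.7.9:

    map $uri $cache_zone {
        ~^/small/  ssd_cache;    # assumed rule: small objects go to the SSD zone
        default    hdd_cache;
    }

    server {
        listen 80;
        server_name test.com;

        location / {
            proxy_pass  https://backend;
            proxy_cache $cache_zone;
        }
    }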
On Thu, Nov 12, 2020 at 02:33:49AM -0500, unoobee wrote: Hi there, > I tried using $upstream_http_content_length inside the map directive with > the "volatile" parameter to specify the proxy_cache behavior, but the map > still uses the default value. What's your config? > Is there any way to set the proxy_cache behavior depending on > $upstream_http_content_length via…
by Francis Daly - Nginx Mailing List - English
I tried using $upstream_http_content_length inside the map directive with the "volatile" parameter to specify the proxy_cache behavior, but the map still uses the default value. Is there any way to set the proxy_cache behavior depending on $upstream_http_content_length via the map directive?
by unoobee - Nginx Mailing List - English
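One detail that seems relevant here, offered as an observation rather than something from the thread: the cache zone is chosen before the upstream responds, so $upstream_http_content_length is still empty at that point. Directives that are evaluated after the response headers arrive, such as proxy_no_cache, can use it. A sketch with an assumed size threshold:

    map $upstream_http_content_length $skip_cache {
        default    0;
        ""         1;    # no Content-Length: do not store
        ~^\d{9,}$  1;    # assumed policy: skip responses of roughly 100 MB and larger
    }

    location / {
        proxy_pass     https://backend;    # assumed upstream
        proxy_cache    hdd_cache;
        proxy_no_cache $skip_cache;        # evaluated when the response is about to be stored
    }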
Hi all, I fixed my issue with a simple application. The solution could be called "not-named upstream support". The idea is that there is a proxy_pass to an upstream, and the upstream forwards the request to my local service (upstream server 127.0.0.1:1981). Under localhost 127.0.0.1:1981 my simple application makes requests to any server and keeps the connections alive. Control of all connections is under …
by Łukasz Tasz - Nginx Mailing List - English
On 15.10.20 18:25, 0815@lenhardt.in wrote: > Hi! > This is the first time I am doing rewrites with a fastcgi backend (php-fpm). > This is my fpm location which is working fine on a ubuntu 18.04 VM: > # fpm-config > location ~ \.php$ { > include snippets/fastcgi-php.conf; …
by Anonymous User - Nginx Mailing List - English
Hi! This is the first time I am doing rewrites with a fastcgi backend (php-fpm). This is my fpm location which is working fine on a ubuntu 18.04 VM: # fpm-config location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/run/php/php-fpm-typo3.sock; fastcgi_param HTTPS 'on'; fastcgi_read_time…
by Anonymous User - Nginx Mailing List - English
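A common way the rewrite side is wired up in front of such a location, shown only as an assumed example since the actual rewrite rules are not in the excerpt:

    location / {
        # send non-existent paths to the front controller (assumed to be index.php)
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        include              snippets/fastcgi-php.conf;
        fastcgi_pass         unix:/run/php/php-fpm-typo3.sock;
        fastcgi_param        HTTPS 'on';
        fastcgi_read_timeout 240;    # assumed value; the original line is cut off
    }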
Hi, such a setup is on 1stproxy; there I have an upstream defined to the second proxy and it works - the connection is reused. The problem is that it is a chain of forward proxies with caching, and your-server.com including the port is different - service:port is dynamic. I'm asking about it because with Firefox (global proxy set to my second proxy) I go to some blabla.your-server.com and the connection is kept, in the sense that…
by Łukasz Tasz - Nginx Mailing List - English
On Thu, Oct 8, 2020 at 11:36 AM Łukasz Tasz <lukasz@tasz.eu> wrote: > Hi all, > can I expect that proxy_pass will keep connection to remote server that is > being proxied? > when I'm using setup client -> proxy -> server it looks to work > but when I'm using: > client -> 1stProxy_upstream -> proxy -> server > connection between 1stProxy and pro…
by Marcin Wanat - Nginx Mailing List - English
Hi all, can I expect that proxy_pass will keep the connection to the remote server that is being proxied? When I'm using the setup client -> proxy -> server it looks to work, but when I'm using client -> 1stProxy_upstream -> proxy -> server, the connection between 1stProxy and proxy is kept thanks to keepalive 100, but proxy makes a new connection for every new request. Very simple setup: http { …
by Łukasz Tasz - Nginx Mailing List - English
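A minimal sketch of the directives that are usually needed before nginx will reuse upstream connections; the upstream name and address are assumptions:

    http {
        upstream next_proxy {
            server 10.0.0.2:8080;    # assumed address of the second proxy
            keepalive 100;           # pool of idle connections to keep open
        }

        server {
            location / {
                proxy_pass         http://next_proxy;
                proxy_http_version 1.1;               # keepalive needs HTTP/1.1 upstream
                proxy_set_header   Connection "";     # clear the default "Connection: close"
            }
        }
    }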
Thank you, Francis. That sounds like a good plan. Pardon the new thread but I was subscribed in Digest Mode and couldn't reply directly. Igal On Sun, Sep 13, 2020 at 03:42:28PM -0700, Igal Sapir wrote: Hi there, > I have a variable that shows if a certain cookie exists in the Request, > e.g. $req_has_somecookie, and I want to be able to use proxy_cache only for > specific URIs…
by Igal Sapir - Nginx Mailing List - English
On Sun, Sep 13, 2020 at 03:42:28PM -0700, Igal Sapir wrote: Hi there, > I have a variable that shows if a certain cookie exists in the Request, > e.g. $req_has_somecookie, and I want to be able to use proxy_cache only for > specific URIs, e.g. /slow-page/ if the variable is 0. > > I know that "if" is evil as it creates a new location scope. > > What's the best wa…
by Francis Daly - Nginx Mailing List - English
Hello, I have a variable that shows if a certain cookie exists in the Request, e.g. $req_has_somecookie, and I want to be able to use proxy_cache only for specific URIs, e.g. /slow-page/ if the variable is 0. I know that "if" is evil as it creates a new location scope. What's the best way to handle this? Thanks, Igal
by Igal Sapir - Nginx Mailing List - English
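A sketch of one way to do this without "if", by folding the cookie flag and the URI into a single map; the variable name follows the question, while the zone and backend names are assumptions:

    # 1 = skip the cache, 0 = allow caching
    map "$req_has_somecookie:$uri" $skip_cache {
        default          1;    # by default, do not cache
        ~^0:/slow-page/  0;    # cookie absent and URI under /slow-page/: cache it
    }

    location / {
        proxy_pass         http://backend;    # assumed upstream
        proxy_cache        my_zone;           # assumed zone name
        proxy_cache_bypass $skip_cache;       # do not answer from the cache
        proxy_no_cache     $skip_cache;       # do not store in the cache
    }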
Hi, On Mon, Jul 27, 2020 at 04:42:00AM +0300, Maxim Dounin wrote: > Hello! > On Fri, Jul 24, 2020 at 03:21:31PM +0200, Adam Volek wrote: > > On 24. 07. 20 4:33, Maxim Dounin wrote: > > > As long as the response returned isn't cacheable (either > > > as specified in the response Cache-Control / Expires > > > headers, or per proxy_cache_valid), ngi…
by Roman Arutyunyan - Nginx Mailing List - English
I'm using rewrite to change some tokens in the URL path, and am using SSL proxying to send traffic to a downstream server. If I post to https://myhost/start/foo/213/hello, the request gets to https://client-service-host/client/service/hello/213 using the needed certificate. Great. My question is, how do I retain query string parameters in this example, so that if I post (or get) using query strings…
by Mark Lybarger - Nginx Mailing List - English
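A sketch of how the query string is usually carried through such a rewrite; the token order and host names follow the example in the post, the certificate paths are assumptions. With rewrite, the original arguments are appended automatically unless the replacement ends with a question mark:

    location /start/ {
        # /start/foo/213/hello  ->  /client/service/hello/213, query string kept
        rewrite ^/start/foo/([^/]+)/([^/]+)$ /client/service/$2/$1 break;

        proxy_pass                https://client-service-host;
        proxy_ssl_certificate     /etc/nginx/client.crt;    # assumed paths
        proxy_ssl_certificate_key /etc/nginx/client.key;
    }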
Hello! On Fri, Jul 24, 2020 at 03:21:31PM +0200, Adam Volek wrote: > On 24. 07. 20 4:33, Maxim Dounin wrote: > > As long as the response returned isn't cacheable (either > > as specified in the response Cache-Control / Expires > > headers, or per proxy_cache_valid), nginx won't put > > the response into cache and will continue serving previously > > cached respon…
by Maxim Dounin - Nginx Mailing List - English
Hi, We're running into some strange behaviour with the stale-while-revalidate extension of the Cache-Control header when using nginx as a reverse proxy. When there is a stale response in the cache with a nonzero stale-while-revalidate time, nginx attempts revalidation but seems to ignore the upstream answer if it has a specific status code, such as 404 or 500, and serves a stale response to the…
by Adam Volek - Nginx Mailing List - English
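For context, a sketch of the nginx directives that govern serving stale entries while revalidating on the proxy side; the zone name and times are illustrative, not from the thread:

    location / {
        proxy_pass  https://backend;    # assumed upstream
        proxy_cache my_zone;            # assumed zone

        # serve a stale entry while a background subrequest refreshes it
        proxy_cache_background_update on;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;

        # how long an upstream 404 may itself be cached once revalidation succeeds
        proxy_cache_valid 404 1m;
    }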
Hello! On Mon, Jun 08, 2020 at 08:57:56PM +0100, Alan Chandler wrote: > I have nginx acting as the static file server for a single page > web app I am developing. It acts as a proxy server for the > "/api" portion on my url space. > The backend server is running on a different port on local host > and is nodejs based. I'm using nginx as an http2 front end and…
by Maxim Dounin - Nginx Mailing List - English
I have nginx acting as the static file server for a single-page web app I am developing. It acts as a proxy server for the "/api" portion of my URL space. The backend server is running on a different port on localhost and is nodejs based. I'm using nginx as an http2 front end and using HTTP/1.1 between nginx and the backend. In the main this is working well. But I have one proble…
by akc42 - Nginx Mailing List - English
Hi everyone, Just chasing up on the below, if anyone has any suggestions? So to recap, the situation is: I have nginx running Wordpress with the Hypercache plugin, but only the homepage is cached; other pages "miss" according to the page headers. Here is the contents of the sites-enabled conf in question: server { listen 80; listen 443 ssl http2; server…
by Jore - Nginx Mailing List - English
Hi there, Thanks for that. Could you provide an example conf by any chance please, so I can get my head around that? Thanks! Jore On 24/5/20 8:56 am, Alex Evonosky wrote: > Jore- > I applied the proxy_hide_header for the no-cache headers so NGINX can > process and cache the response. > On Sat, May 23, 2020 at 5:17 PM Jore <community@thoughtmaybe.com > <mailto:community@tho…
by Jore - Nginx Mailing List - English
Jore- I applied the proxy_hide_header for the no-cache headers so NGINX can process and cache the response. On Sat, May 23, 2020 at 5:17 PM Jore <community@thoughtmaybe.com> wrote: > Hi Alex/all, > How did you fix it? > I've got a very similar issue. > nginx running Wordpress with the Hypercache plugin, but only the homepage > is cached; other pages "miss" accordi…
by Alex Evonosky - Nginx Mailing List - English
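Following up on the request for an example conf above, a sketch of that proxy_hide_header approach; the zone and backend names are assumptions, and ignoring Set-Cookie like this is only safe for pages that do not vary per user:

    location / {
        proxy_pass           http://127.0.0.1:8080;    # assumed Wordpress backend
        proxy_cache          wp_cache;                 # assumed zone

        # keep the upstream's caching headers from disabling the proxy cache
        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        # and keep them from reaching the browser
        proxy_hide_header    Cache-Control;
        proxy_hide_header    Pragma;

        proxy_cache_valid    200 10m;
    }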