Hi Lucas, The cache is pretty big and I want to limit unnecessary requests if I can. Cloudflare is in front of my machines, and I pay for load balancing, firewall, and Argo, among others, so there is a cost per request. Admittedly I have a not-so-complex cache architecture, i.e., all cache machines sit in front of the origin, and it has worked so far. This is also because I am not that great a programmer… by Quintin Par - Nginx Mailing List - English
Can I ask why you need to start with a warm cache directly? Sure, it will lower the requests to the origin, but you could implement a secondary caching layer if you wanted to (using nginx). You'd have your primary cache in, let's say, 10 locations spread across 3 continents (US, EU, Asia), and then a second layer that consists of a smaller number of locations (1 instance… by Lucas Rolff - Nginx Mailing List - English
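A minimal sketch of the two-tier layout being described; the hostnames, cache-zone names, paths, and sizes below are illustrative assumptions, and each tier would run on its own machines (shown together here for brevity):

    # --- edge machine (one of the ~10 edge locations) ---
    proxy_cache_path /var/cache/nginx/edge keys_zone=edge_cache:10m max_size=10g;

    server {
        listen 80;
        location / {
            proxy_cache edge_cache;
            proxy_cache_valid 200 1h;
            proxy_pass http://mid-eu.example.net;   # assumed mid-tier host for this region
        }
    }

    # --- mid-tier machine (one per continent) ---
    proxy_cache_path /var/cache/nginx/mid keys_zone=mid_cache:50m max_size=50g;

    server {
        listen 80;
        location / {
            proxy_cache mid_cache;
            proxy_cache_valid 200 1h;
            proxy_pass http://origin.example.com;   # only this tier contacts the origin
        }
    }

The point of the second layer is that a freshly deployed (cold) edge warms itself from the mid tier rather than hammering the origin.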
Thank you for your answer. This means nginx is not compatible with CMAF and low-latency streaming. I tried the slice module and read its code, but it does not cover my needs. I guess I have to develop a new proxy module. Thanks, Traquila. Roman Arutyunyan wrote: > Hi, > On Fri, Aug 31, 2018 at 05:02:21AM -0400, traquila… by traquila - Nginx Mailing List - English
Hi, On Fri, Aug 31, 2018 at 05:02:21AM -0400, traquila wrote: > Hello, > I'm wondering if nginx is able to serve multiple requests from a single > proxy request before it completes. > > I am using the following configuration: > > proxy_cache_lock on; > proxy_cache_lock_timeout 5s; > proxy_cache ram; > proxy_pass myUpstream; > > My upstream uses… by Roman Arutyunyan - Nginx Mailing List - English
Hello, I'm wondering if nginx is able to serve multiple requests from a single proxy request before it completes. I am using the following configuration: proxy_cache_lock on; proxy_cache_lock_timeout 5s; proxy_cache ram; proxy_pass myUpstream; My upstream uses chunked transfer encoding and serves the request in 10 seconds. Now if I try to send 2 requests to nginx, the first… by traquila - Nginx Mailing List - English
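A fuller version of that configuration as a runnable sketch; the zone name (ram), lock settings, and upstream name come from the post, while the cache path, sizes, and backend address are assumptions:

    # Assumed tmpfs-backed cache directory and sizes; zone name from the post.
    proxy_cache_path /dev/shm/nginx-cache levels=1:2 keys_zone=ram:10m max_size=1g;

    upstream myUpstream {
        server 127.0.0.1:8080;            # assumed backend
    }

    server {
        listen 80;
        location / {
            proxy_cache ram;
            proxy_cache_lock on;          # collapse concurrent misses into one upstream fetch
            proxy_cache_lock_timeout 5s;  # waiters give up after 5s and fetch on their own
            proxy_pass http://myUpstream;
        }
    }

Note that proxy_cache_lock releases waiting requests only once the response has been written to the cache (or the timeout expires); waiters are not streamed from the in-flight upstream response, which is why the thread concludes this does not fit the low-latency streaming case.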
I'm hoping to use the limit_req directive with different rates based on a header that is returned from the auth subrequest. I got some ideas from https://www.ruby-forum.com/topic/4418040 but am running into a few problems. Here is my configuration: > user nginx; > worker_processes auto; > error_log /var/log/nginx/error.log warn; > pid… by jarstewa - Nginx Mailing List - English
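One way this is commonly attempted (a sketch, not the poster's final config): since limit_req makes its decision before the auth subrequest completes, the tier first has to become an ordinary request header, for example via a second internal hop. A limit_req_zone whose key evaluates to an empty string does not limit the request, which is what makes per-tier zones work. The X-Rate-Tier header, ports, and rates below are all assumptions; everything shown lives at the http{} level or inside it:

    upstream auth_backend { server 127.0.0.1:9000; }   # assumed auth service
    upstream backend      { server 127.0.0.1:9001; }   # assumed origin

    # Front server: runs the auth subrequest and forwards the resulting tier.
    server {
        listen 80;
        location /content/ {
            auth_request /auth;
            auth_request_set $tier $upstream_http_x_rate_tier;
            proxy_set_header X-Rate-Tier $tier;
            proxy_pass http://127.0.0.1:8081;           # second hop on the same box
        }
        location = /auth {
            internal;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_pass http://auth_backend;
        }
    }

    # Second hop: the tier is now a plain header, available when limit_req
    # runs. An empty key means "no limit in this zone", so each request is
    # counted in exactly one zone.
    map $http_x_rate_tier $limit_key_slow { default ""; slow $binary_remote_addr; }
    map $http_x_rate_tier $limit_key_fast { default ""; fast $binary_remote_addr; }
    limit_req_zone $limit_key_slow zone=slow:10m rate=1r/s;
    limit_req_zone $limit_key_fast zone=fast:10m rate=10r/s;

    server {
        listen 127.0.0.1:8081;
        location /content/ {
            limit_req zone=slow burst=5;
            limit_req zone=fast burst=20;
            proxy_pass http://backend;
        }
    }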
We currently use caching for guests; our search pages use long URLs to pass the parameters to our application. Currently, searches that work for logged-in users don't work for guests. I can show the issue with these two curl examples (which are obviously not valid searches). As a guest: james_@Sophie:/mnt/c/Users/james$ curl -I "https://archiveofourown.org/works?a=1111111111111111111… by James Beal via nginx - Nginx Mailing List - English
Hello! On Fri, Aug 10, 2018 at 09:05:30AM -0700, Roger Fischer wrote: > Is there a way to perform an action after a cache miss is > detected but before the request is forwarded to the upstream > server? > > Specifically, on a cache miss I want to: > - Return a response instead of forwarding the request to the upstream server. > - Trigger a handler (module or script) that… by Maxim Dounin - Nginx Mailing List - English
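Stock nginx has no hook at that exact point, but one pattern that approximates the first goal (a sketch under assumptions, not Maxim's reply): point proxy_pass at a local stub server so a miss gets a canned response and never reaches the real origin, while a separate, access-restricted location fills the same cache. All names, ports, and paths here are invented for illustration:

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:10m max_size=10g;

    server {
        listen 80;

        location / {
            proxy_cache edge;
            proxy_cache_key $uri$is_args$args;    # explicit key shared with /warm/
            proxy_cache_valid 200 10m;
            proxy_pass http://127.0.0.1:9999;     # stub answers every miss
        }

        # A local priming job fills the same cache from the real origin,
        # e.g.: curl -s http://127.0.0.1/warm/some/page
        location /warm/ {
            allow 127.0.0.1;
            deny all;
            rewrite ^/warm(/.*)$ $1 break;        # cache under the real URI
            proxy_cache edge;
            proxy_cache_key $uri$is_args$args;
            proxy_cache_valid 200 10m;
            proxy_pass http://origin.example.com; # assumed origin
        }
    }

    # Stub "origin": the canned response returned on every cache miss.
    server {
        listen 127.0.0.1:9999;
        return 404 "not cached\n";
    }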
Hello All, We have a use case. Our web application is deployed in Tomcat 7. In front, nginx is configured as a reverse proxy; all requests pass through nginx and are forwarded to Tomcat 7. Nginx serves static files directly, and dynamic requests (JSON) are forwarded to Tomcat 7. At the backend, we have a MySQL DB to save the application settings. What we want is that when the client types https://t… by linsonj - Nginx Mailing List - English
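A minimal sketch of the split being described; the port, paths, and names are assumptions, not the poster's config:

    upstream tomcat {
        server 127.0.0.1:8080;              # assumed Tomcat HTTP connector
    }

    server {
        listen 80;
        server_name example.com;            # placeholder

        root /var/www/app/static;           # assumed location of static files

        # Static assets served directly from disk by nginx.
        location ~* \.(css|js|png|jpg|gif|ico)$ {
            expires 7d;
            access_log off;
        }

        # Dynamic (JSON) requests forwarded to Tomcat.
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://tomcat;
        }
    }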
Hello! On Fri, Jul 20, 2018 at 03:42:40PM -0400, jarstewa wrote: > I'm currently using the auth_request directive and caching the result based > on a guid + IP address: > > > location /auth { > > internal; > > > > proxy_pass_request_body off; > > proxy_pass $upstream_server/auth?id=$guid&requestor=$last_client_ip; > > > > proxy… by Maxim Dounin - Nginx Mailing List - English
I'm currently using the auth_request directive and caching the result based on a guid + IP address: > location /auth { > internal; > > proxy_pass_request_body off; > proxy_pass $upstream_server/auth?id=$guid&requestor=$last_client_ip; > > proxy_cache auth_cache; > set $auth_cache_key "${guid}|${last_client_ip}"; > proxy_cache_key… by jarstewa - Nginx Mailing List - English
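A completed version of that location as a sketch: the directives shown in the post are kept, while the cache-path line, the TTLs, and the assumption that $guid, $last_client_ip, and $upstream_server are defined elsewhere (e.g., via map or set) are mine:

    proxy_cache_path /var/cache/nginx/auth keys_zone=auth_cache:10m;

    location = /auth {
        internal;

        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        # A variable in proxy_pass needs a resolver or a matching upstream{} block.
        proxy_pass $upstream_server/auth?id=$guid&requestor=$last_client_ip;

        proxy_cache auth_cache;
        set $auth_cache_key "${guid}|${last_client_ip}";
        proxy_cache_key $auth_cache_key;
        proxy_cache_valid 200 5m;        # assumed TTL for allowed
        proxy_cache_valid 401 403 30s;   # assumed short TTL for denied
    }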
Hi, I currently have an nginx configuration that uses the limit_req directive to throttle upstream content requests. Now I'm trying to add similar rate limiting for auth requests, but I haven't been able to get the auth throttle to kick in during testing (whereas the content throttle works as expected). Is there some known limitation of using limit_req against auth_request requests, or do I simply… by jarstewa - Nginx Mailing List - English
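For what it's worth, as far as I can tell from the open-source sources of that era, limit_req is applied at most once per main request (the module sets a flag the first time it runs, and an auth subrequest shares the main request's flag), so a content-level throttle can mask an auth-level one. A sketch of a workaround that sidesteps subrequest semantics entirely, with assumed ports and rates: proxy the auth check to a small internal server block, where it arrives as a top-level request and limit_req behaves normally:

    limit_req_zone $binary_remote_addr zone=auth:10m rate=5r/s;   # assumed rate

    # Internal hop: the proxied auth subrequest is a normal request here.
    server {
        listen 127.0.0.1:8082;
        location /auth {
            limit_req zone=auth burst=10;
            proxy_pass http://127.0.0.1:9000;   # assumed auth service
        }
    }

    # In the main server, point the subrequest at the hop:
    location = /auth {
        internal;
        proxy_pass http://127.0.0.1:8082;
    }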
Giacomo, Have a look at the nginx error and access logs. Most likely, that's the Tomcat default timeout firing. Regards, Igor. On 04.07.2018 17:17, Giacomo Arru - BETA Technologies wrote: > Tomcat: 9.0.8, nginx: 1.12.2 > I have this configuration: > Vaadin 8 application, served via Tomcat 9. > The application has manual push with websocket transport… by Igor A. Ippolitov - Nginx Mailing List - English
Tomcat: 9.0.8, nginx: 1.12.2. I have this configuration: Vaadin 8 application, served via Tomcat 9. The application has manual push with WebSocket transport. If I use the app directly from Tomcat: the WebSocket connection works correctly, and the upload of 10 MB files within the app works. If I use the application through the nginx proxy, the upload works for… by Giacomo Arru - BETA Technologies - Nginx Mailing List - English
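A common baseline for Vaadin push (WebSocket) plus large uploads behind nginx, as a sketch; the port, body-size cap, and timeouts are assumptions, not the poster's values:

    # Map the Upgrade header so plain requests keep using keep-alive.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;
        server_name app.example.com;            # placeholder

        location / {
            proxy_pass http://127.0.0.1:8080;   # assumed Tomcat port

            # WebSocket upgrade for the push channel.
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            # Large uploads: raise the cap and relax the proxy timeouts.
            client_max_body_size 50m;           # covers the 10 MB files
            proxy_read_timeout 300s;
            proxy_send_timeout 300s;

            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }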
Your question raises so many other questions: 1. The static content (jpg, png, tiff, etc.): it looks as though you are serving it from your backend and caching it. Is it also being built on demand dynamically? If not, then why cache it? Why not deploy it to nginx and serve it directly? 2. The text content: are these fragments of HTML whose names don't end in .html? … by pbooth - Nginx Mailing List - English
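For the first point, serving deployed assets straight from disk is a one-location change; the path here is a placeholder:

    # Static images served directly by nginx instead of proxied and cached.
    location ~* \.(jpg|jpeg|png|tiff|gif)$ {
        root /srv/site/assets;      # assumed deploy path
        expires 30d;
        access_log off;
    }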
Hello guys, I'm having a hard time defining a proxy cache because my landing page doesn't generate any HTML which can be cached. Quite complicated to explain; let me show you some logs and curl requests. curl: curl -I https://....info/de HTTP/1.1 200 OK Server: nginx Date: Thu, 21 Jun 2018 11:56:15 GMT Content-Type: text/html;charset=UTF-8 Content-Length: 135883 Connection: keep-alive… by Szop - Nginx Mailing List - English
Hello! On Mon, Jun 11, 2018 at 08:53:49AM -0400, ayman wrote: > When enabling the cache on the image filter, nginx workers crash and I keep > getting 500s. > > I'm using nginx 1.14.0 > > error log: > 2018/06/11 12:30:49 46105#0: worker process 46705 exited on signal > 11 (core dumped) > > proxy_cache_path /opt/nginx/img-cache/resized levels=1:2 > keys_zone=resizedimages… by Maxim Dounin - Nginx Mailing List - English
Hi, When enabling the cache on the image filter, nginx workers crash and I keep getting 500s. I'm using nginx 1.14.0. error log: 2018/06/11 12:30:49 46105#0: worker process 46705 exited on signal 11 (core dumped) proxy_cache_path /opt/nginx/img-cache/resized levels=1:2 keys_zone=resizedimages:10m max_size=3G; location ~ ^/resize/(\d+)x(\d+)/(.*) { proxy_pass… by ayman - Nginx Mailing List - English
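A commonly suggested arrangement for combining image_filter with a cache (offered as a sketch, not as the fix for this crash): keep the filter and the cache in separate server blocks, so the cache stores the already-resized output. The proxy_cache_path line is the poster's; the ports and the origin upstream are assumptions:

    proxy_cache_path /opt/nginx/img-cache/resized levels=1:2
                     keys_zone=resizedimages:10m max_size=3G;

    upstream origin_backend {
        server 192.0.2.10;                      # assumed origin holding the source images
    }

    server {
        listen 80;

        # Front: caching only; no image_filter here.
        location ~ ^/resize/(\d+)x(\d+)/(.*)$ {
            proxy_cache resizedimages;
            proxy_cache_valid 200 24h;
            proxy_pass http://127.0.0.1:8090;   # internal resizer below
        }
    }

    server {
        listen 127.0.0.1:8090;

        # Resizer: image_filter only; its output is cached by the front.
        location ~ ^/resize/(\d+)x(\d+)/(.*)$ {
            image_filter resize $1 $2;
            image_filter_buffer 10M;
            proxy_pass http://origin_backend/$3;
        }
    }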
On Wednesday, 06 June 2018 15:42:25, PGNet Dev wrote: [..] > > There is official support for cache purging with the commercial version > > of Nginx: https://www.nginx.com/products/nginx/caching/. > > Ah, so not (yet) in the FOSS product. I see it's proxy_cache, not > fastcgi_cache, based ... > Like almost all official modules, it's independent of the protocol used… by Valentin V. Bartenev - Nginx Mailing List - English
Hi, On Wed, Jun 6, 2018 at 3:42 PM, PGNet Dev <pgnet.dev@gmail.com> wrote: > Hi > >> My $0.02, coming from experience building out scalable WP clusters, is: >> stick to Varnish here. > > Miscommunication on my part -- my aforementioned Varnish-in-front referred > to site dev in general. > > To date, it's been in front of Symfony sites. Works like a champ… by Robert Paprocki - Nginx Mailing List - English
Hi > My $0.02, coming from experience building out scalable WP clusters, is: > stick to Varnish here. Miscommunication on my part -- my aforementioned Varnish-in-front referred to site dev in general. To date, it's been in front of Symfony sites. Works like a champ there. Since you're apparently working with WP under real-world loads, do you perchance have a production-ready, V6-compatible… by PGNet Dev - Nginx Mailing List - English
Hi all, This problem is intermittent and only some of my viewers experience it. For reference, here's a screencast of hitting the URL via curl: https://d.pr/v/lRE2w2 and another one from a user: https://d.pr/i/uTWsst . My website is https://www.alittlebitofspice.com/ . Recently I set up reverse-proxy caching; here's the relevant code: location / { proxy_pass http… by Jane Jojo - Nginx Mailing List - English
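A typical reverse-proxy cache block for a site like this, as a sketch with assumed zone names, TTLs, and origin; the X-Cache-Status header is a debugging aid worth adding when misses are intermittent:

    proxy_cache_path /var/cache/nginx/site levels=1:2 keys_zone=site:10m
                     max_size=1g inactive=60m;

    upstream wp_origin {
        server 127.0.0.1:8080;      # assumed origin
    }

    server {
        listen 80;
        server_name www.alittlebitofspice.com;

        location / {
            proxy_cache site;
            proxy_cache_valid 200 301 10m;
            proxy_cache_use_stale error timeout updating;
            proxy_set_header Host $host;
            # Shows HIT/MISS/EXPIRED per response while debugging.
            add_header X-Cache-Status $upstream_cache_status;
            proxy_pass http://wp_origin;
        }
    }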
That last "# managed by Certbot" section looks wrong - it shouldn't be using "if ($host = ...", since that's inefficient and there are much better ways to do it. I have a very similar server, so here are the config files I use for it. I don't like pasting them into emails, so I made a GitHub Gist: https://gist.github.com/kohenkatz/08a74d757e0695f4ec3dc34c44ea4369 (that also me… by Moshe Katz - Nginx Mailing List - English
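The usual alternative to the Certbot-generated "if ($host = ...)" redirect is a dedicated plain-HTTP server block (a sketch with placeholder names and certificate paths):

    # Catch-all redirect, instead of an "if" inside the main server block.
    server {
        listen 80;
        server_name example.com www.example.com;    # placeholders
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.com www.example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;  # assumed paths
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        # ... the rest of the site configuration ...
    }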
Dear Moshe, I switched off the Seafile configuration, and that means the normal chat.mydomain.com works again with nginx. I then ran > sudo certbot --nginx and the site chat.mydomain.com now runs with SSL. So then I switched the Seafile conf on again --> Seafile works as always. AND Mattermost on chat.mydomain.com works, but ONLY if I add https:// in front of the web… by Nginx-Chris - Nginx Mailing List - English
Thank you so much for this, Peter. Very helpful. For what it's worth, I run a static WordPress website, so the configuration should not be very complicated. The link that you provided also led me to https://github.com/perusio/wordpress-nginx … by Quintin Par - Nginx Mailing List - English
Looks to me like your problem is that Seafile is using HTTPS but Mattermost is not. That said, I don't understand how you are able to get to Mattermost at all, since you are setting HSTS headers that should prevent your browser from going to a non-secure page on your domain. Add HTTPS configuration for Mattermost and see if that helps. -- Moshe Katz -- kohenkatz@gmail.com -- +1(301)867-3732 … by Moshe Katz - Nginx Mailing List - English
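A sketch of what that HTTPS configuration for Mattermost might look like (certificate paths assumed; 8065 is Mattermost's default listen port):

    server {
        listen 443 ssl;
        server_name chat.mydomain.com;

        ssl_certificate     /etc/letsencrypt/live/chat.mydomain.com/fullchain.pem;  # assumed
        ssl_certificate_key /etc/letsencrypt/live/chat.mydomain.com/privkey.pem;    # assumed

        location / {
            proxy_pass http://127.0.0.1:8065;
            proxy_set_header Host $host;
            # Mattermost uses WebSockets for live updates.
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }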
What happens if you only use one config file and put all of that in it? Nothing really stands out to me in your config. I run about 600 domain names through one nginx server, with many sub-domains in separate server blocks. I've had issues where a subdomain was not served correctly before. I ended up dumbing down the config to just server blocks with only access logs and a bunch of custom headers… by wickedhangover - Nginx Mailing List - English
Root server with Ubuntu 16.04, nginx version 1.10.3. I have an nginx server that serves one application: an open-source cloud server from Seafile that listens on cloud.mydomain.com. I then tried to add another application to my server: a Mattermost server that should listen on chat.mydomain.com. When I add the nginx config for Mattermost, it is only available when I deactivate the… by Nginx-Chris - Nginx Mailing List - English
Hello! On Mon, May 14, 2018 at 01:22:46PM -0400, vedranf wrote: > There is a problem when nginx is configured to try to follow redirects (301) > from the upstream server in order to cache the responses being redirected to, > rather than the short redirect itself. This worked in 1.12 and earlier releases. > Here is the simplified configuration I use, which used to work: > > server { p… by Maxim Dounin - Nginx Mailing List - English
Hello, There is a problem when nginx is configured to try to follow redirects (301) from the upstream server in order to cache the responses being redirected to, rather than the short redirect itself. This worked in 1.12 and earlier releases. Here is the simplified configuration I use, which used to work: server { proxy_cache something; location / { proxy_pass http://upstream; } location @handle3… by vedranf - Nginx Mailing List - English
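For reference, the widely circulated redirect-following pattern this configuration appears to be based on (a sketch: the upstream name and cache zone are from the post, while the resolver and the body of the named location are assumptions):

    server {
        proxy_cache something;

        location / {
            proxy_pass http://upstream;
            proxy_intercept_errors on;          # hand 3xx responses to error_page
            error_page 301 = @handle_301;
        }

        location @handle_301 {
            resolver 127.0.0.53;                # needed because proxy_pass gets a variable
            set $saved_location $upstream_http_location;
            proxy_pass $saved_location;         # fetch (and cache) the redirect target
        }
    }

Because proxy_cache is set at the server level, the followed response fetched in @handle_301 is cached as well, which matches the behavior the post describes as working in 1.12.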