Here are the additional details:

$ uname -a
Linux a002 4.15.0-177-generic #186-Ubuntu SMP Thu Apr 14 20:23:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.6 LTS"

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LI…

by roger_bc - Nginx Mailing List - English
Thanks, Sergey. We are simulating 1000 clients. Some get cache hits, and some go upstream, so there are more than 1000 connections. We have 24 workers running, each configured with:

events { worker_connections 1024; }

We are seeing the following errors from nginx:

21151#21151: 1024 worker_connections are not enough, reusing connections
21151#21151: accept4() failed (24: Too many open files)
2115…
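The "Too many open files" error above points at the per-process file descriptor limit rather than at worker_connections alone: each connection needs at least one descriptor, and a proxied request needs two (client side plus upstream side). A minimal sketch of the kind of adjustment usually suggested for this symptom (the numbers are illustrative assumptions, not values from this thread):

```nginx
# Raise the per-worker file descriptor limit (overrides the ulimit
# inherited from the environment for worker processes).
worker_rlimit_nofile 8192;

events {
    # Keep worker_connections well below the fd limit, since proxied
    # connections consume two descriptors each (client + upstream).
    worker_connections 4096;
}
```

With the original settings (worker_connections 1024 and a default 1024 fd ulimit), a single busy worker can exhaust its descriptors before the connection limit is even reached, which matches both error messages.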
Hello, my understanding is that worker_connections applies to each worker (e.g. when set to 1024, 10 worker processes could handle up to 10240 connections in total). But we are seeing "1024 worker_connections are not enough, reusing connections" from one worker while other workers are idle. Is there something we can do to balance connections more evenly across workers? This is from a performance test. The…
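One commonly discussed way to spread accepted connections across workers is the reuseport parameter of the listen directive (available since nginx 1.9.1), which gives each worker process its own listening socket and lets the kernel distribute incoming connections between them. A hedged sketch, with placeholder address and name:

```nginx
server {
    # reuseport creates a separate listening socket per worker;
    # the kernel balances new connections across those sockets instead
    # of all workers competing to accept() from a single shared socket.
    listen 443 ssl reuseport;
    server_name example.com;  # placeholder
}
```

Whether this helps depends on the traffic pattern: with a small number of long-lived keep-alive connections (as in some load tests), connections can still end up concentrated on a few workers.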
Hello, there seem to be two methods to tell nginx to re-open the log file after the file has been rotated (we use logrotate):

1) nginx -s reopen
2) kill -USR1

Which is the preferred method, and why? I am asking because we have seen nginx -s reopen fail because of a transient issue with the configuration. According to the man page, reopen should be the same as SIGUSR1, but the error we saw implie…
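For context: nginx -s reopen parses the configuration first (to locate the pid file) and then sends SIGUSR1 to the master process, which is why a transient configuration problem can make it fail, while sending the signal directly does not involve a config parse at all. A sketch of a typical logrotate stanza using the direct signal, assuming the default pid path /var/run/nginx.pid:

```conf
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        # Signal the master directly; no configuration parse is involved,
        # so a temporarily broken config cannot make rotation fail.
        [ -f /var/run/nginx.pid ] && kill -USR1 "$(cat /var/run/nginx.pid)"
    endscript
}
```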
Hello, we have observed a case where it seems that the proxy_cache_valid directive is ignored.

nginx version: 1.19.9
Config: proxy_cache_valid 200 206 30d;

Scenario:
* A cache file was corrupted (a file system issue). Part of the section that contains the headers had been overwritten with binary data.
* The resource represented by the corrupted cache file is requested.
* NGINX detects the c…
Hello, from a practical perspective, what would be considered an unreasonably large number of cache files (unique cache keys) in a single nginx server? 1M, 10M, 100M? With a large cache, would there be any significant benefit in using multiple caches (multiple keys_zones) in a single nginx server? Or in using two nginx servers on the same physical server (or VM)? I am aware of the ~8K keys (file…
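For reference on the sizing alluded to above: the proxy_cache_path documentation states that one megabyte of keys_zone shared memory stores about 8 thousand keys. A hedged sketch of what, say, 10M keys would imply (path and sizes are illustrative assumptions):

```nginx
# ~10,000,000 keys / ~8,000 keys per MB ≈ 1250 MB of keys_zone memory.
proxy_cache_path /var/cache/nginx/big
                 levels=1:2
                 keys_zone=big_cache:1280m
                 max_size=500g
                 inactive=30d
                 use_temp_path=off;
```

The levels=1:2 directory hashing matters at this scale too, since it keeps any single directory from accumulating millions of files.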
Thanks, Maxim, for correcting my misunderstanding. With what frequency is the cache manager run?

Roger

> On Apr 3, 2020, at 9:26 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>
> Hello!
>
> On Fri, Apr 03, 2020 at 08:33:43AM -0700, Roger Fischer wrote:
>
>>> You can just set the inactive time longer than your possible maximum expire time for the objects then t…
> You can just set the inactive time longer than your possible maximum expire time for the objects, then the cache manager won't purge the cache files even if the object is still valid but not accessed.

That may only have a small impact. As far as I understand, NGINX will remove an item only when the cache is full (i.e. it needs space for a new item). Items are removed based on the least-recently…
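The interplay being discussed can be made concrete: inactive (on proxy_cache_path) evicts entries that have not been *accessed* for that long, regardless of whether they are still fresh, while proxy_cache_valid controls how long a cached response is served without revalidation. A sketch with illustrative values following the quoted suggestion (inactive longer than the validity period; upstream name is a placeholder):

```nginx
# The cache manager removes entries untouched for 60 days, so a
# valid-but-idle entry survives at least as long as it is fresh.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app:100m inactive=60d;

server {
    location / {
        proxy_pass http://backend;   # placeholder upstream
        proxy_cache app;
        # Responses stay fresh for 30 days; with inactive=60d they are
        # not evicted merely for going unaccessed while still valid.
        proxy_cache_valid 200 206 30d;
    }
}
```

Separately from inactive, max_size-driven eviction (LRU when the cache is full) applies as described in the paragraph above.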
Hello, is there a hook into the nginx processing to modify the response body (and headers) before they are cached when using proxy_pass? I am aware of the body filters (http://nginx.org/en/docs/dev/development_guide.html#http_body_filters), which run before the response is delivered to the client. But I would prefer…
Hello, is it possible to have multiple server_name directives in the same server block? I.e. is the following possible?

server {
    listen 1.2.3.4:443 ssl;
    server_name *.site1.org *.site2.org;
    server_name ~^app1.*\.site3\.org$;
    ….

Or do I need to create a second server block?

Thanks… Roger

_______________________________________________
nginx mailing list
nginx@nginx.org
ht…
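For what it is worth, the pattern in the question is one nginx accepts: server_name takes several names in a single directive, and my understanding is that repeating the directive within one server block accumulates names rather than replacing them. A sketch mixing wildcard and regex names (regex names must begin with "~"):

```nginx
server {
    listen 1.2.3.4:443 ssl;
    # Several names per directive...
    server_name *.site1.org *.site2.org;
    # ...and an additional directive for the regex form.
    server_name ~^app1.*\.site3\.org$;
}
```

Exact-match and wildcard names are checked before regex names during request matching, so mixing forms in one block is fine.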
Hello, is there a way to check if a requested resource is in the cache? For example, “if” has the option “-f”, which could be used to check if a static file is present. Is there something similar for a cached resource?

Thanks… Roger
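There is no boolean test analogous to if (-f …) for cache entries, but the $upstream_cache_status variable (HIT, MISS, EXPIRED, STALE, …) reports what the cache did for the current request and can at least be surfaced for inspection. A sketch, assuming a cache zone named app_cache and a placeholder upstream:

```nginx
server {
    location / {
        proxy_pass http://backend;   # placeholder upstream
        proxy_cache app_cache;       # assumes a configured keys_zone
        # Expose the cache verdict so a test request can observe
        # whether the resource was served from cache.
        add_header X-Cache-Status $upstream_cache_status always;
    }
}
```

Note the variable is only set after the cache lookup happens as part of normal request processing; it cannot be used to branch *before* deciding whether to go upstream.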
Hello, proxy_store seems to be a much simpler alternative for “caching” pseudo-static resources. But there is very little discussion of it on the Internet or the nginx forum (compared to proxy_cache). Is there anything non-obvious that speaks against the use of proxy_store?

Thanks… Roger
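To make the comparison concrete, a minimal proxy_store setup looks roughly like this (paths and upstream name are illustrative). Unlike proxy_cache, proxy_store simply mirrors response bodies to disk as plain files: there is no expiry, no cache key beyond the URI-derived path, no Vary handling, and nothing ever purges the files for you.

```nginx
location / {
    root /data/mirror;
    # Serve the local copy if we already stored it...
    try_files $uri @fetch;
}

location @fetch {
    proxy_pass http://backend;          # placeholder upstream
    # ...otherwise fetch from upstream and save the body verbatim
    # under root + URI before serving it.
    proxy_store on;
    proxy_store_access user:rw group:r all:r;
    root /data/mirror;
}
```

The non-obvious costs are exactly the missing machinery: stale files are served forever unless something external deletes them, which is presumably why proxy_cache dominates the discussion.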