Thanks Mathew. I thought about it and even prototyped it with OpenResty, but I am concerned about ngx.shared.DICT.get_keys locking the whole dictionary and blocking workers that are trying to register new incoming connections. Is there some per-worker data structure available that can be read and reported from? The worker obviously knows all the connections it is handling and the various states…
by sachin.shetty@gmail.com - Nginx Mailing List - English
Hi, the status module prints the count of active connections. Is there a way to fetch more details about the currently running connections in nginx, such as the request URI and start time, similar to Apache's extended status? Thanks, Sachin
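For reference, the stock stub_status module exposes only aggregate counters; a minimal sketch (the location name and allowed address are placeholders):

```nginx
# Exposes aggregate counters: active connections, reading, writing, waiting.
# It does NOT show per-request details such as URI or start time.
location = /nginx_status {
    stub_status;
    allow 127.0.0.1;   # restrict to local monitoring only
    deny all;
}
```

Per-request detail comparable to Apache's extended status is not available in stock open-source nginx; NGINX Plus's status API or third-party modules are the usual routes.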
Hi, our request flow looks like this: client --> nginx --> haproxy --> tomcat. PUT requests with large bodies are used to upload files. Sometimes, due to application logic, Tomcat may reject an upload early and return 409; Tomcat does not drain the input stream, and we do not want to read the input stream but rather reject early. When Tomcat rejects a PUT request early with 409…
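One way to let an early upstream 409 short-circuit the upload is to stop nginx from buffering the whole request body first; a hedged sketch (the location path and upstream name are placeholders):

```nginx
location /upload {
    proxy_http_version 1.1;
    # Stream the client body to the upstream as it arrives instead of
    # spooling it to disk first, so Tomcat's early 409 can abort the transfer.
    proxy_request_buffering off;
    proxy_pass http://haproxy_backend;   # placeholder upstream
}
```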
Thank you, we use proxy_cache_lock as well, but in certain weird burst scenarios it still ends up filling the disk.
Got it. You are right, the response is returned by another block via X-Accel-Redirect. Thank you!
Thank you Maxim. Is there any way I can make the cache manager more aggressive in pruning and purging? We already leave 20% of the space free on the disks, but the concurrent request rate for large files can be huge and we still run into this issue. What are your thoughts about disabling buffering on such errors? This is not a fatal error, so we could stop buffering and switch to streaming mode.
Hi, we have nginx fronting our object storage, caching large objects. Objects are as large as 100 GB. The nginx cache max size is set to about 3.5 TB. When there is a surge of large-object requests and the disk quickly fills up, nginx runs into an out-of-disk-space error. I was expecting the cache manager to purge items based on LRU and make room for the new elements, but that does not happen.
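The relevant knobs live on proxy_cache_path; a hedged sketch (paths, zone name, and sizes are placeholders, and min_free requires nginx 1.19.1 or later):

```nginx
# max_size caps the cache; the cache manager evicts by LRU when it is exceeded.
# min_free (1.19.1+) additionally evicts when free filesystem space drops
# below the threshold, which helps when large writes outrun max_size checks.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=objcache:512m
                 max_size=3500g min_free=700g inactive=30d use_temp_path=off;
```

Note the cache manager runs periodically and removes entries in batches, so a burst of 100 GB objects can still fill the disk between its passes.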
Hi, I have a location block:

    location ~ /get_file$ {
        limit_rate_after 500m;
        limit_rate 1m;
        ...
        ...
    }

The limit_rate_after does not work when put inside the location block; if I move it right above the location line, i.e. into the server block, it works. Any idea how to make it work inside location and if blocks?
Hi, the message in the logs started appearing after I removed "keepalive 60" from the upstream block. The message is connection reset by peer, not by client, so I am a bit worried.
Hi, I had an upstream defined in my config with keepalive 60. But the server is a legacy one and does not handle keep-alive properly, so I removed the keepalive directive, and the errors I was seeing on the client from the upstream went away. But now I see a ton of these info log lines: 2017/10/03 04:37:51 1933#0: *6091340 recv() failed (104: Connection reset by peer) while sending to cl…
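For context, upstream keep-alive needs matching proxy directives to work correctly; a sketch of the usual pattern (upstream name and address are placeholders):

```nginx
upstream legacy_app {
    server 10.0.0.5:8080;   # placeholder address
    keepalive 60;           # pool of idle keep-alive connections per worker
}

server {
    location / {
        proxy_pass http://legacy_app;
        # Both directives are required for keep-alive to upstream servers;
        # without them nginx speaks HTTP/1.0 and sends Connection: close.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

If the backend genuinely mishandles persistent connections, dropping the keepalive directive (so nginx closes the connection after each request) is the safer configuration.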
Hi Maxim, I found one way to make this work, using Lua to set the cache name. It seems to be working OK; all my tests passed.

    -- Lua script
    local resty_md5 = require "resty.md5"
    local str = require "resty.string"
    local md5 = resty_md5:new()
    local posix = require "posix"
    local days_30 = 1000 * 60 * 60 * 24 * 30
    local days_90 = days_30 * 3
    ...
Thanks, I guess actual proxying is the only way out. I will try it.
Hi, we want to define multiple caches based on certain request headers (timestamp), so that we can put files modified in the last 10 days on SSDs, files modified in the last 30 days on HDDs, and so on. I understand that we could use the map feature to pick a cache dynamically, which is good and works for us. But when serving a file, we want to check all the caches, because a file modified in the last 11 days could still be on…
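The map-based selection can be sketched like this (the header name, zone names, paths, and sizes are assumptions; proxy_cache has accepted variables since 1.7.9):

```nginx
proxy_cache_path /ssd/cache levels=1:2 keys_zone=cache_ssd:100m max_size=500g;
proxy_cache_path /hdd/cache levels=1:2 keys_zone=cache_hdd:100m max_size=4000g;

# Map a request header carrying the file's age bucket to a cache zone.
map $http_x_file_age $cache_zone {
    default   cache_hdd;
    recent    cache_ssd;    # e.g. modified within the last 10 days
}

server {
    location / {
        proxy_cache $cache_zone;
        proxy_pass http://storage_backend;   # placeholder upstream
    }
}
```

The limitation in the question stands: each request is looked up in exactly one zone, so a file cached under cache_ssd is invisible to a request that maps to cache_hdd.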
Hi, the information is not publicly available; it is protected by authentication. We have an auth plugin which makes sure authentication happens before the request is routed to this cache.
Thanks Maxim for the reply. We have evaluated disk-based encryption etc., but that does not prevent sysadmins from viewing user data, which is a problem for us. Do you think we could build something using Lua and intercept read and write calls from the cache?
Hi, we are testing nginx as a file cache in front of our app, but the contents of the proxy cache directory are readable by anybody who has access to the machine. Is there a way to encrypt the files stored in the proxy cache folder so that they are not exposed to the naked eye, but nginx decrypts them on the fly before serving them to the user? Thanks, Sachin