Hey,

Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
>
> On Mon, May 14, 2018 at 01:22:46PM -0400, vedranf wrote:
>
> > There is a problem when nginx is configured to try to follow
> > redirects (301) from upstream server in order to cache responses
> > being directed to, rather than the short redirect it…

by vedranf - Nginx Mailing List - English
Hello,

There is a problem when nginx is configured to try to follow redirects (301) from the upstream server, in order to cache the responses being redirected to rather than the short redirect itself. This worked in 1.12 and earlier releases. Here is the simplified configuration I use and which used to work:

server {
    proxy_cache something;
    location / {
        proxy_pass http://upstream;
    }
    location @handle3…

by vedranf - Nginx Mailing List - English
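The configuration in the post is cut off at the named location. A minimal sketch of the usual "follow the 301 and cache the target" pattern it appears to describe is below; the name `@handle301` and the `error_page`/`proxy_intercept_errors` wiring are assumptions about the truncated part, not the poster's actual config (`$upstream_http_location` is a standard nginx variable holding the upstream's Location header):

```nginx
# Hedged sketch, not the original config: intercept the upstream 301
# and proxy to its Location target so the target gets cached.
proxy_cache_path /var/cache/nginx keys_zone=something:10m;

server {
    listen 80;
    proxy_cache something;

    location / {
        proxy_pass http://upstream;
        proxy_intercept_errors on;     # let error_page handle the 301
        error_page 301 = @handle301;   # assumed name for the named location
    }

    location @handle301 {
        # Follow the redirect internally; the client never sees the 301.
        proxy_pass $upstream_http_location;
    }
}
```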
Hello,

I've recently upgraded one of the nginx servers in a caching (proxy_cache module) cluster from 1.8.1 to 1.10, and soon after I noticed an unusually high number of various errors only on that server, which I eventually pinpointed to a mismatch between the actual cached file size on disk and the size reported in the file metadata (either Content-Length or something else). Apparently, cached files…

by vedranf - Nginx Mailing List - English
Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!

Hello and thanks for the reply!

> > I assume the mentioned error is due to relatively frequent nginx
> > restarts and is benign. There's nothing else in the error log
> > (except for occasional upstream timeouts). I'm aware this likely
> > isn't enough info to debug the…

by vedranf - Nginx Mailing List - English
Hello,

I'm having an issue where the nginx (1.8) cache manager suddenly just stops deleting content, so the disk soon ends up full until I restart nginx by hand. After it is restarted, it works normally for a couple of days, but then it happens again. The cache has some 30-40k files, nothing huge. Relevant config lines are:

proxy_cache_path /home/cache/ levels=2:2 keys_zone=cache:25m inac…

by vedranf - Nginx Mailing List - English
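The `proxy_cache_path` line in the post is cut off at `inac…` (presumably the `inactive` parameter). For context, a complete directive of this shape looks like the sketch below; the `inactive` and `max_size` values here are purely illustrative assumptions, not the poster's settings. The cache manager evicts by `max_size`, and entries untouched for `inactive` are removed regardless of freshness:

```nginx
# Illustrative values only; the original post is truncated at "inac…".
proxy_cache_path /home/cache/ levels=2:2 keys_zone=cache:25m
                 inactive=7d max_size=100g;
```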
Hello,

I'm using the proxy_cache module and I noticed nginx replies with the whole response and a 200 OK status to requests such as this, for content that is already in the cache:

User-Agent: curl/7.26.0
Accept: */*
Range: bytes=128648358-507448924
If-Range: Thu, 26 Nov 2015 13:48:46 GMT

However, if I remove the "If-Range" request header, I get the correct content range in return. I ena…

by vedranf - Nginx Mailing List - English
Valentin V. Bartenev Wrote:
-------------------------------------------------------
> On Friday 24 July 2015 09:55:04 vedranf wrote:
> > Valentin V. Bartenev Wrote:
> > -------------------------------------------------------
> > > On Thursday 23 July 2015 14:51:58 vedranf wrote:
> > > > Valentin V. Bartenev Wrote:
> > > > …

by vedranf - Nginx Mailing List - English
Valentin V. Bartenev Wrote:
-------------------------------------------------------
> On Thursday 23 July 2015 14:51:58 vedranf wrote:
> > Valentin V. Bartenev Wrote:
> >
> > > It more looks like a bug in cephfs. writev() should never return
> > > ERESTARTSYS.
> >
> > I've talked to the ceph people, they say ERESTARTSYS shows up i…

by vedranf - Nginx Mailing List - English
Hello,

So the Ceph devs' final reply was: "ngx_write_fd() is just a write(), which, when interrupted by SIGALRM, fails with EINTR because SA_RESTART is not set. We can try digging further, but I think nginx should retry in this case."

Let me know what you think.

Thanks,
Vedran

by vedranf - Nginx Mailing List - English
Valentin V. Bartenev Wrote:
> It more looks like a bug in cephfs. writev() should never return
> ERESTARTSYS.

I've talked to the ceph people; they say ERESTARTSYS shows up in the strace output but is handled by the kernel, and that writev(2) is interrupted by the SIGALRM, which actually appears in the strace output just after writev fails. I also failed to get this error by doing…

by vedranf - Nginx Mailing List - English