Hi, I was looking into using the proxy_cache_lock mechanism to collapse upstream requests and reduce traffic. It works great right out of the box, but one issue I found is that if there are n client requests held by proxy_cache_lock, only one of those clients gets the response as soon as the upstream sends it to Nginx; the remaining n-1 clients wait until the response is fully flushed to the cache
by loopback_proxy - Nginx Mailing List - English
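The behavior described above comes from the proxy_cache_lock directives. A minimal sketch of the setup being discussed, with a placeholder zone name, backend address, and paths (all assumptions, not from the original post):

```nginx
# "demo_zone", the paths, and the upstream address are illustrative.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=demo_zone:10m max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache demo_zone;
        proxy_cache_valid 200 10m;

        # Collapse concurrent cache misses: only one request per cache key
        # is forwarded upstream; the others wait for the cache entry.
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;  # how long a waiting request blocks
        proxy_cache_lock_age 5s;      # pass another request upstream after this
    }
}
```

With this configuration, the waiting requests are served from the cache entry once it is written, which is exactly the point where the poster observes the extra latency for the n-1 waiters.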
I am wondering if Nginx will ever support caching without buffering responses. Buffering the full response before sending any data to the client increases the first-byte latency (TTFB). In a perfect world, if nginx could stream the data to the cache file and to the client simultaneously, that would solve the TTFB issue. From experience I know that Squid follows this methodology. I am curious why
Ahhh interesting, that did the trick. Thank you so much. I have also been trying to understand the internals of nginx caching and how it works. I read the nginx blog about the overall architecture and the nginx man page about the proxy_cache_* directives. I am looking for the internal architecture of how the caching subsystem works. If you have any documentation or articles about it, that
You could just do proxy_pass http://192.168.10.34$request_uri — see https://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_uri for more.
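In context, that suggestion would look something like the fragment below; the location block is an assumption added for completeness, and 192.168.10.34 is the backend address from the thread:

```nginx
location / {
    # $request_uri carries the original, unmodified request URI,
    # including the query string, through to the upstream.
    proxy_pass http://192.168.10.34$request_uri;
}
```

Note that when proxy_pass contains variables, nginx sends exactly the URI that the expression evaluates to, rather than rewriting the matched location prefix.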
I am new to nginx caching but have worked with nginx a lot. I tried enabling the caching feature in our repository but it never worked, so I thought I would pull a fresh copy of nginx and turn it on. I ended up with the same issue. For some reason, nginx is not able to create the cache file in the cache dir. I have already turned on proxy buffering and set full rw permission for all users on the cache dir
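For reference, a minimal configuration that should produce cache files is sketched below; the zone name, ports, and paths are placeholders. Beyond directory permissions, common reasons cache files never appear include upstream responses carrying Cache-Control: no-store/private or Set-Cookie headers (which nginx does not cache by default), or proxy_buffering having been switched off:

```nginx
# "demo" and all addresses/paths here are illustrative placeholders.
# The nginx worker user must be able to write to the cache directory,
# and proxy_buffering must remain on (the default) for caching to work.
proxy_cache_path /var/cache/nginx/demo keys_zone=demo:10m;

server {
    listen 8080;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_cache demo;
        proxy_cache_valid 200 5m;

        # Expose HIT/MISS/EXPIRED to clients for debugging.
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

Checking the X-Cache-Status response header is usually the quickest way to see whether nginx considered the response cacheable at all.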