Thanks for your reply.
I'm currently using this mechanism (a small variation of yours) to work around the limitation. On every intermediate cache server I'm using the following (the whole chain runs nginx):
map $upstream_cache_status $accel_expires_from_upstream_cache_status {
    default  "";
    STALE    1;
    UPDATING 1;
}
more_set_headers "X-Accel-Expires: $accel_expires_from_upstream_cache_status";
(The key is including UPDATING as well, because proxy_cache_use_stale is set to updating only in our setup.)
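For completeness, here is a sketch of where this sits in each intermediate layer's config. The cache zone name, paths, and upstream address are placeholders, not our real setup, and more_set_headers comes from the headers-more module:

proxy_cache_path /var/cache/nginx keys_zone=edge_cache:10m;

server {
    listen 80;

    location / {
        proxy_pass http://next-hop.example.com;
        proxy_cache edge_cache;
        proxy_cache_use_stale updating;

        # Responses served stale (or while a background update is in
        # flight) get X-Accel-Expires: 1, so the downstream nginx layer
        # keeps them for at most 1 second instead of a full TTL.
        more_set_headers "X-Accel-Expires: $accel_expires_from_upstream_cache_status";
    }
}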
So far I think we can manage with this work-around for our setup, but it has the drawback of potentially serving slightly out-of-date content.
In our case, said items are ~1k files with a TTL of 2~10s, and they MUST be fresh for our apps to work correctly. We're considering running an artificial "freshener" on the intermediary caches, but I fear we can't do this very efficiently in our case, due to the nature of these files (HLS video playlists, whose locations and bitrates change upon business decisions). Also, it would not be very practical to refresh hundreds of files with a 2s TTL at 1s intervals across 3 cache layers if, as we expect, a good part of them are not being requested at any given time.
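To make the idea concrete, here is a minimal sketch of the kind of freshener we're considering. Everything in it is hypothetical (the URL list, the half-TTL cadence, the 2s fetch timeout), and it deliberately refreshes unconditionally rather than tracking which playlists are actually in demand:

```python
# Hypothetical freshener sketch: re-request each playlist at half its
# TTL so intermediate caches are repopulated before entries expire.
import threading
import urllib.request


def refresh_interval(ttl_seconds: float) -> float:
    """Refresh at half the TTL, leaving headroom for slow fetches."""
    return ttl_seconds / 2.0


def refresh_once(urls, fetch=None):
    """Fetch every URL once; return the list of URLs that succeeded."""
    if fetch is None:
        # Default: a plain GET through the cache layer being freshened.
        fetch = lambda u: urllib.request.urlopen(u, timeout=2).read()
    ok = []
    for url in urls:
        try:
            fetch(url)
            ok.append(url)
        except Exception:
            pass  # a failed refresh just means one stale window
    return ok


def run_freshener(urls, ttl_seconds, stop_event):
    """Loop until stop_event is set, refreshing on a fixed cadence."""
    interval = refresh_interval(ttl_seconds)
    while not stop_event.is_set():
        refresh_once(urls)
        stop_event.wait(interval)
```

With a 2s TTL this would issue one request per file per second per layer, which is exactly the volume that worries us when most of those files have no real viewers.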
Thanks for the suggestion.
I'm still looking for a way to hard lock the updating items however :)
Best regards,
Aurélien