Re: Implementing proxy_cache_lock when updating items

Maxim Dounin
September 02, 2015 12:20PM
Hello!

On Mon, Aug 31, 2015 at 09:16:00AM -0400, footplus wrote:

> Hello,
>
> I am currently implementing a caching proxy with many short-lived items,
> expiring at a specific date (Expires header set at an absolute time between
> 10 and 30 seconds in the future by the origin). For various reasons, my
> cache is multi-level (edge, intermediate 2, intermediate 1, origin) and
> needs to make the items expire at the edge at exactly the time set in the
> Expires header. When the item expires, I want an updated version of the item
> to be available at the same URL.
>
> I have been able to make it work, and I'm using proxy_cache_lock at every
> cache level to ensure I'm not hammering the origin servers or the proxy
> server. As documented, this works perfectly for items not present in the
> cache.
>
> I am also using "proxy_cache_use_stale updating" to avoid this hammering in
> the case of items that are already in the cache.
>
> My problems begin when an in-cache item expires. Two top-level caches (let's
> name them E1 and E2, for example) request an updated fragment from the level
> below (INT). INT requests the fragment from the level below it for the first
> request, but for the second request a stale fragment is sent (according to
> the proxy_cache_use_stale setting, with the UPDATING status). So far, all is
> working according to the docs. The problem is that fragments in the UPDATING
> status are stale and cannot be cached at all by E1/E2, which can have a
> significant impact on INT, because all the requests made to E1/E2 are now
> proxied to INT directly until INT has a fresh version of the item installed
> in its cache (this is quite a short duration, but in testing it generates
> bursts of 15 to 50 requests in the meantime).
>
> Is there a way to configure proxy_cache_lock so that it also works for
> expired in-cache items? If not, can you suggest a way to implement this (I'm
> not familiar with nginx's source, but I'm willing to dig into it)?
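
(For reference, a minimal sketch of the kind of per-tier setup described
above; the "int_cache" zone name, the cache path and the "lower_tier"
upstream address below are hypothetical:)

    # http{} context of an intermediate tier (INT)
    upstream lower_tier {
        server 192.0.2.10;    # next cache tier, or the origin
    }

    proxy_cache_path /var/cache/nginx/int levels=1:2
                     keys_zone=int_cache:10m max_size=1g inactive=5m;

    server {
        listen 80;

        location / {
            proxy_pass http://lower_tier;

            proxy_cache     int_cache;
            proxy_cache_key $scheme$host$request_uri;

            # collapse concurrent requests for items not yet in the cache
            proxy_cache_lock on;

            # serve the stale copy while one request refreshes an
            # expired item
            proxy_cache_use_stale updating;
        }
    }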

Instead, you may consider using "proxy_cache_use_stale updating"
in combination with one of the following:

- Return some fixed short expiration time for stale responses
returned by INT. This will ensure that edge servers cache them
for at least some time and won't try to request new versions
over and over.

It should be possible to do so with the "expires" directive and a
map{} from $upstream_cache_status, e.g.:

map $upstream_cache_status $expires {
    default  "";
    STALE    "10s";
}

expires $expires;

Alternatively, if your edge servers use nginx, you can do

map $upstream_cache_status $expires {
    default  "";
    STALE    "10";
}

add_header X-Accel-Expires $expires;

and it will apply to edge servers only, without changing any other
response headers.

- Ensure that the version cached by INT will be considered stale
before it becomes uncacheable according to the HTTP headers.
This can be done either with proxy_ignore_headers +
proxy_cache_valid, or by using the X-Accel-Expires header returned
by the backends; see the sketch after this list.
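
(For reference, here is a minimal sketch of the proxy_ignore_headers +
proxy_cache_valid variant; the "int_cache" zone and "lower_tier"
upstream names are hypothetical, and the 5s validity is only an
illustration and should simply be shorter than the origin's Expires
lifetime. INT then treats items as stale before their Expires time,
while the original Expires header is still passed downstream, so a
copy served stale while updating can still be cached by the edges:)

    location / {
        proxy_pass http://lower_tier;

        proxy_cache int_cache;

        # ignore origin freshness headers for INT's own caching
        # decisions; they are still forwarded to the edges unchanged
        proxy_ignore_headers Expires Cache-Control;

        # consider cached responses stale before their Expires time
        proxy_cache_valid 200 5s;

        proxy_cache_lock      on;
        proxy_cache_use_stale updating;
    }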


--
Maxim Dounin
http://nginx.org/
