> What I'd suggest instead is to set up a load balancer with URI hashing in front of it, so the cache hit ratio is as high as possible without multiple layers caching the same object.

We can also combine the LB and cache nodes on one machine, as explained in the nginx blog, and that could be very efficient and scalable. I should monitor our network for a few weeks, but I think overall this…

by satrobit - Nginx Mailing List - English
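The URI-hashing idea above can be sketched in nginx configuration. This is a minimal sketch, not from the thread: the node names (`cache1`–`cache3`) are hypothetical, and `hash $request_uri consistent;` pins each URI to one cache node so the same object is not cached on several nodes.

```nginx
# Load balancer in front of the cache tier. Hashing on the request
# URI means each object lands on exactly one cache node.
upstream cache_nodes {
    hash $request_uri consistent;     # consistent hashing by URI
    server cache1.example.com:80;     # hypothetical cache nodes
    server cache2.example.com:80;
    server cache3.example.com:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://cache_nodes;
    }
}
```

With `consistent` set, adding or removing a cache node remaps only a small fraction of URIs instead of reshuffling the whole keyspace.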
We currently have ~30k req/s, but our network is growing very fast, so I need to make sure our architecture is scalable. After some research, I've decided to go with individual nginx nodes for now. If we send too many requests to our upstream, I'll probably set up the multi-layer architecture you mentioned. Thank you for your help.

On Sat, Sep 23, 2017 at 2:37 PM, Lucas Rolff <lu…

by satrobit - Nginx Mailing List - English
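The multi-layer architecture mentioned above could look roughly like this on an edge node: cache locally first, and on a miss go through a URI-sharded parent cache tier instead of hitting the origin directly. All hostnames, paths, and sizes here are illustrative assumptions, not from the thread.

```nginx
# --- on an edge node ---
# Local cache for this node; metadata lives in the keys_zone.
proxy_cache_path /var/cache/nginx/edge keys_zone=edge:100m max_size=50g;

# Parent cache tier, sharded by URI so each object is fetched from
# the origin by at most one parent node.
upstream parent_cache {
    hash $request_uri consistent;
    server parent1.example.com:80;   # hypothetical parent caches
    server parent2.example.com:80;
}

server {
    listen 80;
    location / {
        proxy_cache edge;
        proxy_cache_valid 200 10m;   # example freshness window
        proxy_pass http://parent_cache;
    }
}
```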
Sorry for the confusion. My problem is that I need to cache items as much as possible, so even if one node had the storage capacity to satisfy my needs, it couldn't handle all the requests; and we can't afford to have multiple nginx nodes each request an item from our main server whenever it is requested on a different nginx node. For that problem I have a few scenarios, but they either have huge overhead on our ser…

by satrobit - Nginx Mailing List - English
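One common way to cut duplicate requests to the main server, independent of the sharding discussion, is request collapsing on each cache node: only one concurrent miss per cache entry goes upstream, and the others wait for (or are served stale from) that one fetch. A sketch, with hypothetical paths and origin name:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=edge:100m max_size=20g;

server {
    listen 80;
    location / {
        proxy_cache edge;
        proxy_cache_lock on;                          # collapse concurrent misses
        proxy_cache_use_stale updating error timeout; # serve stale while refreshing
        proxy_pass http://origin.example.com;         # hypothetical main server
    }
}
```

This does not stop different nodes from each fetching the same object once, but it bounds the per-object load from any single node.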
Hello, Since nginx stores some cache metadata in memory, is there any way to share a cache directory between two nginx instances? If it can't be done, what do you think is the best way to go when we need to scale the nginx caching storage? Thanks

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

by satrobit - Nginx Mailing List - English
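For context on the metadata the question refers to: the cache key index lives in the shared-memory zone declared by `proxy_cache_path`, not on disk, which is why a second, independent nginx instance pointed at the same directory would not see a consistent view of the cache. Paths and sizes below are illustrative:

```nginx
# keys_zone=mycache:100m is the in-memory index of cached keys;
# the files under /var/cache/nginx hold only the response bodies.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:100m
                 max_size=10g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache mycache;
        proxy_pass http://origin_backend;   # hypothetical upstream name
    }
}
```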