
Re: Scaling nginx caching storage

September 23, 2017 03:52PM
We currently have ~30k req/s, but our network is growing very fast, so I
need to make sure our architecture is scalable.

After some research, I've decided to go with individual nginx nodes for
now. If we send too many requests to our upstream, I'll probably set up
the multi-layer architecture you mentioned.

Thank you for your help.
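The multi-layer setup discussed in the quoted reply below could be sketched roughly like this (an editorial sketch, not from the thread; hostnames, paths, and sizes are hypothetical):

```nginx
# Edge tier (the public-facing instances): cache locally and fetch
# misses from the mid tier instead of the origin.
proxy_cache_path /var/cache/nginx/edge keys_zone=edge:100m max_size=500g;

upstream mid_tier {
    server cache-mid-1.example.com;
    server cache-mid-2.example.com;
}

server {
    listen 80;
    location / {
        proxy_cache edge;
        proxy_pass http://mid_tier;
    }
}

# Mid tier (the 1-2 instances acting as shared upstream): the only
# machines that ever talk to the origin.
# proxy_cache_path /var/cache/nginx/mid keys_zone=mid:200m max_size=2000g;
# location / { proxy_cache mid; proxy_pass http://origin.example.com; }
```

With this layout, a miss on any edge node costs at most one origin fetch per item across the whole fleet, instead of one per edge node.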

On Sat, Sep 23, 2017 at 2:37 PM, Lucas Rolff <lucas@lucasrolff.com> wrote:

> > if one node had the storage capacity to satisfy my needs it couldn't
> handle all the requests
>
>
>
> What amount of requests / traffic are we talking about, and what kind of
> hardware do you use?
> You can make nginx serve 20+ gigabits of traffic from a single machine if
> the content is right, or 50k+ req/s.
>
>
>
> > But at this point I'm beginning to wonder if it's even worth it. Should
> > I settle for having multiple nginx nodes request the same item from our
> > upstream server?
>
>
>
> If you’re offloading 99.xx% of the content to nginx anyway, a few extra
> requests to the upstream shouldn’t really matter much.
>
> You could even have multiple layers of nginx to lower the amount of
> upstream connections going to the server – so on your let’s say 10 nginx
> instances, you could use 1-2 nginx instances as upstream, and on those 1-2
> nginx instances use the actual upstream.
>
>
>
> Generally speaking you’ll have downsides when sharing storage or cache
> between multiple servers; it adds a lot of complexity just to minimize
> cost, and it might turn out you do not actually save anything anyway.
>
>
>
> Best Regards,
>
> Lucas
>
>
>
> *From: *nginx <nginx-bounces@nginx.org> on behalf of Amir Keshavarz <
> amirkekh@gmail.com>
> *Reply-To: *"nginx@nginx.org" <nginx@nginx.org>
> *Date: *Saturday, 23 September 2017 at 11.48
> *To: *"nginx@nginx.org" <nginx@nginx.org>
> *Subject: *Re: Scaling nginx caching storage
>
>
>
> Sorry for the confusion.
>
> My problem is that I need to cache items as much as possible, so even if
> one node had the storage capacity to satisfy my needs, it couldn't handle
> all the requests, and we can't afford to have multiple nginx nodes request
> an item from our main server each time it is requested on a different
> nginx node.
>
>
>
> For that problem I have a few scenarios, but they either put huge overhead
> on our servers and network or are not suitable for a sensitive production
> environment because they cause weird problems (e.g. sharing storage).
>
>
>
> But at this point I'm beginning to wonder if it's even worth it. Should I
> settle for having multiple nginx nodes request the same item from our
> upstream server?
>
>
>
>
>
> On Sat, Sep 23, 2017 at 1:48 PM, Lucas Rolff <lucas@lucasrolff.com> wrote:
>
> > is there any way to share a cache directory between two nginx instances?
>
> > If it can't be done, what do you think is the best way to go when we
> > need to scale the nginx caching storage?
>
>
>
> One is about using the same storage for two nginx instances; the other is
> about scaling the nginx cache storage.
>
> I believe those are two different things.
>
>
>
> There’s nothing that prevents you from having two nginx instances reading
> from the same cache storage – however, you will get into scenarios where,
> if you try to write from both machines (let’s say both nginx instances try
> to cache the same file), you might have some issues.
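Within a single instance, nginx can at least collapse concurrent fills of the same item via proxy_cache_lock; note this does not coordinate writes across instances. A minimal sketch (editorial, not from the thread; the cache zone name and upstream host are hypothetical):

```nginx
location / {
    proxy_cache my_cache;
    # Only one request at a time populates a new cache element on this
    # instance; other requests for the same item wait (up to
    # proxy_cache_lock_timeout) and are then served from the cache.
    proxy_cache_lock on;
    proxy_cache_lock_timeout 5s;
    proxy_pass http://origin.example.com;
}
```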
>
>
>
> Why exactly would you need two instances to share the same storage?
>
> And what scale do you mean by scaling the nginx caching storage?
>
>
>
> Currently there’s really only a limit to your disk size and the size of
> your keys_zone – if you have 50 terabytes of storage, just set the
> keys_zone size to be big enough to contain the number of files you want to
> manage (you can store about 8,000 files per 1 megabyte).
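As an aside on sizing: for the 50-terabyte example above, if the average object were around 1 MB, that is roughly 50 million objects; at about 8,000 keys per megabyte of keys_zone, that works out to roughly 6,250 MB of shared memory. A sketch (editorial; the path and zone name are hypothetical):

```nginx
# ~50,000,000 objects / 8,000 keys per MB ≈ 6,250 MB of keys_zone.
proxy_cache_path /data/nginx/cache levels=1:2
                 keys_zone=big_cache:6400m
                 max_size=50000g inactive=30d;
```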
>
>
>
>
>
>
>
> *From: *nginx <nginx-bounces@nginx.org> on behalf of Amir Keshavarz <
> amirkekh@gmail.com>
> *Reply-To: *"nginx@nginx.org" <nginx@nginx.org>
> *Date: *Saturday, 23 September 2017 at 10.58
> *To: *"nginx@nginx.org" <nginx@nginx.org>
> *Subject: *Scaling nginx caching storage
>
>
>
> Hello,
>
> Since nginx stores some cache metadata in memory, is there any way to
> share a cache directory between two nginx instances?
>
>
>
> If it can't be done, what do you think is the best way to go when we need
> to scale the nginx caching storage?
>
>
>
> Thanks
>
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
>