Re: Memory Management ( > 25GB memory usage)

Jonathan Vanasco
June 03, 2013 11:18AM
On Jun 3, 2013, at 10:13 AM, Belly wrote:

>>> What is the best setting for my situation?
>>
>> I would recommend using "fastcgi_max_temp_file_size 0;" if you
>> want to disable disk buffering (see [1]), and configuring some
>> reasonable number of reasonably sized fastcgi_buffers. I would
>> recommend starting tuning with something like 32 x 64k buffers.
>>
>> [1] http://nginx.org/r/fastcgi_max_temp_file_size
>>
>
> I read about fastcgi_max_temp_file_size, but I'm a bit afraid of it.
> The documentation for fastcgi_max_temp_file_size 0; says that data will be
> transferred synchronously. What does that mean exactly? Is it faster/better
> than disk buffering? Nginx is built in an asynchronous way. What happens if a
> worker does a synchronous job inside an asynchronous one? Will it block the
> event loop?


It's always been my understanding that in this context, "synchronously" means that nginx is proxying the data from php/fcgi to the client in real time.

This sounds like a typical application load-balancing problem.

The disk buffering / temp files allow nginx to immediately "slurp" the entire response from the backend process and then serve it to the downstream client. This has the advantage of letting you immediately re-use the fcgi process for dynamic content: slow or hung-up connections downstream won't tie up your pool of fcgi/apache processes.
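For reference, a rough sketch of what that buffered setup might look like, roughly along the lines Maxim suggested earlier in the thread. The socket path and sizes here are my own placeholders, not something tuned for your box:

    location ~ \.php$ {
        include                    fastcgi_params;
        fastcgi_pass               unix:/var/run/php-fpm.sock;  # assumed socket path

        fastcgi_buffer_size        64k;      # buffer for the first part of the response
        fastcgi_buffers            32 64k;   # in-memory buffers before nginx spills to disk
        fastcgi_max_temp_file_size 1024m;    # cap on the temp file (this is the default); 0 disables disk buffering
    }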

Restated in terms of blocking: the temp files allow the blocking to happen within nginx instead of php (nginx can handle 10k connections; php is limited to the number of processes). By removing the temp files, the blocking will happen within php instead.

My advice would be to use URL partitioning to segment this type of behavior. I would only allow specific URLs to have no temp files, and I would proxy them back to a different pool of fcgi (or apache) servers running with a tweaked config. This would keep the blocking activity from the routes serving large files from affecting the "global" pool of php processes.
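A rough sketch of that partitioning, with made-up pool names and paths just to illustrate the idea:

    upstream php_general  { server unix:/var/run/php-fpm-general.sock; }   # the normal pool
    upstream php_bigfiles { server unix:/var/run/php-fpm-bigfiles.sock; }  # the tweaked pool

    server {
        # listen, server_name, root, etc. omitted

        location /downloads/ {
            # the expensive / large-response routes: no temp files,
            # so any blocking stays inside this dedicated pool
            include                    fastcgi_params;
            fastcgi_pass               php_bigfiles;
            fastcgi_max_temp_file_size 0;
        }

        location ~ \.php$ {
            # everything else keeps the default disk buffering
            include       fastcgi_params;
            fastcgi_pass  php_general;
        }
    }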

I would also look into periodic reloads of nginx, to see if that frees things up. If so, that might be a simpler / more elegant solution.
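If a reload does help, it can be as simple as a cron entry. The interval below is a guess on my part; a reload is graceful, so old workers finish their in-flight requests before exiting:

    # crontab: gracefully reload nginx every 6 hours
    0 */6 * * * /usr/sbin/nginx -s reload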

I encountered problems like this about 10 years ago with mod_perl under apache. The aggressive code optimizations and memory/process management were tailored to making the application work very well, but did not play nice with the rest of the box. The fix was to keep max_requests low and move to a "vanilla + mod_perl apache" setup. Years later, nginx became the vanilla apache.

Similar issues happen in the python and ruby communities as well: more expensive or intensive routes are often sectioned off and dispatched to a different pool of servers, so their workload doesn't affect the rest of the requests.











_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
