Hello, unfortunately no :( We have doubled the server memory to 128 GB. Sincerely, Petr Holik (Nginx Mailing List - English)
Ok, thanks for the info. I'll do some research. I read some articles about memory allocation, and I think that when the system is about to run out of memory, it will try to reclaim freed pages; in normal situations, when there is enough RAM, it does not do this, in order to avoid memory fragmentation. Petr Holik
Hello Maxim, thanks for the reply. Is there a possibility to purge the allocated buffers (RAM) in old (gracefully shutting down) worker processes? IMO a worker process keeps all of its allocated memory until the last client disconnects. That is a real issue for us: we currently need 32 GB of spare RAM just to be able to handle a reload under load. Sincerely, Petr Holik
Hello, we are running nginx 1.2.7 with this in the conf: output_buffers 5 5m; sendfile off; That works well, BUT if I reload the server configuration with nginx -s reload, memory consumption stays doubled for a few hours (clients use long-lived (few-hour) TCP connections). Is this behavior correct? Can we avoid it? We have to have twice as much RAM just to be able to restart nginx under load. Sincerely, Petr Holik
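For reference, the two directives quoted inline above would sit in the http (or server) context roughly like this; everything other than output_buffers and sendfile is illustrative scaffolding:

```nginx
http {
    # Up to 5 output buffers of 5 MB each per connection.
    # With long-lived connections, each old worker keeps these
    # buffers allocated until its last client disconnects.
    output_buffers 5 5m;
    sendfile       off;

    server {
        listen 80;
        # site configuration ...
    }
}
```

With buffers this large, per-connection memory dominates, which is why a reload (old workers draining, new workers serving) can roughly double the footprint until the old connections close.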
Hello, I was facing the same issue; here is what I investigated: you can change the php-fpm.conf access.format variable to something that will not exceed 1024 bytes (sadly there is nothing to limit the size of a particular variable, such as the URL, which could solve this issue smartly). Or you can recompile PHP-FPM and define MAX_LINE_LENGTH to something that fits your needs. Or, which I have chosen: change the appli Petr Holik (Php-fpm Mailing List - English)
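As an illustration of the first option, a shortened access.format in the pool configuration might look like the sketch below. The format specifiers are standard PHP-FPM ones; the exact selection is only an example, and the main lever is omitting unbounded fields such as the full query string:

```ini
; php-fpm pool configuration - a compact access log format that
; keeps log lines well under the 1024-byte limit by logging only
; remote IP, user, time, method, request URI, and status.
access.format = "%R - %u %t \"%m %r\" %s"
```

Since the request URI (%r) itself can still be arbitrarily long, this only reduces the risk; recompiling with a larger MAX_LINE_LENGTH, as mentioned above, is the hard guarantee.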