I have now tested the second patch as well. There are no more socket leaks with both fixes. Thanks!
Thanks a lot for the detailed explanation. The first patch reduced the socket leaks by >99%. I will run tests with the second patch beginning next week and check whether the leaks go down to zero. Will these two fixes be integrated in the next release, 1.11.2? Thanks!
Wow, that was fast, thanks :) I'm doing tests now with the h2 fix. It looks promising so far. I will continue to run various tests to see if there are scenarios that still trigger an open socket leak.

- At which point in the debug log did you see that something was going wrong?
- Is there an explanation of what went wrong and what the patch fixes?
- Are there other known cases that can cause open socket leaks?
Here you go: https://tempfile.me/dl/ES9tWLqjnFozdx/ Thanks!
Here is more of the debug log. I had to shorten it as I got the message "Please shorten your messages, the body is too large". Thanks!

    2016/06/19 19:53:09 8724#0: *19047 accept: <removed>:54691 fd:236
    2016/06/19 19:53:09 8724#0: *19047 event timer add: 236: 60000:1466358849721
    2016/06/19 19:53:09 8724#0: *19047 reusable connection: 1
    2016/06/19 19:53:09 8724#0: *190...
I already did. Is there something specific I should look for in that debug log? Here is a small extract (note: I replaced the request with <removed>):

    ...
    2016/06/19 19:53:11 8724#0: *19047 event timer: 63, old: 1466358851127, new: 1466358851267
    2016/06/19 19:53:11 8724#0: *19047 http upstream exit: 0000000000000000
    2016/06/19 19:53:11 8724#0: *19047 finalize http upstream request...
The dump was 550 GB. So I guess the only explanation for this is the accumulated keys_zone sizes. There are no third-party modules. We only see the leaks for specific HTTP/2 traffic, and only at reload.
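[For what it's worth, each keys_zone declares a shared memory segment that is mapped into the worker processes and can therefore end up in the core dump. A purely illustrative sketch of how several zones accumulate; the paths, names, and sizes below are made up, not the real config:

    # Each keys_zone allocates shared memory that is mapped into every
    # worker process, so it counts toward the size of a core dump.
    proxy_cache_path /var/cache/nginx/a levels=1:2 keys_zone=zone_a:512m max_size=100g;
    proxy_cache_path /var/cache/nginx/b levels=1:2 keys_zone=zone_b:512m max_size=100g;
    # ...and so on for each configured cache.
]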
Thanks, setting the value to 600G made it possible to get a dump. But it took ages and the system became quite unstable. What can cause the dump to become that large? There is almost no traffic (<10 Mbps) on this server, which has 32 GB of memory.
nginx version: nginx/1.11.1
built with OpenSSL 1.0.2h 3 May 2016

I am currently trying to debug these alerts, which only appear after a reload:

    2016/06/17 13:10:49 14624#0: *15709 open socket #626 left in connection 628

I compiled nginx with --with-debug and set the flags CFLAGS="-g -O0" ./configure ... The following core dump settings are defined: debug_points abort; working_directory ...
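[For reference, a minimal sketch of such a core-dump setup in nginx.conf; the directory path, rlimit value, and log path below are placeholders, not the actual values from this server:

    # main (top-level) context
    debug_points abort;             # call abort() on debug points so a core file is written
    working_directory /tmp/cores/;  # where worker processes dump core; must be writable
    worker_rlimit_core 2g;          # raise RLIMIT_CORE for worker processes
    error_log logs/error.log debug; # debug level requires a binary built with --with-debug
]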
Our nginx reverse proxy creates a temporary entry in the proxy_temp directory if the file does not yet exist in the cache. So far so good, but if the file does not exist and is requested 10 times at the same time, nginx creates 10 temporary files in proxy_temp and fetches the data 10 times from the proxied server. The result is high write I/O and high bandwidth for a single file.
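[A common way to address this, and possibly what was suggested in the thread, is to collapse concurrent cache misses with proxy_cache_lock, so only one request per key goes upstream while the others wait for the response to be cached. A minimal sketch, with illustrative zone and upstream names:

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache_one:64m
                     max_size=10g inactive=60m;

    server {
        listen 80;
        location / {
            proxy_cache cache_one;
            proxy_cache_lock on;           # only the first miss is sent upstream
            proxy_cache_lock_timeout 5s;   # how long the other requests wait for the lock
            proxy_cache_lock_age 5s;       # after this, one more request may go upstream
            proxy_pass http://backend;     # "backend" is a placeholder upstream
        }
    }

proxy_cache_lock was added in nginx 1.1.12 and proxy_cache_lock_age in 1.7.8, so both are available in the versions discussed here.]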
Thanks Martin! That works.
Thanks Maxim. Is it possible to filter on a value "larger than" or "smaller than"? What would the regex in the map block look like, e.g. for smaller than 1000000? I tried something like this, which is not working:

    map $upstream_http_content_length $docache {
        default 0;
        "~*([1-9][0-9]{0,6}|1000)$" 1;
    }
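[Since a regex matches text rather than comparing numbers, "smaller than 1000000" is best expressed as "at most six digits". A sketch of a corrected map, keeping the variable names from the post; the key change is anchoring both ends, as the end-only anchor in the original also matched the tail of longer values:

    map $upstream_http_content_length $docache {
        default 0;
        "~^[0-9]{1,6}$" 1;    # 0..999999, i.e. smaller than 1000000
    }

Note that $docache here is 1 when the response should be cached, while proxy_no_cache treats a non-empty, non-zero value as "do not cache", so the map may need to be inverted depending on how it is wired in.]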
It would be interesting to know the answer to this question, as I was wondering as well whether that is possible. Thanks for the response!