Pretty cool. I'm still reading up on each, but can this also be done for HTTPS termination? Is the SSL pre-read limitation the main issue there? On Thu, 1 Jun 2023, 9:31 pm Stephen Farrell, <stephen.farrell@cs.tcd.ie> wrote: > > Hi all, > > I've been working on implementing TLS encrypted client hello > (ECH, [1]) in the OpenSSL library (current branch at [2]). > Apologie… by splitice - Nginx Development
Thank you, I understand and it makes sense. Do you have any advice for how third party modules could achieve the same? The blocked field might be safe to use within other phases, but it's difficult to fully verify. I'm no longer using that module (having found an alternative), but I'm willing to open an issue with them in the spirit of OSS. Perhaps a comment in the nginx developer docs regarding… by splitice - Nginx Development
Hi, I've been going through the threadpool code for native modules, looking for inspiration while attempting to fix what appears to be a use-after-free error in a third party module. I thought I would see a strategy to prevent thread pool tasks that are still queued for processing from being freed when the request / connection their memory is allocated from is cleared, but I'm not seeing one. For example there do… by splitice - Nginx Development
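For context, here is a standalone plain-C model of the safeguard I was hoping to find, a queued task outliving its request. The names (task_handle_t and friends) are mine, not nginx's thread-pool API; in real module code the cleanup would be registered on the request pool with something like ngx_pool_cleanup_add(). The idea is that destroying the request flips a flag on a small, separately allocated handle instead of freeing memory the task queue still references:

```c
#include <stdlib.h>

/* Illustrative handle: allocated outside the request pool so it can
 * safely outlive the request. */
typedef struct {
    int   orphaned;     /* set by the request-pool cleanup handler */
    void *request_ctx;  /* only valid while !orphaned */
} task_handle_t;

/* Registered as a cleanup on the request pool: runs when the
 * request / connection memory is about to be destroyed. */
void request_cleanup(task_handle_t *h)
{
    h->orphaned = 1;
    h->request_ctx = NULL;  /* never dereference request memory again */
}

/* Runs later on the thread-pool / completion side.
 * Returns 1 if the task was processed, 0 if it was skipped. */
int task_completion(task_handle_t *h)
{
    if (h->orphaned) {
        free(h);            /* handle outlived its request; just drop it */
        return 0;
    }
    /* ... h->request_ctx is still safe to touch here ... */
    return 1;
}
```

The point of the pattern is that the only memory shared between the queue and the request is the handle itself, which has its own lifetime.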
By the way, have you benchmarked this? On Sun, 5 Mar 2023, 11:55 am Nick Bogdanov, <nickrbogdanov@gmail.com> wrote: > # HG changeset patch > # User Nick Bogdanov <nickrbogdanov@gmail.com> > # Date 1677975659 28800 > # Sat Mar 04 16:20:59 2023 -0800 > # Node ID 8cb34ae16de2408cbe91832194baac6ae299f251 > # Parent cffaf3f2eec8fd33605c2a37814f5ffc30371989 > Add… by splitice - Nginx Development
I really like making the error log safe as opposed to truncating it. The more information logged in cases like this, the better. Alternatively, what about something that indicates further data was truncated? On Wed, 28 Sep 2022, 21:07 Dipl. Ing. Sergey Brester via nginx-devel, < nginx-devel@nginx.org> wrote: > Sure, this was also my first intention. Just after all I thought the whole… by splitice - Nginx Development
By the way, have you seen sregex? Given it's built on many of the same principles as nginx and is PREG(1/2) compatible, it might be of interest. And Merry Christmas to those who celebrate. On Sat, 25 Dec 2021 at 09:11, Maxim Dounin <mdounin@mdounin.ru> wrote: > > details: https://hg.nginx.org/nginx/rev/fbbb5ce52995 > branches: > changeset: 7982:fbbb5ce52995 > user:… by splitice - Nginx Development
If there are performance regressions, perhaps these could be documented in the events documentation, something along the lines of a recommended minimum kernel. On Thu, 26 Aug 2021 at 11:48, Zhao, Ping <ping.zhao@intel.com> wrote: > > Hi Maxim, > > It's been long time and I lost the mail thread. Is it now the good time to return to io_uring? I saw kernel group made many progress o… by splitice - Nginx Development
I'm just a user of nginx making a comment. Simple patch, valuable find, potentially far-reaching annoyance. On Thu, 8 Jul 2021, 7:33 pm Jérémie Drouet, <jeremie.drouet@gmail.com> wrote: > Ok, so what should I do now? Does it mean it cannot be done? > > On Thu, Jul 8, 2021 at 11:31 AM Mathew Heard <mat999@gmail.com> wrote: > >> This should be a major release patch… by splitice - Nginx Development
This should be a major release patch. It's breaking for everyone parsing the error log. On Thu, 8 Jul 2021, 7:17 pm Jeremie Drouet, <jeremie.drouet@gmail.com> wrote: > # HG changeset patch > # User Jeremie Drouet <jeremie.drouet@gmail.com> > # Date 1625150632 -7200 > # Thu Jul 01 16:43:52 2021 +0200 > # Node ID 7db380334d2ca671b98ab7563bab9ddee501c573 > # Paren… by splitice - Nginx Development
Kevin, BoringSSL is already for the most part supported (in code, if not officially), if I am not mistaken. On Thu, 11 Feb 2021 at 12:02, Kevin Burke <kevin@meter.com> wrote: > Hi, > There has been a recent push by some members of the security community to > try to make more critical code run in memory safe languages, because of the > high prevalence of security issues related t… by splitice - Nginx Development
SoYun, Interesting patchset. Have you by chance also tested proxy_pass / fastcgi_pass performance? I'd be interested to know if the significant performance improvement was due to filesystem interaction or socket. Regards, Mathew On Tue, 24 Nov 2020 at 19:43, SoYun Seong <thdbsdox12@gmail.com> wrote: > # HG changeset patch > # User SoYun Seong <thdbsdox12@gmail.com> > # D… by splitice - Nginx Development
Hi All, If anyone else is searching for a better solution to this bug (perhaps in Apache), the following nginx patch works for me: https://github.com/splitice/nginx/commit/a91fdb43793f006bda06d980a89fd1dfb428ebee Tested on 3 different iOS devices and an Apache h2 backend. by splitice - Nginx Development
Hi all, I'm the maintainer of an open source module, ngx_brunzip_module ( https://github.com/splitice/ngx_brunzip_module/ https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c). Effectively the same as the gunzip module (and based on that source) but with Brotli. I've been scratching my head for 2 days over some high CPU usage within the chain code. It… by splitice - Nginx Mailing List - English
Could anyone help me out with the problem here?

ngx_module_t ngx_http_slow_module = {
    NGX_MODULE_V1,
    &ngx_http_slow_module_ctx,    /* module context */
    ngx_http_slow_commands,       /* module directives */
    NGX_HTTP_MODULE,              /* module type */
    NULL,                         /* init master */
    NULL,                         /* init module */
    ngx_http_slow_init_worker,    /* init process */
    NULL,                         /* init thread */
    NULL,                         /* exit thread */
    NULL,                         /* exit process…

by splitice - Nginx Mailing List - English
Here's probably the best confirmation I could find: Development has also started on support for QUIC https://en.wikipedia.org/wiki/QUIC and HTTP/3 https://quicwg.org/base-drafts/draft-ietf-quic-http.html – the next significant update to the transport protocols that will deliver websites, applications, and APIs. This is a significant undertaking, but likely to arrive during the NGINX 1.17 develop… by splitice - Nginx Mailing List - English
It is nice to see that confirmation :) On Fri, May 31, 2019 at 4:54 PM George <nginx-forum@forum.nginx.org> wrote: > Roadmap suggests it is in Nginx 1.17 mainline QUIC = HTTP/3 > https://trac.nginx.org/nginx/roadmap :) > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,256352,284367#msg-284367 by splitice - Nginx Mailing List - English
Hey nginx team, As I understand it, QUIC support is roadmapped for this year? Any chance of some confirmation, or any information that can be made available? Regards, Mathew On Fri, Jan 30, 2015 at 6:28 PM jtan <admin@grails.asia> wrote: > This would be interesting. But I guess we would need to wait. > > On Fri, Jan 30, 2015 at 2:35 PM, justink101 <nginx-forum@nginx.us> w… by splitice - Nginx Mailing List - English
Maxim, Which patches / modules would you consider highly questionable? On Sat, May 4, 2019 at 10:15 AM Maxim Dounin <mdounin@mdounin.ru> wrote: > Hello! > > On Sat, May 04, 2019 at 09:02:20AM +1000, Mathew Heard wrote: > > [...] > > > It is a reduced version (less additional modules) of Openresty so third > > party module interference is possible. Would there… by splitice - Nginx Development
Not spinning since then, but that's when that worker (from the old binary) was spawned. It's an old worker spinning. Unfortunately there aren't any debug symbols. GDB:

(gdb) bt
#0  0x00007ff842edd016 in ?? ()
#1  0x0000000040d9ab70 in ?? ()
#2  0x4096580000000000 in ?? ()
#3  0x4064000000000000 in ?? ()
#4  0x00000009413dcda0 in ?? ()
#5  0x41f975d000000006 in ?? ()
#6  0x000000004190c148 in ?? ()

… by splitice - Nginx Development
Got a little bit further and confirmed this is definitely to do with the binary upgrade.

www-data   988 99.9 0.7 365124 122784 ?  R   Jan30 131740:46 nginx: worker process
root      2800  0.0 1.0 361828 165044 ?  Ss  Jan05     27:54 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;

2800 is nginx.old, also (nginx/1.15.8) as we did 2 builds with slightly different… by splitice - Nginx Development
Yesterday one of my techs reported that a production server had an nginx worker sitting at 99.9% CPU usage in top and not accepting new connections (but still getting its share distributed due to SO_REUSEPORT). I thought this might be related. The worker's age was significantly older than its peers', so it appears to have been a worker left over from a previous configuration reload. It was child to the single… by splitice - Nginx Development
>> If you've seen a >> percentage of connections being dropped for some time - likely >> there is another problem elsewhere. That's definitely what I observed. It was around 50% of this customer's connections, and strace on all workers (including the shutting down worker) did not show the missed connections at the accept level (grep on unique testing IP). The only thing strange I… by splitice - Nginx Development
No, I did not change the number of workers, or anything core. The configuration change would have been related to a specific server block (add/remove/update) as carried out by our tooling. On Sat, Feb 2, 2019 at 1:04 AM Valentin V. Bartenev <vbart@nginx.com> wrote: > On Friday 01 February 2019 11:04:50 Mathew Heard wrote: > > Hi All, > > > > Hit a rather strange issue… by splitice - Nginx Development
Hi All, Hit a rather strange issue today on a production service during a configuration reload (evident by the worker processes being in the process of shutting down). During this reload a percentage of connections were not getting accepted (and hence not processed). I was able to confirm that none of the processes were accepting the connections. Our configuration includes the reuseport option… by splitice - Nginx Development
We are loading around 10,000 - 15,000 server_names per server. We also have a fair number of SSL certificates and at least one big geo map as well, which probably do contribute. At around 2,000 - 3,000 we hit our first issues with server_name and had to alter the hash table max size, which brought the loading speed back up (it has slowly regressed as we got bigger). Honestly I don't consider it unac… by splitice - Nginx Development
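For anyone hitting the same server_name scaling wall, the tuning described above is done with the server name hash directives. A minimal sketch (the values here are illustrative, not a recommendation; nginx itself suggests suitable values in its startup warning when the hash cannot be built):

```nginx
http {
    # Defaults are sized for a handful of names; with thousands of
    # server_name entries the hash build fails or start-up slows badly.
    server_names_hash_max_size     8192;
    server_names_hash_bucket_size  128;
}
```

Raising max_size lets the hash build succeed in fewer passes, which is where the start-up time goes.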
If this actually yields a decrease in start time while not introducing other effects, we would use it. Our start time of a couple of minutes is annoying at times. On Fri, Jun 2, 2017 at 3:57 AM, Andrew Borodin <borodin@octonica.com> wrote: > 2017-06-01 22:39 GMT+05:00 Maxim Dounin <mdounin@mdounin.ru>: > > Thanks, though suggested change will certainly modify current > > n… by splitice - Nginx Development
I have also tried: InheritableCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SETGID CAP_SETUID CAP_SYS_RESOURCE and various other options, to no avail. ---------- Forwarded message ---------- From: Mathew Heard <mat999@gmail.com> Date: Wed, Oct 12, 2016 at 9:01 PM Subject: CAP_NET_ADMIN To: nginx@nginx.org Hi All, I am stuck trying to get my nginx service which is launched via… by splitice - Nginx Mailing List - English
Hi All, I am stuck trying to get my nginx service, which is launched via SystemD, to give CAP_NET_ADMIN to its workers (required for IP_TRANSPARENT). I have tried /etc/security/capability.conf & setcap. SystemD has the permission whitelisted:

CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
C… by splitice - Nginx Mailing List - English
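For later readers, a sketch of the unit options involved. This is illustrative only: the crux is whether the capabilities survive nginx's own setuid() of its worker processes, and SecureBits=keep-caps is my assumption about the missing piece, not a verified fix:

```ini
[Service]
ExecStart=/usr/sbin/nginx -g 'daemon on;'
# Bounding set: the caps the service may ever hold.
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID
# Ambient set: caps granted to the initially executed process (the master).
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN
# Permitted capabilities are normally cleared when the master
# setuid()s its workers to an unprivileged user; keep-caps changes that.
SecureBits=keep-caps
```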
I am sure there are many different uses for a variable like this, and many different properties desired and available. If we switch to using this variable down the line, I would probably combine it with something that increments for each request, as that's a property that aids in tracing issues. On Wed, Apr 27, 2016 at 6:50 PM, ToSHiC <toshic.toshic@gmail.com> wrote: > Random data string… by splitice - Nginx Development
Hi, We have been using something like this for ~2 years. For ours we used a random number to start, plus the Process ID & process start time, to try to increase uniqueness between reloads (ours is a 128-bit ID), then applied an increment, with future requests having a higher ID. Perhaps that would be better than just 128 bits of random data? On Wed, Apr 27, 2016 at 12:14 PM, Alexey Ivanov <… by splitice - Nginx Development
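The scheme described above can be sketched in a few lines of C. All names here are mine and the seeding is deliberately simplistic (a real implementation would use a proper entropy source rather than rand()); the point is the shape: a per-worker 128-bit base mixing randomness, the PID and the start time, then a monotonic increment per request:

```c
#include <stdint.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* 128-bit request ID as two 64-bit halves. */
typedef struct { uint64_t hi, lo; } req_id_t;

static req_id_t seed;   /* per-worker base, set once at startup */

/* Mix start time, PID and randomness so IDs differ across reloads. */
void req_id_init(void)
{
    srand((unsigned) time(NULL) ^ (unsigned) getpid());
    seed.hi = ((uint64_t) time(NULL) << 32) | (uint32_t) getpid();
    seed.lo = ((uint64_t) rand() << 32) | (uint32_t) rand();
}

/* Each call returns a strictly larger ID: 128-bit increment with
 * carry from the low half into the high half. */
req_id_t req_id_next(void)
{
    if (++seed.lo == 0) {
        seed.hi++;
    }
    return seed;
}
```

The increment gives the tracing-friendly property mentioned above: within one worker, a later request always has a numerically higher ID.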