I'm sorry to say that the patch does not make a difference. I collected Chrome's HTTP/2 trace for a POST to a non-existent URL: https://gist.github.com/oschaaf/da273f96fad5e22890981fcd4a1a4376 I'm wondering about that last HTTP2_SESSION_RST_STREAM entry: it looks like the HTTP/2 module wants to cancel the incoming POST data stream at an odd point in time? But I'm no expert, so I'm not sure at all.
by oschaaf - Nginx Development
I'm pretty sure these logs correlate with the problems I am seeing, yes. Indeed the error.log samples are free from warnings and errors, but it seems the protocol is violated nevertheless. At least Chrome says so. So the client does report an error, afaict. I'll patch in the change you posted and let you know how that goes, thanks. Otto On Wed, Apr 13, 2016 at 2:34 PM, Валентин Бартенев …
Sure. - This sample contains two or three requests; the last request is where rate limiting kicks in and the protocol error happens (the earlier requests are problem-free): https://gist.github.com/oschaaf/281b7a0fed9954dd960adac55e96f2cd - This is from a POST to a non-existent URL: https://gist.github.com/oschaaf/6396a614ce599d5003e50bb8e7106bed Otto On Wed, Apr 13, 2016 at 12:52 AM, Валентин …
Hello, While looking into https://github.com/pagespeed/ngx_pagespeed/issues/1175 I noticed that when performing a POST to a non-existent page, Chrome will complain about the response (net::ERR_SPDY_PROTOCOL_ERROR). This happens with a plain nginx build, configure arguments: --prefix=/home/oschaaf/nginx-tmpbuild --with-http_v2_module --with-http_ssl_module 1.9.14 seems to have introduced some be…
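For context, a minimal configuration along these lines should be enough to observe the behaviour with an HTTP/2-enabled build. This is a sketch, not the reporter's actual config; the certificate paths, server name, and root are placeholders:

```nginx
# Minimal repro sketch; paths and names below are placeholders.
# nginx built with: --with-http_v2_module --with-http_ssl_module
events {}

http {
    server {
        listen 443 ssl http2;
        server_name example.test;

        ssl_certificate     /path/to/cert.pem;
        ssl_certificate_key /path/to/key.pem;

        # With an empty document root, a POST to any path yields 404,
        # which is the case where Chrome reports
        # net::ERR_SPDY_PROTOCOL_ERROR against nginx 1.9.14.
        root /var/empty;
    }
}
```

A POST carrying a request body (e.g. from Chrome, or `nghttp -d <file>`) against any path on such a server should then trigger the client-side protocol error described above.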
On Tue, Mar 24, 2015 at 6:01 PM, Maxim Dounin <mdounin@mdounin.ru> wrote: > If there are good reasons why the termination takes so long - we > may consider adding another iteration. Otherwise - yes, it'll > remain fixed. > I have a test to see if the module shuts down properly upon receiving SIGTERM. This test starts up nginx plus a lot of synthetic load in parallel. The SIGTE…
Thanks for pointing out that the workers get signalled multiple times; I missed that indeed. In that case, termination of the module under valgrind takes a little longer than I thought it did, yet the problem remains the same. So the upper boundary of 1000 ms for the iteration has to remain fixed? In that case, we'll have a patch to maintain (or see if we can wrap up in less time). Thanks! On…
Hi, For testing quick termination during high loads, while running with valgrind, it might be useful to be able to extend the amount of time nginx allows child processes to wrap up before sending SIGKILL. For ngx_pagespeed, the current hard-coded default of 1 second seems to be just short of what we need to be able to reliably test just this scenario, so I've made a patch so we can run with diffe…
Hi, For ngx_pagespeed, I'm looking for a way to persist its module request context and restore it even after request processing has been restarted for a named location or internal redirect. Keeping a single request context during this process would allow us to avoid repeating work we already did earlier, like cache lookups. For testing, I've achieved this by storing a pointer in the request…
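The pattern in question can be sketched as follows. This is a pseudocode-style fragment using nginx's module-context macros (`ngx_http_get_module_ctx` / `ngx_http_set_ctx`); `my_module`, `my_module_ctx_t`, and `my_module_recover_stashed_ctx` are hypothetical names for illustration, and the fragment is not a drop-in patch:

```c
/* nginx clears r->ctx on an internal redirect or named-location
 * restart, so a handler running early in the new cycle can restore
 * a previously stashed context instead of redoing work. */

typedef struct {
    ngx_flag_t  cache_lookup_done;   /* example of work worth keeping */
} my_module_ctx_t;

static ngx_int_t
my_module_rewrite_handler(ngx_http_request_t *r)
{
    my_module_ctx_t  *ctx;

    ctx = ngx_http_get_module_ctx(r, my_module);

    if (ctx == NULL) {
        /* Either the first pass, or r->ctx was wiped by a restart.
         * Try to recover a context stashed somewhere that survives
         * the restart (hypothetical helper) before allocating anew. */
        ctx = my_module_recover_stashed_ctx(r);

        if (ctx == NULL) {
            ctx = ngx_pcalloc(r->pool, sizeof(my_module_ctx_t));
            if (ctx == NULL) {
                return NGX_ERROR;
            }
        }

        ngx_http_set_ctx(r, ctx, my_module);
    }

    return NGX_DECLINED;
}
```

Allocating from r->pool keeps the context's lifetime tied to the request, so recovering the old pointer after a restart is safe as long as the restart stays within the same request.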