Hi Hucc, No, that did not work. Are you missing something from your patch? I implemented your patch, ran two workers, and set rtmp_auto_push on. The results were actually worse: my server did not send data to my one connected test client. Carey Gister 415-310-5304
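For anyone trying to reproduce this, the test setup described above corresponds roughly to a configuration like the following — a minimal sketch assuming the nginx-rtmp-module is compiled in; the listen port and application name are illustrative, not taken from the thread:

```nginx
# Assumed minimal nginx-rtmp test configuration (names/ports illustrative).
worker_processes  2;            # two workers, as in the test above
rtmp_auto_push    on;           # relay incoming streams across workers

events {
    worker_connections  1024;
}

rtmp {
    server {
        listen 1935;
        application live {      # hypothetical application name
            live on;
        }
    }
}
```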
Bump. Can anyone explain what changed that caused this feature to stop working? Thanks, Carey Gister ________________________________ From: Carey Gister <careygister@outlook.com> Sent: Friday, July 26, 2019 11:08 To: 'nginx-devel@nginx.org' <nginx-devel@nginx.org> Subject: RTMP with multiple worker processes and rtmp_auto_push Hi, I hope someone can help illuminate some background …
Hi, I hope someone can help illuminate some background information related to this module. I understand that the rtmp_auto_push directive worked with multiple workers through nginx 1.7.x, and then something changed in the nginx internals: it stopped working and was no longer supported for multiple workers. Can someone provide background on what changed that caused this feature to stop working? My ma…
Hi Maxim, Thank you. By "production version" I meant the current release version, which, if I recall correctly, is 1.17.1. OK. So you recommend I copy the code that processes the If-Range header to determine whether the If-Range is valid? I can then use those functions to determine the extent of the range, which is what I need to know. Carey Gister
Hi Maxim, Thank you for your reply. My use case is as follows: after the slice header filter calls ngx_http_next_header_filter, the contents of the request's headers_out fields will be modified if an If-Range header is valid. The production version of the slice header filter already relies on the modified headers_out fields set by the range filter. My extension needs to know the bounds of the ne…
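For context, the If-Range validity check that the range filter performs can be approximated as follows. This is a hedged, self-contained sketch, not the nginx code itself: the header value is either an entity tag (starts with `"`), compared byte-for-byte against the ETag, or an HTTP-date, compared for exact equality against Last-Modified. Real nginx works on ngx_str_t values and uses its own time parser; this sketch uses NUL-terminated strings and POSIX/glibc functions (strptime, timegm) for brevity.

```c
#define _GNU_SOURCE             /* for strptime() and timegm() (glibc/BSD) */
#include <string.h>
#include <time.h>

/* Simplified approximation of an If-Range validity check:
 * entity-tag form -> exact match against the ETag,
 * date form       -> exact match against Last-Modified. */
static int
if_range_valid(const char *if_range, const char *etag, time_t last_modified)
{
    if (if_range[0] == '"') {                      /* entity-tag form */
        return etag != NULL && strcmp(if_range, etag) == 0;
    }

    struct tm tm;                                  /* HTTP-date form */
    memset(&tm, 0, sizeof(tm));
    if (strptime(if_range, "%a, %d %b %Y %H:%M:%S GMT", &tm) == NULL) {
        return 0;                                  /* unparseable date */
    }
    return timegm(&tm) == last_modified;
}
```

If the range is only honored when this check passes, the slice filter extension can safely assume the headers_out fields reflect the validated range afterwards.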
The ngx_http_slice_parse_content_range function assumes that the parsed buffer is null-terminated. Since the buffer is an ngx_str_t, that assumption is false. If, by chance, the buffer is null-terminated, it is simply a matter of luck, not design. In particular, if the headers_out.content_range ngx_str_t was allocated in the ngx_http_range_filter_module, then the buffer was allocated as a non-z…
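The safe alternative is to bound every read by the structure's explicit length rather than scanning for a terminator. A minimal illustration, using a locally defined stand-in for ngx_str_t (so the sketch is self-contained and does not depend on nginx headers), parsing the total size out of a Content-Range value such as "bytes 0-5/100":

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    size_t         len;
    unsigned char *data;
} str_t;                        /* stand-in for nginx's ngx_str_t */

/* Parse the total size after the '/' in a Content-Range value without ever
 * reading past s->len -- the bytes at and beyond data[len] are arbitrary. */
static int64_t
content_range_total(const str_t *s)
{
    size_t i = 0;

    while (i < s->len && s->data[i] != '/') { i++; }
    if (i == s->len) return -1;                /* no '/' found */

    int64_t total = 0;
    for (i++; i < s->len; i++) {               /* digits only, len-bounded */
        if (s->data[i] < '0' || s->data[i] > '9') return -1;
        total = total * 10 + (s->data[i] - '0');
    }
    return total;
}
```

The point is simply that the loop condition is `i < s->len`, never a test for `'\0'`; a parser written that way is correct regardless of what happens to follow the buffer in memory.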
Hi Maxim, Thanks for your reply. I tried with and without quotes, with the same result. My module is inserted in front of the copy filter module. I am writing a drop-in replacement for the slice module, and I want my module to be in front of the stream module. I build without specifying --with-http_slice_module. Here is my config file:

config
ngx_module_type=HTTP_AUX_FILTER
ngx_module_name=ngx_http_my…
Hello, I'm writing a new module and I want to place it in a specific position in the module list as defined in ngx_modules.c. If my module name is x_module and I want it to run after ngx_http_slice_filter_module, I tried: ngx_module_order=x_module ngx_http_slice_filter_module and I am informed during configuration that 'ngx_http_slice_filter_module' does not exist. What is the correct syntax for this…
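For reference, the form used in module `config` files is a quoted, whitespace-separated list assigned to ngx_module_order. A sketch, with the module names taken from the question above (treat the exact list semantics as an assumption and check the nginx development guide):

```
# Sketch of a module `config` file using ngx_module_order.
# Every module named in the list must actually be part of the build:
# the slice filter is only compiled in with --with-http_slice_module,
# so a build without that flag would not recognize the name -- which
# would explain a "does not exist" error at configure time.
ngx_module_type=HTTP_AUX_FILTER
ngx_module_name=x_module
ngx_module_order="x_module ngx_http_slice_filter_module"

. auto/module
```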
I am using the slice module to request data from an upstream server. My clients are connecting over SSL. With a non-SSL connection, nginx reads the slices from the upstream server as quickly as it can deliver them. Over SSL connections, slices are read incrementally, only as quickly as the client can consume them. Can anyone explain this behavior and tell me how to change it so that the SSL clie…
Hi, I noticed that if I configure a server to use the slice module, it will happily read slices of the appropriate size as fast as the upstream server can deliver them. However, if I configure the same server to run over SSL, it requests the slices at a rate consonant with the speed at which my client application is consuming the data -- in my case, about 4 MB per second. This behavior can be…
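One thing worth ruling out first — an assumption on my part, not a confirmed diagnosis of the SSL difference: with proxy_buffering on, nginx reads from the upstream as fast as it can and buffers for a slow client; if buffering is effectively off, or the buffers fill with nowhere to spill, upstream reads are paced by the client's consumption rate. A sketch of the relevant knobs (values illustrative):

```nginx
location / {
    proxy_pass               https://origin;   # hypothetical upstream
    proxy_buffering          on;        # read upstream at full speed
    proxy_buffers            32 64k;    # per-connection buffer pool
    proxy_max_temp_file_size 1024m;     # allow spilling to disk for slow clients
}
```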
Hi, I have an interesting requirement. I am reading large files (on the order of 1 GB+) for connected clients. The clients may view part of the data and then cancel, so I don't want to request and return the entire document. What I am interested in doing is fetching part of the document, say 6 MB, and returning it. Then I want to pre-fetch another 6 MB slice and hold it until/if th…
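The fetch-in-6-MB-pieces part of this maps directly onto the documented slice-module pattern; the pre-fetch-and-hold behavior would still need custom code, but a baseline configuration might look like the following (cache zone and upstream names are assumptions, not from the thread):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=slices:10m;

server {
    listen 80;

    location / {
        slice             6m;                         # fetch in 6 MB ranges
        proxy_cache       slices;
        proxy_cache_key   $uri$is_args$args$slice_range;
        proxy_set_header  Range $slice_range;
        proxy_cache_valid 200 206 1h;                 # cache partial responses
        proxy_pass        http://origin;              # hypothetical upstream
    }
}
```

With this shape, each client range request only pulls the slices it overlaps, so a client that cancels early never causes the whole 1 GB+ file to be fetched.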