I saw 4k, 16k, and 32k buffer sizes in the response chain. Why not keep all buffers the same size? Are these buffer sizes relevant to the chunked HTTP transfer encoding?
by hanzhai - Nginx Mailing List - English
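On the wire, chunked transfer encoding is independent of nginx's internal buffer sizes: each chunk simply announces its own length in hex, so a filter can flush 4k, 16k, or 32k buffers and the framing adapts. A minimal sketch of the framing (an illustration, not nginx source):

```c
#include <stdio.h>
#include <string.h>

/* Encode one buffer's worth of data as a single HTTP/1.1 chunk:
 * "<hex length>\r\n<data>\r\n".  The chunk length on the wire is just
 * however many bytes happen to be flushed, so internal buffer sizes
 * never need to match each other or any fixed "chunk size".
 * Returns the number of bytes written, or -1 if `out` is too small. */
static int encode_chunk(const char *data, size_t len, char *out, size_t out_cap)
{
    int n = snprintf(out, out_cap, "%zx\r\n", len);
    if (n < 0 || (size_t) n + len + 2 >= out_cap) {
        return -1;
    }
    memcpy(out + n, data, len);          /* chunk data */
    memcpy(out + n + len, "\r\n", 2);    /* chunk trailer CRLF */
    out[n + len + 2] = '\0';
    return n + (int) len + 2;
}
```

So `encode_chunk("hello", 5, ...)` yields `5\r\nhello\r\n`, regardless of the size of the buffer the data arrived in.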
Hi, never mind, I figured it out myself. The subrequest enters the fail label, which returns an NGX_ERROR and causes the above-mentioned error. Thanks.
Hi, I also did the following in my header_filter to make sure that the modified response is sent to the client with chunked encoding:

ngx_http_clear_content_length(r);
ngx_http_clear_accept_ranges(r);
ngx_http_clear_etag(r);

ngx_table_elt_t *header_entry = ngx_list_push(&r->headers_out.headers);
if (header_entry == NULL) {
    return ngx
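Clearing the Content-Length header works because nginx decides body framing from `content_length_n`: `ngx_http_clear_content_length()` sets it to -1, and the chunked header filter then enables chunked transfer encoding for HTTP/1.1 responses of unknown length. A toy model of that decision (a simplification, not the actual `ngx_http_chunked_header_filter` code, which also handles 304s, HEAD requests, etc.):

```c
/* Simplified model of nginx's body-framing choice.  When a filter
 * rewrites the body, it clears the length (content_length_n = -1) so
 * that HTTP/1.1 clients get chunked encoding instead of a now-wrong
 * Content-Length. */
typedef enum {
    FRAME_CONTENT_LENGTH,   /* known length: send Content-Length */
    FRAME_CHUNKED,          /* HTTP/1.1, unknown length: chunked */
    FRAME_CLOSE             /* HTTP/1.0, unknown length: close-delimited */
} framing_t;

static framing_t pick_framing(int http_minor, long content_length_n)
{
    if (content_length_n >= 0) {
        return FRAME_CONTENT_LENGTH;
    }
    if (http_minor >= 1) {
        return FRAME_CHUNKED;
    }
    return FRAME_CLOSE;
}
```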
Hi Maxim, thanks for your reply. Your guidance helped me thoroughly understand the role of calling ngx_http_next_body_filter(r, NULL) in the gzip module, which helped a lot. The buffer can now be reused, but one issue still confuses me: I get a curl: (18) transfer closed with outstanding read data remaining error when I access the path the code modifies. I captured packets through tc
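curl's error 18 on a chunked response typically means the connection closed before the terminating zero-length chunk (`0\r\n\r\n`) arrived; in a body filter module this usually happens when the buffer carrying the `last_buf` flag is never forwarded down the chain. A small helper (illustrative, not part of nginx or curl) that checks a captured body for the terminator:

```c
#include <string.h>

/* A complete chunked body ends with the last-chunk marker "0\r\n\r\n"
 * (assuming no trailers).  If a capture of the response body lacks it,
 * the sender closed the connection early, which is exactly what
 * curl reports as error 18. */
static int chunked_body_complete(const char *body, size_t len)
{
    static const char last_chunk[] = "0\r\n\r\n";
    size_t n = sizeof(last_chunk) - 1;   /* 5 bytes */

    return len >= n && memcmp(body + len - n, last_chunk, n) == 0;
}
```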
Hi, I am writing my own filter module based on the gzip filter module. My filter module first inserts a long text (200 to 1024 KB, depending on the situation) at the beginning of the original response and then performs some other manipulations on the original response. The pre-configured number of buffers that can be allocated per request (like the gzip_buffers directive in the gzip filter module) will r
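The gzip_buffers-style scheme described above caps how many buffers a request may allocate and recycles freed ones instead of allocating more. A minimal free-list sketch of that idea (names are illustrative; this is not the nginx chain/buf API):

```c
#include <stdlib.h>

/* Per-request buffer pool: at most `max` buffers of `size` bytes each,
 * as with "gzip_buffers 32 4k".  Returned buffers go on a free list
 * and are reused before anything new is allocated; once the cap is
 * hit, the caller must wait for downstream to release a buffer. */
typedef struct pool_buf { struct pool_buf *next; } pool_buf_t;

typedef struct {
    pool_buf_t *free_list;   /* buffers available for reuse */
    size_t      size;        /* bytes per buffer */
    unsigned    max;         /* allocation cap per request */
    unsigned    allocated;   /* buffers handed out so far */
} buf_pool_t;

static void *pool_get(buf_pool_t *p)
{
    if (p->free_list != NULL) {          /* reuse before allocating */
        pool_buf_t *b = p->free_list;
        p->free_list = b->next;
        return b;
    }
    if (p->allocated >= p->max) {        /* cap reached */
        return NULL;
    }
    p->allocated++;
    return malloc(p->size < sizeof(pool_buf_t) ? sizeof(pool_buf_t)
                                               : p->size);
}

static void pool_put(buf_pool_t *p, void *buf)
{
    pool_buf_t *b = buf;
    b->next = p->free_list;              /* push onto free list */
    p->free_list = b;
}
```

With a cap of 2, a third `pool_get` fails until a buffer is returned via `pool_put`, which mirrors why a filter must keep forwarding buffers downstream so they can be reclaimed.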