On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Yao wrote:
> This patch should work with nginx versions 1.2.6 through 1.3.8.
> The documentation is here:
> ## client_body_postpone_sending ##
> Syntax: **client_body_postpone_sending** `size`
> Default: `64k`
> Context: `http, server, location`
> If you set `proxy_request_buffering` or
> `fastcgi_request_buffering` to off, nginx will send the body to the backend
> once it has received more than `size` bytes of data, or once the whole
> request body has been received. This can save connections and reduce the
> number of I/O operations with the backend.
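> For example, a minimal sketch of this directive in use (the `128k` value is
> made up for illustration; the default is `64k`, and the directive only takes
> effect when request buffering is switched off, see below):
>
>     http {
>         # Accumulate up to 128k of the request body in memory before
>         # starting to forward it to the backend.
>         client_body_postpone_sending 128k;
>     }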
>
> ## proxy_request_buffering ##
> Syntax: **proxy_request_buffering** `on | off`
> Default: `on`
> Context: `http, server, location`
> Specifies whether the request body is buffered to disk or not. If it is
> off, the request body is stored in memory and sent to the backend after
> nginx has received more than `client_body_postpone_sending` bytes of data.
> This can save disk I/O with large request bodies.
>
>
> Note that if you set it to off, nginx's retry mechanism for
> unsuccessful responses is broken once part of the request has been sent
> to the backend: nginx will simply return 500 when it encounters such an
> unsuccessful response. This directive also breaks the variables
> $request_body and $request_body_file. You should not use these variables
> any more, since their values are undefined.
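>
> A hedged configuration sketch combining both directives (the upstream name
> `backend` and the `/upload` location are invented for illustration):
>
>     location /upload {
>         # Stream the body to the upstream instead of spooling it to disk.
>         # Per the note above, upstream retries and the $request_body /
>         # $request_body_file variables no longer work in this mode.
>         proxy_request_buffering off;
>         client_body_postpone_sending 64k;
>         proxy_pass http://backend;
>     }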
>
Hello,
This patch sounds exactly like what I need as well!
I assume it works for both POST and PUT requests?
Thanks,
-- Pasi
> Hello!
> @yaoweibin
>
> If you are eager for this feature, you could try my
> patch: [2]https://github.com/taobao/tengine/pull/91. This patch has
> been running on our production servers.
>
> what's the nginx version your patch is based on?
> Thanks!
> On Fri, Jan 11, 2013 at 5:17 PM, Weibin Yao <[3]yaoweibin@gmail.com> wrote:
>
> I know the nginx team is working on it. You can wait for it.
> If you are eager for this feature, you could try my
> patch: [4]https://github.com/taobao/tengine/pull/91. This patch has
> been running on our production servers.
>
> 2013/1/11 li zJay <[5]zjay1987@gmail.com>
>
> Hello!
> is it possible for nginx not to buffer the client body before
> handing the request to the upstream?
> we want to use nginx as a reverse proxy to upload very big files
> to the upstream, but the default behavior of nginx is to save the
> whole request to the local disk first before handing it to the
> upstream, which makes it impossible for the upstream to process the
> file on the fly while it is being uploaded, resulting in much higher
> request latency and server-side resource consumption.
> Thanks!
>
> --
> Weibin Yao
> Developer @ Server Platform Team of Taobao
>
> --
> Weibin Yao
> Developer @ Server Platform Team of Taobao
>
> References
>
> Visible links
> 1. mailto:zjay1987@gmail.com
> 2. https://github.com/taobao/tengine/pull/91
> 3. mailto:yaoweibin@gmail.com
> 4. https://github.com/taobao/tengine/pull/91
> 5. mailto:zjay1987@gmail.com
> 6. mailto:nginx@nginx.org
> 7. http://mailman.nginx.org/mailman/listinfo/nginx
> 8. mailto:nginx@nginx.org
> 9. http://mailman.nginx.org/mailman/listinfo/nginx
> 10. mailto:nginx@nginx.org
> 11. http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx