On Wednesday 19 November 2014 16:56:41 attozk wrote:
> I am trying to understand the reason for buffering an incoming request
> body (client_body_buffer_size), in which nginx either keeps the body in
> memory or writes it to a file under client_body_temp_path (depending on its size).
>
> What are the performance advantages and/or technical challenges of such an
> approach, as opposed to piping even small requests directly to the remote
> server without buffering, as e.g. http://tengine.taobao.org/ does?
>
> Nginx allows disabling buffering of the upstream response, in which case
> the response is sent to the client synchronously while nginx is still
> receiving it; why isn't the opposite possible? What are the technical
> challenges and pros/cons of writing to disk (client_body_temp_path)
> versus buffering in memory (client_body_buffer_size)?
>
> Please share your thoughts.
>
[..]
I think the reasons are pretty well covered in this article:
http://www.aosabook.org/en/nginx.html
Clients are usually slow, and backend resources are usually expensive,
so nginx tries not to keep the backend busy while a client is slowly
uploading data.
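
Concretely, the behaviour being asked about is controlled by a few
directives; a minimal sketch with purely illustrative values (the
backend address is made up):

    events { }

    http {
        upstream backend {
            server 127.0.0.1:8080;    # hypothetical backend address
        }

        server {
            listen 80;

            location /upload/ {
                # Bodies up to 16k are held in memory; anything larger is
                # written to a temporary file under client_body_temp_path
                # before it is handed to the upstream.
                client_body_buffer_size 16k;
                client_body_temp_path   /var/cache/nginx/client_temp;
                client_max_body_size    100m;

                # Buffering in the other direction (upstream response to
                # the client) can already be switched off:
                proxy_buffering off;

                proxy_pass http://backend;
            }
        }
    }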
Support for unbuffered upload is on our roadmap:
http://trac.nginx.org/nginx/roadmap
It's not an easy feature to implement and requires significant
changes in nginx internals. Part of this work was done when
chunked transfer encoding for requests was introduced.
wbr, Valentin V. Bartenev
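
For readers coming to this thread later: the unbuffered-upload item on
that roadmap eventually shipped as the proxy_request_buffering
directive (nginx 1.7.11). A minimal sketch of how it is used, dropping
it into a proxied location like the one shown earlier:

    location /upload/ {
        # Send the request body to the upstream as it arrives instead of
        # buffering it in memory or in a temp file first.
        proxy_request_buffering off;

        # Needed so a chunked client body can be forwarded without
        # falling back to buffering.
        proxy_http_version 1.1;

        proxy_pass http://backend;
    }

With request buffering disabled, the body goes to the upstream as it
arrives, so a slow client occupies the backend connection for the whole
upload -- exactly the trade-off described above.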