Re: Revisiting 100-continue with unbuffered proxying

Maxim Dounin
May 01, 2021 03:32AM
Hello!

On Sat, May 01, 2021 at 12:38:30AM -0400, kbolino wrote:

> Use case: Large uploads (hundreds of megabytes to tens of gigabytes) where
> nginx is serving as a reverse proxy and load balancer. The upstream servers
> can get bogged down, and when they do, they apply backpressure by responding
> with 503 status code.
>
> Problem: Naively implemented, the client sends the entire request body off
> to the server, then waits to find out that the server can't handle the
> request. Time and network bandwidth are wasted, and the client has to retry
> the request.
>
> Partial solution: Using an idempotent request method, with
> "proxy_request_buffering on", and "proxy_next_upstream http_503", nginx will
> accept the upload from the client once, but try each server in succession
> until one works. Fortunately, nginx will set header "Expect: 100-continue"
> on each proxied request and will not send the request body off to an
> upstream server that isn't ready to receive it. However, nginx won't even

No, this is not how it works: nginx never uses "Expect:
100-continue" on requests to backends. It is, however, smart
enough to stop sending the body as soon as the backend server
responds with an error, so (almost) no bandwidth is wasted.
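
For reference, the buffered setup described above boils down to
something like this (a sketch only; hostnames, ports, and sizes are
placeholders, and the default error/timeout conditions are kept in
proxy_next_upstream alongside http_503):

    upstream backends {
        server app1.example.com:8080;
        server app2.example.com:8080;
    }

    server {
        listen 80;

        location /upload {
            proxy_pass http://backends;
            # default: read the whole request body before proxying
            proxy_request_buffering on;
            # retry the next server on errors, timeouts, and 503s
            proxy_next_upstream error timeout http_503;
            # allow the large uploads described above
            client_max_body_size 20g;
        }
    }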

This is, in fact, what the HTTP specification suggests all
clients should do
(https://tools.ietf.org/html/rfc7230#section-6.5):

    A client sending a message body SHOULD monitor the network connection
    for an error response while it is transmitting the request. If the
    client sees a response that indicates the server does not wish to
    receive the message body and is closing the connection, the client
    SHOULD immediately cease transmitting the body and close its side of
    the connection.

The simplest solution would be to fix the client to do the
same.
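
As an illustration only (not from the original thread), here is a
minimal Python sketch of that behaviour: stream the body in chunks,
and poll the socket for an early response before each chunk. Host,
port, and path are placeholders.

    import select
    import socket

    def upload(host, port, path, chunks):
        # Open a connection and send a chunked PUT request header.
        sock = socket.create_connection((host, port))
        sock.sendall((
            f"PUT {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Transfer-Encoding: chunked\r\n"
            "\r\n"
        ).encode())
        for chunk in chunks:
            # If the server has already answered, it does not want
            # the rest of the body: stop transmitting (RFC 7230, 6.5).
            readable, _, _ = select.select([sock], [], [], 0)
            if readable:
                break
            sock.sendall(b"%x\r\n%s\r\n" % (len(chunk), chunk))
        else:
            sock.sendall(b"0\r\n\r\n")  # last chunk: body sent in full
        response = sock.recv(65536)
        sock.close()
        return response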

> begin to send a proxied request to any upstream server until the initial
> request body upload from the client has completed. Also, the entire request
> body has to be stored somewhere local to nginx and the speed of that storage
> has a direct impact on the performance of the whole process.
>
> Next solution idea: Have the *client* set header "Expect: 100-continue".
> Then the client won't send the request body until nginx can find an upstream
> server to handle the request. However, this is not how things work today.
> Nginx will unconditionally accept the request with "100 Continue" regardless
> of upstream server status. With buffering enabled, this makes sense, since
> nginx wants to aggressively buffer the request body so it can re-send it if
> needed.
>
> Refined solution idea: Disable buffering. Unfortunately, while setting
> "proxy_request_buffering off" and "proxy_http_version 1.1" does disable
> buffering, it doesn't disable nginx from immediately telling the client "100
> Continue". Moreover, nginx only tries one upstream server before giving up,
> probably because it has no buffered copy of the request body to send to the
> next server on behalf of the client. Yet if nginx delayed sending "100
> Continue" back to the client, it could have taken a little bit more time to
> find a viable upstream server.
>
> I did some digging before bringing this topic up, and I find a proposed
> patch
> (http://mailman.nginx.org/pipermail/nginx-devel/2016-August/008736.html), a
> request in the forum
> (https://forum.nginx.org/read.php?2,212533,212533#msg-212533), and a trac
> ticket (https://trac.nginx.org/nginx/ticket/493) all to disable automatic
> handling of the 100-continue mechanism. The trac ticket was closed because
> unbuffered upload was not supported yet, the patch was rejected because it
> sounded like it was the other side's problem to solve, and finally the forum
> request was rejected because nginx was "designed as [an] accelerator to
> minimize backend interaction with a client".

Just in case: the patch simply makes nginx ignore the "Expect:
100-continue" header from the client; it won't make nginx pass
the header to backend servers or accept 100 (Continue) responses
from backends.
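
For completeness, the unbuffered variant quoted above ("Refined
solution idea") is roughly the following (again a sketch with
placeholder names):

    location /upload {
        proxy_pass http://backends;
        # HTTP/1.1 to the backend allows chunked streaming of the body
        proxy_http_version 1.1;
        # stream the body as it arrives; nothing is kept for retries,
        # which is why only one upstream server can be tried
        proxy_request_buffering off;
        client_max_body_size 20g;
    }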

> As to that last quoted part, I agree! I'd rather have nginx figure things
> out than to have the client finagle with the backend server too much. So
> here's what I think should happen. First, the client's Expect header should
> not get directly passed on to the upstream server nor should nginx ignore
> the header entirely (i.e., keep these things the same as they are today).
> Instead, with unbuffered upload, an upstream block with multiple servers,
> and proxy_next_upstream set to try another server when one fails:
>
> 1. Client sends request with "Expect: 100-continue" to nginx
> 2. Nginx receives request but does not respond with anything yet
> 3. Nginx tries the first eligible server by sending a proxied request with
> "Expect: 100-continue" (not passthrough; this is nginx's own logic and this
> part exists today as far as I can tell)
> 4. If the server responds "100 Continue" to nginx *then* nginx responds "100
> Continue" to the client and the unbuffered upload proceeds to that server
> 5. If instead the server fails in a way that proxy_next_upstream is
> configured to handle, then nginx still doesn't respond to the client, and
> now tries to reach the next eligible server instead.
> 6. This process proceeds until a server willing to accept the request is
> found or all servers have been tried and none are available, at which point
> nginx sends an appropriate non-100 response to the client (502/503/504).
>
> (Caveat: If an upstream server fails *after* already accepting a request
> with "100 Continue", then nginx still has to give up since there's no
> buffering.)
>
> Thoughts? I know there are other ways to solve this problem (e.g. S3-style
> multipart uploads), but there is a convenience to the "Expect: 100-continue"
> mechanism and it is pretty widely supported. I don't think this goes against
> the grain of what nginx is trying to be, especially since unbuffered uploads
> are supported now.

While something like this might be more efficient than what we
currently have, as of now there is no infrastructure in nginx to
handle intermediate 1xx responses from backends (and to send them
to clients), so it will not be trivial to implement.
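
For readers unfamiliar with the mechanism, the client side of the
Expect handshake (steps 1 and 4 in the proposal above) looks roughly
like this. This is a sketch with placeholder host and path, in the
same vein as the earlier example:

    import socket

    def upload_with_expect(host, port, path, body, timeout=5.0):
        sock = socket.create_connection((host, port))
        sock.sendall((
            f"PUT {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {len(body)}\r\n"
            "Expect: 100-continue\r\n"
            "\r\n"
        ).encode())
        sock.settimeout(timeout)
        try:
            # Wait for "HTTP/1.1 100 Continue" before sending the body.
            interim = sock.recv(4096)
        except socket.timeout:
            interim = b""  # RFC 7231, 5.1.1: may send the body anyway
        if interim and not interim.startswith(b"HTTP/1.1 100"):
            sock.close()
            return interim  # final (error) response; body never sent
        sock.sendall(body)
        final = sock.recv(65536)
        sock.close()
        return final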

--
Maxim Dounin
http://mdounin.ru/