Re: [PATCH 2 of 4] Upstream: fixed usage of closed sockets with filter finalization

Maxim Dounin
January 29, 2024 10:08PM
Hello!

On Mon, Jan 29, 2024 at 05:21:43PM +0400, Sergey Kandaurov wrote:

>
> > On 29 Jan 2024, at 10:43, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >
> > Hello!
> >
> > On Fri, Jan 26, 2024 at 04:26:00PM +0400, Sergey Kandaurov wrote:
> >
> >>> On 27 Nov 2023, at 06:50, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >>>
> >>> # HG changeset patch
> >>> # User Maxim Dounin <mdounin@mdounin.ru>
> >>> # Date 1701049758 -10800
> >>> # Mon Nov 27 04:49:18 2023 +0300
> >>> # Node ID faf0b9defc76b8683af466f8a950c2c241382970
> >>> # Parent a5e39e9d1f4c84dcbe6a2f9e079372a3d63aef0b
> >>> Upstream: fixed usage of closed sockets with filter finalization.
> >>>
> >>> When filter finalization is triggered when working with an upstream server,
> >>> and error_page redirects request processing to some simple handler,
> >>> ngx_http_finalize_request() triggers request termination when the response
> >>> is sent. In particular, via the upstream cleanup handler, nginx will close
> >>> the upstream connection and the corresponding socket.
> >>>
> >>> Still, this can happen with ngx_event_pipe() on the stack. While
> >>> the code will set p->downstream_error due to NGX_ERROR returned from the
> >>> output filter chain by filter finalization, the error will otherwise be
> >>> ignored till control returns to ngx_http_upstream_process_request().
> >>> Meanwhile, the event pipe might try reading from the (already closed)
> >>> socket, resulting in "readv() failed (9: Bad file descriptor) while
> >>> reading upstream" errors (or even segfaults with SSL).
> >>>
> >>> Such errors were seen with the following configuration:
> >>>
> >>>     location /t2 {
> >>>         proxy_pass http://127.0.0.1:8080/big;
> >>>
> >>>         image_filter_buffer 10m;
> >>>         image_filter resize 150 100;
> >>>         error_page 415 = /empty;
> >>>     }
> >>>
> >>>     location /empty {
> >>>         return 204;
> >>>     }
> >>>
> >>>     location /big {
> >>>         # big enough static file
> >>>     }
> >>>
> >>> Fix is to set p->upstream_error in ngx_http_upstream_finalize_request(),
> >>> so the existing checks in ngx_event_pipe_read_upstream() will prevent
> >>> further reading from the closed upstream connection.
> >>>
> >>> Similarly, p->upstream_error is now checked when handling events at
> >>> ngx_event_pipe() exit, as checking p->upstream->fd is not enough if
> >>> keepalive upstream connections are being used and the connection was
> >>> saved to cache during request termination.
> >>>
> >>
> >> Setting p->upstream_error in ngx_http_upstream_finalize_request()
> >> may look suspicious, because it used to be set only on connection
> >> errors such as an upstream timeout or a recv error, or, as a recently
> >> introduced exception in the fastcgi module, also when the FastCGI
> >> record ends prematurely, before receiving all the expected content.
> >> But technically I think this is quite correct, because we no longer
> >> want to receive further data, and also (as you mention in the
> >> commit log) the flag is set in the same place in
> >> ngx_http_upstream_finalize_request() where the upstream connection
> >> socket is closed.
> >> So I think it should be fine.
> >
> > The biggest concern I personally see here is with the added
> > p->upstream_error check at ngx_event_pipe() exit. If there is a
> > real upstream error, such as when the connection is reset by the
> > upstream server, and if we want the pipe to be active for some
> > time (for example, if we want it to continue writing to the
> > downstream connection), there will be no ngx_handle_read_event()
> > call. For level-triggered event methods this means that the read
> > event for the upstream connection will be generated again and
> > again.
> >
> > This shouldn't be a problem for existing ngx_event_pipe() uses,
> > though, as p->upstream_error anyway triggers
> > ngx_http_upstream_finalize_request().
> >
> > Still, we can consider introducing a separate flag, such as
> > p->upstream_closed, or clearing p->upstream, and checking these in
> > ngx_event_pipe() instead. This would probably be a clearer
> > solution.
> >
> > Updated patch below:
> >
> > # HG changeset patch
> > # User Maxim Dounin <mdounin@mdounin.ru>
> > # Date 1706510064 -10800
> > # Mon Jan 29 09:34:24 2024 +0300
> > # Node ID 4a91a03dcd8df0652884ed6ebe9f7437ce82fd26
> > # Parent 7b630f6487068f7cc9dd83762fb4ea39f2f340e9
> > Upstream: fixed usage of closed sockets with filter finalization.
> >
> > When filter finalization is triggered when working with an upstream server,
> > and error_page redirects request processing to some simple handler,
> > ngx_http_finalize_request() triggers request termination when the response
> > is sent. In particular, via the upstream cleanup handler, nginx will close
> > the upstream connection and the corresponding socket.
> >
> > Still, this can happen with ngx_event_pipe() on the stack. While
> > the code will set p->downstream_error due to NGX_ERROR returned from the
> > output filter chain by filter finalization, the error will otherwise be
> > ignored till control returns to ngx_http_upstream_process_request().
> > Meanwhile, the event pipe might try reading from the (already closed)
> > socket, resulting in "readv() failed (9: Bad file descriptor) while
> > reading upstream" errors (or even segfaults with SSL).
> >
> > Such errors were seen with the following configuration:
> >
> >     location /t2 {
> >         proxy_pass http://127.0.0.1:8080/big;
> >
> >         image_filter_buffer 10m;
> >         image_filter resize 150 100;
> >         error_page 415 = /empty;
> >     }
> >
> >     location /empty {
> >         return 204;
> >     }
> >
> >     location /big {
> >         # big enough static file
> >     }
> >
> > Fix is to clear p->upstream in ngx_http_upstream_finalize_request(),
> > and ensure that p->upstream is checked in ngx_event_pipe_read_upstream()
> > and when handling events at ngx_event_pipe() exit.
> >
> > diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c
> > --- a/src/event/ngx_event_pipe.c
> > +++ b/src/event/ngx_event_pipe.c
> > @@ -57,7 +57,9 @@ ngx_event_pipe(ngx_event_pipe_t *p, ngx_
> >          do_write = 1;
> >      }
> >
> > -    if (p->upstream->fd != (ngx_socket_t) -1) {
> > +    if (p->upstream
> > +        && p->upstream->fd != (ngx_socket_t) -1)
> > +    {
> >          rev = p->upstream->read;
> >
> >          flags = (rev->eof || rev->error) ? NGX_CLOSE_EVENT : 0;
> > @@ -108,7 +110,9 @@ ngx_event_pipe_read_upstream(ngx_event_p
> >      ngx_msec_t    delay;
> >      ngx_chain_t  *chain, *cl, *ln;
> >
> > -    if (p->upstream_eof || p->upstream_error || p->upstream_done) {
> > +    if (p->upstream_eof || p->upstream_error || p->upstream_done
> > +        || p->upstream == NULL)
> > +    {
> >          return NGX_OK;
> >      }
> >
> > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
> > --- a/src/http/ngx_http_upstream.c
> > +++ b/src/http/ngx_http_upstream.c
> > @@ -4561,6 +4561,10 @@ ngx_http_upstream_finalize_request(ngx_h
> >
> >      u->peer.connection = NULL;
> >
> > +    if (u->pipe) {
> > +        u->pipe->upstream = NULL;
> > +    }
> > +
> >      if (u->pipe && u->pipe->temp_file) {
> >          ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> >                         "http upstream temp fd: %d",
> >
>
> Indeed, this fix looks more isolated; I like it.

Pushed to http://mdounin.ru/hg/nginx, thanks for the review.

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel