Nginx stops sending request body when upstream server consuming body while returning data

Posted by ling on September 09, 2019 11:27AM
Hi there,

I use Nginx as a reverse proxy. My client sends 20M of data to Nginx, and Nginx receives all of it (I can see the 20M file in /var/lib/nginx). But when Nginx tries to proxy the 20M of data to the upstream server, only half of it is sent. My server's function reads a file while returning the encoded file: the server starts to reply before it has received all of the request body, for efficiency.

The debug log shows: writev() not ready (11: Resource temporarily unavailable) while writing to upstream server.
The following log shows where Nginx stops sending data to the upstream server:
2019/09/09 11:02:47 [debug] 15570#0: *1 http body new buf t:1 f:0 00005654FB2DE420, pos 00005654FB2DE420, size: 8192 file: 0, size: 0
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer buf fl:1 s:8192
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer in: 00005654FB28D800
2019/09/09 11:02:47 [debug] 15570#0: *1 writev: 8192 of 8192
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer out: 0000000000000000
2019/09/09 11:02:47 [debug] 15570#0: *1 http read client request body
2019/09/09 11:02:47 [debug] 15570#0: *1 recv: eof:0, avail:1
2019/09/09 11:02:47 [debug] 15570#0: *1 recv: fd:3 8192 of 8192
2019/09/09 11:02:47 [debug] 15570#0: *1 http client request body recv 8192
2019/09/09 11:02:47 [debug] 15570#0: *1 http body new buf t:1 f:0 00005654FB2DE420, pos 00005654FB2DE420, size: 8192 file: 0, size: 0
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer buf fl:1 s:8192
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer in: 00005654FB28D800
2019/09/09 11:02:47 [debug] 15570#0: *1 writev: 8192 of 8192
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer out: 0000000000000000
2019/09/09 11:02:47 [debug] 15570#0: *1 http read client request body
2019/09/09 11:02:47 [debug] 15570#0: *1 recv: eof:0, avail:1
2019/09/09 11:02:47 [debug] 15570#0: *1 recv: fd:3 8192 of 8192
2019/09/09 11:02:47 [debug] 15570#0: *1 http client request body recv 8192
2019/09/09 11:02:47 [debug] 15570#0: *1 http body new buf t:1 f:0 00005654FB2DE420, pos 00005654FB2DE420, size: 8192 file: 0, size: 0
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer buf fl:1 s:8192
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer in: 00005654FB28D800
2019/09/09 11:02:47 [debug] 15570#0: *1 writev: 4800 of 8192
2019/09/09 11:02:47 [debug] 15570#0: *1 writev: -1 of 3392
2019/09/09 11:02:47 [debug] 15570#0: *1 writev() not ready (11: Resource temporarily unavailable)
2019/09/09 11:02:47 [debug] 15570#0: *1 chain writer out: 00005654FB28D800
2019/09/09 11:02:47 [debug] 15570#0: *1 http read client request body
2019/09/09 11:02:47 [debug] 15570#0: *1 event timer del: 10: 1044869538
2019/09/09 11:02:47 [debug] 15570#0: *1 event timer add: 10: 60000:1044879558
2019/09/09 11:02:47 [debug] 15570#0: timer delta: 9736
2019/09/09 11:02:47 [debug] 15570#0: worker cycle
2019/09/09 11:02:47 [debug] 15570#0: epoll timer: 60000
2019/09/09 11:02:47 [debug] 15570#0: epoll: fd:3 ev:0005 d:00007FD0DDF9C1E0
2019/09/09 11:02:47 [debug] 15570#0: *1 http run request: "/dbsec/encodeData?"
2019/09/09 11:02:47 [debug] 15570#0: *1 http upstream read request handler
2019/09/09 11:02:47 [debug] 15570#0: *1 http upstream send request
2019/09/09 11:02:47 [debug] 15570#0: *1 http upstream send request body
2019/09/09 11:02:47 [debug] 15570#0: *1 http read client request body
2019/09/09 11:02:47 [debug] 15570#0: *1 event timer: 10, old: 1044879558, new: 1044879622
2019/09/09 11:02:47 [debug] 15570#0: *1 http run request: "/dbsec/encodeData?"
2019/09/09 11:02:47 [debug] 15570#0: *1 http upstream check client, write event:1, "/dbsec/encodeData"
2019/09/09 11:02:47 [debug] 15570#0: epoll: fd:10 ev:0001 d:00007FD0DDF9C2C8
2019/09/09 11:02:47 [debug] 15570#0: *1 http upstream request: "/dbsec/encodeData?"
2019/09/09 11:02:47 [debug] 15570#0: *1 http upstream process header
2019/09/09 11:02:47 [debug] 15570#0: *1 malloc: 00005654FB28DE80:4096
2019/09/09 11:02:47 [debug] 15570#0: *1 recv: eof:0, avail:1
2019/09/09 11:02:47 [debug] 15570#0: *1 recv: fd:10 4096 of 4096
2019/09/09 11:02:47 [debug] 15570#0: *1 http proxy status 200 "200 OK"
2019/09/09 11:02:47 [debug] 15570#0: *1 http proxy header: "Date: Mon, 09 Sep 2019 15:02:37 GMT"
2019/09/09 11:02:47 [debug] 15570#0: *1 http proxy header: "Content-Type: application/octet-stream"
2019/09/09 11:02:47 [debug] 15570#0: *1 http proxy header: "Transfer-Encoding: chunked"
2019/09/09 11:02:47 [debug] 15570#0: *1 http proxy header: "Server: Jetty(9.4.8.v20171121)"
2019/09/09 11:02:47 [debug] 15570#0: *1 http proxy header done
2019/09/09 11:02:47 [debug] 15570#0: *1 HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 09 Sep 2019 15:02:47 GMT
Content-Type: application/octet-stream
Transfer-Encoding: chunked
Connection: keep-alive


Does someone know what happens here and why the write stops? Thanks in advance!
It's an old Jetty. The current version is 9.4.20, so you may consider upgrading.

In any case, Jetty limits the size of uploads by default. You will have to adjust this parameter to accommodate whatever makes sense.

https://www.eclipse.org/jetty/documentation/current/setting-form-size.html

Depending on which framework is used for the application on the Jetty side, there may be additional upload size limitations (e.g., in Spring Boot).
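
For illustration only, the kind of settings involved might look like this (hypothetical values; the Jetty property limits form-encoded bodies specifically, so check the linked documentation for your Jetty version and framework):

# Raise Jetty's form content size limit via a system property (value in bytes):
java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=104857600 -jar start.jar

# If the application is Spring Boot, the multipart limits live in application.properties:
#   spring.servlet.multipart.max-file-size=100MB
#   spring.servlet.multipart.max-request-size=100MB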

The jetty log should be able to tell you what is happening.

--j.
Hi j,

Thanks for your reply!
I tried updating the Jetty version to 9.4.19, but still got the same result (9.4.20 causes errors in my environment). I think it's not caused by Jetty, because when the client sends the request to the server directly, without a proxy, it can process 1G of data.

Also, inspired by this thread: https://forum.nginx.org/read.php?2,227175,227184#msg-227184, I found that on the server side, if I don't return any encoded data back to the client, all of the client's request body is proxied to the upstream server.
Do you think it's something that can be configured in Nginx? Thanks!

Ling
Ok, then let's start with the usual suspects: did you set client_max_body_size?
Yes, currently my location configuration looks like this:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_request_buffering off;
    proxy_pass http://127.0.0.41:8108/;

    client_max_body_size 0;
}

The client sends chunked-encoded data to the proxy, and the proxy in turn sends chunked-encoded data to the server.
Previously, I didn't set proxy_request_buffering off, so the proxy sent the entire body to the server with a "Content-Length" header. But the proxy can't send all of the request body to the server in either situation.
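
For reference, a chunked upload like my client's can be simulated with curl (a sketch, not my actual client; the file and URL are placeholders). Forcing the header makes curl switch from Content-Length to chunked transfer:

curl -v -H "Transfer-Encoding: chunked" -T /tmp/bigfile http://127.0.0.1:8080/upload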



Hi j,

I found something interesting. When I set:
proxy_request_buffering off;
proxy_buffering off;
Nginx always proxied about 10M of data.
But when I set:
proxy_request_buffering on;
proxy_buffering off;
Nginx always proxied about 18M of data. I guess the issue is related to request buffering.

Do you have any suggestions? Thanks!
With proxy_request_buffering on, I also tried increasing client_body_buffer_size to 50M, but still only 18M gets proxied.
Out of curiosity, I set up a minimal scenario to reproduce your effect. I put this segment into the http section of my nginx.conf:

server {
    listen 127.0.0.1:8080;
    client_body_temp_path /tmp/nginx;
    client_body_in_file_only clean;
    client_body_buffer_size 1m;
    client_max_body_size 0;

    location = /send {
        proxy_http_version 1.1;
        proxy_request_buffering on;
        proxy_cache off;
        proxy_pass_request_body on;
        proxy_pass http://127.0.0.1:8081;
        proxy_buffering off;
        proxy_redirect off;
    }
}

Then I set up a netcat on port 8081 to see how much would arrive there:

nc -l -o /tmp/in 8081 > /dev/null

Create a file with random data of size 30 MB:

dd if=/dev/urandom of=/tmp/bigstuff bs=1024 count=30720

After starting NGINX, run a curl to post this file:

curl -v -X PUT --data-binary "@/tmp/bigstuff" http://127.0.0.1:8080/send

You will see that the netcat shows a file of 30 MB plus around 176 bytes. That's the HTTP header.

Can you verify this with your setup?

If this works, it rules out NGINX as the culprit that cannot handle files of this size. There may still be something in the communication between the components. If you try curl with -d instead of --data-binary, it will hang at some point because -d defaults to --data-ascii. If you have size limitations in Jetty or your receiving framework, you may also run into trouble.

--j.
Hi j,

Thanks very much for your reply! I configured my Nginx with your configuration and ran netcat to check the received data as you suggested. As a result, I did see 30M of data sent to 8081. But I think Nginx works in this situation because the server (8081) doesn't send any data back at the same time. As I mentioned previously, on my server side, when I don't return any data back, Nginx can also proxy all of the data to the upstream server.

An interesting thing is that when I use curl's --limit-rate option to slow down the transfer, everything works perfectly (30M of encoded data is received back from the server).
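
The working invocation is essentially your earlier curl call plus a rate limit (a sketch; the rate value is just what I happened to pick):

curl -v -X PUT --limit-rate 500k --data-binary "@/tmp/bigstuff" http://127.0.0.1:8080/send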

Thanks,
Ling
That's reassuring in a way :-)

It sounds to me like you are trying to do with HTTP/1.1 something you would normally do with HTTP/2.
When the server sends a response (and a status code), NGINX may rightfully stop sending more data to the server. Can you tell Jetty to send its reply (including the status code) only AFTER all data has been received?

If a final status code is delivered (probably with a Connection: close header), it is fine to close the request channel as well.

My guess is you should look at HTTP/2 or WebSockets.
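
If you go the WebSocket route, the NGINX side is a minimal sketch like this (the location and upstream address are placeholders); the Upgrade and Connection headers must be set explicitly because hop-by-hop headers are not passed to the upstream by default:

location /ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://127.0.0.1:8081;
}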

--j.
Hi j,

Yes, Jetty can send data back after it has received all the data, but the reasons why I want to keep the current design are:
1. No buffering on the server side.
2. Performance would be much better if sending and receiving happen at the same time.

I guess your suggestion is right; I should probably look into HTTP/2 and WebSockets. Thanks again for your help! :)

Ling