> You should check tcpdump (or wireshark) to see where actually 12.5MB
> of data have been stuck.
Wireshark confirms my assumption: all the data is buffered by nginx. Moreover, I see some buggy behavior that I've observed quite often.
This is a screenshot of the localhost TCP capture: http://i.imgur.com/9Rz6Acs.png
You can see that after 1327 seconds nginx has ACKed 18.5MB (which works out to 13.9KB/s). node actually writes at 20KB/s to the socket and internally buffers all unsent data. At this point node stops sending any data, and 30 seconds later nginx closes the socket (at 1399s).
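The 13.9KB/s figure is just the ACKed byte count divided by elapsed time; a quick sanity check on the numbers read off the capture:

```python
# Sanity-check the average rate nginx ACKed data from node.
MB = 1000 * 1000  # decimal units, matching the byte counts in the capture

acked_by_nginx = 18.5 * MB  # bytes nginx ACKed by the 1327s mark
elapsed = 1327              # seconds

rate_kbps = acked_by_nginx / elapsed / 1000
print(round(rate_kbps, 1))  # ~13.9 KB/s average, vs node's 20 KB/s write limit
```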
Then nginx goes on to deliver all the data it has buffered, and once it finishes sending the 18.5MB it received from node before closing that TCP connection, it also closes the connection to wget. wget simply restarts the transfer with a new HTTP range request, downloading from the 18.5MB offset. On the screenshot you can see that around 1820s nginx sends a new GET request to node (that's the range request).
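For reference, wget's resume looks roughly like this on the wire (a sketch: the path and host are placeholders, and the offset is simply the bytes wget already has on disk):

```python
# Hypothetical reconstruction of the range request wget issues to resume;
# only the Range header matters here, path/host are made up.
offset = 18_500_000  # bytes already downloaded before the connection dropped
request = (
    "GET /file HTTP/1.1\r\n"
    "Host: example.com\r\n"
    f"Range: bytes={offset}-\r\n"
    "\r\n"
)
print(request)
```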
Here you can see the outgoing packets from node around the time nginx closed the socket to node at 1399s: http://i.imgur.com/pdnDIFS.png
You can see that by this time the remote end (wget) has ACKed exactly 14MB (I run wget with a 10KB/s rate limit).
So, without counting any TCP buffers, nginx buffers something like 5MB of data. Moreover, reviewing the node->nginx capture, nginx was clearly reading at full speed (there is a 20KB/s limit on the node side) until, at some point, something triggered it to stop. That happened at 391s, by which point nginx had ACKed 7.8MB (exactly 20KB/s). At the same time wget had ACKed only 4MB, so nginx was buffering around 4MB when it started to slow its reads from node.
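The buffered amount is just the gap between what nginx has ACKed from node and what wget has ACKed from nginx; checking both points from the captures:

```python
# Estimate nginx's buffered data as (ACKed from node) - (ACKed by wget).
MB = 1000 * 1000

# At 391s, when nginx stops reading full speed from node:
nginx_acked = 7.8 * MB  # == 391s * 20 KB/s
wget_acked = 4.0 * MB   # == 391s * ~10 KB/s
print(round((nginx_acked - wget_acked) / MB, 1))  # ~3.8 MB buffered

# At 1399s, when nginx closes the socket to node:
nginx_acked = 18.5 * MB
wget_acked = 14.0 * MB  # == 1399s * 10 KB/s
print(round((nginx_acked - wget_acked) / MB, 1))  # ~4.5 MB buffered
```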
So, the configs do not have any effect. What else should I check? In this scenario nginx should effectively read from node at 10KB/s (plus some fixed buffer), and that doesn't seem to work properly in nginx.
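For completeness, these are the kinds of directives that are supposed to bound this buffering (a sketch with illustrative values and an illustrative upstream address, not my exact config):

```nginx
location / {
    proxy_pass http://127.0.0.1:8000;  # node upstream (address is illustrative)
    proxy_buffering off;               # stream the response instead of buffering it
    proxy_buffer_size 4k;              # buffer used for the response header
    proxy_buffers 8 4k;                # in-memory buffers used when buffering is on
    proxy_max_temp_file_size 0;        # never spool the response to a temp file on disk
}
```

Even with buffering nominally disabled or bounded like this, the captures above show nginx accumulating several MB, which is what I'm asking about.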