>> AFAIK, two different requests are served separately, meaning one request
>> can be sent while another is still being responded to.
>>
>> If you are talking about the same request, then it is only sent to the
>> next upstream server after an 'unsuccessful attempt' at communicating
>> with the current upstream server. What counts as an unsuccessful attempt
>> is defined by the *_next_upstream directives of the relevant modules
>> (for example proxy_next_upstream
>> <http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream>
>> ).
>> That means that, by definition, no response has come back by the time
>> the request is retried on the next server.
>>
I am talking about how two successive requests (from the client side) are handled on the same already-established keepalive socket towards an upstream server. On that same socket, towards the same upstream server, is it possible that the nginx upstream module starts sending the subsequent request before the current one is completely done? By "done" I mean the complete Content-Length has been transferred to the client side.
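For concreteness, here is a minimal sketch of the kind of configuration the question assumes (the upstream name, ports, and parameter values are illustrative, not from the original thread): an upstream block with keepalive connections enabled, plus the proxy_next_upstream retry conditions mentioned above. Note that upstream keepalive additionally requires HTTP/1.1 and clearing the Connection header, per the nginx documentation.

```nginx
# Illustrative setup (assumed names/values): upstream keepalive
# combined with proxy_next_upstream retry conditions.
upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;

    # Keep up to 16 idle connections per worker open to the upstream,
    # so successive client requests can reuse the same upstream socket.
    keepalive 16;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;

        # Required for upstream keepalive: HTTP/1.1 and an empty
        # Connection header (otherwise nginx sends "Connection: close").
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # The request is tried on the next server only for these
        # "unsuccessful attempts".
        proxy_next_upstream error timeout http_502;
    }
}
```

Whether two requests can be in flight on that one upstream socket at the same time, as opposed to the socket merely being reused sequentially, is exactly the question here.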