We're having the same problem: "upstream sent too big header while reading response header from upstream", where the upstream is "passenger:unix:/passenger_helper_server:". I tried every one of proxy_buffers, proxy_busy_buffers_size, proxy_buffer_size, fastcgi_buffers, fastcgi_busy_buffers_size, fastcgi_buffer_size, and large_client_header_buffers. Isn't this error message comi...
by assistlydavid
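For readers hitting the same error, the directives listed above are the usual knobs. Below is a minimal sketch of the kind of buffer tuning being described, assuming a plain proxy_pass setup (Passenger's own nginx module may buffer differently); the sizes are illustrative, not values taken from this thread:

```nginx
http {
    # Buffer for the first part of the upstream response, which holds the
    # response headers; "upstream sent too big header" means this was too small.
    proxy_buffer_size       16k;

    # Number and size of buffers for the rest of the upstream response.
    proxy_buffers           8 16k;

    # Must stay below the total size of proxy_buffers minus one buffer.
    proxy_busy_buffers_size 32k;

    # Only matters when the client's own request headers are large.
    large_client_header_buffers 4 16k;
}
```

Note that nginx requires proxy_busy_buffers_size to be at least as large as proxy_buffer_size and smaller than the total of proxy_buffers minus one buffer, which is why these values are usually adjusted together rather than one at a time.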
That makes perfect sense, Maxim. I hadn't even thought about the HTTP/1.1 connections staying open. Thanks. David.
by assistlydavid
Hi Maxim, Thanks for the response. How is work allocated to the workers? I'm still curious as to why I'd see a large block of ELB-only traffic flushed to the log at the same time, rather than a mix. Any thoughts on that? Presumably, the chance of multiple workers flushing the same type of log messages (ELB-only) at the same time is very low, especially when we're seeing constant traffic f...
by assistlydavid
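As background on the flushing behaviour being asked about: one mechanism that can make blocks of similar entries land in a log at once is per-worker log buffering, where each worker writes its accumulated lines out only when its buffer fills or the flush interval expires. Whether buffered logging is actually in use in this setup is an assumption; the sketch below just illustrates the directive, with a made-up path, size, and interval:

```nginx
http {
    # Each worker keeps up to 64k of log lines in memory and writes them
    # to the file as a single block when the buffer fills or 5s pass, so
    # entries handled by one worker can appear in the log all at once.
    access_log /var/log/nginx/access.log combined buffer=64k flush=5s;
}
```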
Hi there, We're running multiple nginx web servers on EC2 behind multiple ELBs (load balancers). I'm seeing strange behaviour in our nginx system logs. This strange behaviour seems to coincide with brief outages spotted by our external monitoring (Chartbeat & New Relic). I'm not sure whether I'm on the right track here, investigating this strange logging, or if it's just a coincidence...
by assistlydavid