Quote:
Note well: there is quite a different mode of proxy_pass
operation, proxy_pass with variables, which may use nginx's
internal async resolver. In this mode nginx won't try to
resolve hostnames during configuration parsing, and nginx will
start perfectly well even when DNS isn't available. But this
a) requires additional configuration (you have to configure the IP of
your DNS server via the resolver directive);
b) is much more resource consuming;
c) relies on the internal nginx resolver, which is known to have
problems, at least in the stable branch.
Therefore I can't recommend using it in production.
----
Yes, this is exactly the problem that I am having. I am using nginx to
proxy videos from other video servers, YouTube and others. It works
great as long as resolution doesn't fail. For YouTube in particular, I
normally don't use the resolver anymore: I resolve the name in my PHP
app using a system nslookup command, request the URL as an IP-based
URL, and set the Host header manually for the connection, so that
nginx connects to the upstream server with the appropriate Host
header. This emulates a standard HTTP request to the domain name
fairly well, without nginx doing any resolution. I'm doing this for a
different reason than to work around the above problem, but it does
seem to work around it as a side benefit. I can't imagine, though,
that this is an efficient solution (running a shell command from
within PHP to resolve the names and storing the result in MySQL for
future reference), so I'm only using it for YouTube, where I need to,
and not for the other video sites that I access.
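Roughly, the nginx side of that setup looks like the sketch below. The
IP address and hostname here are placeholders, not my real config; in
practice the resolved IP comes from the PHP app rather than being
hard-coded:

    location /youtube-proxy/ {
        # placeholder IP: in reality, whatever the PHP app resolved
        # via nslookup and cached in MySQL
        proxy_pass http://203.0.113.10;
        # send the original hostname so the upstream sees a normal
        # request for the domain
        proxy_set_header Host www.youtube.com;
    }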
I guess it makes sense to use just IP-based URLs when your proxy_pass
directive is accessing sites under your control, since you should know
what the IPs are ahead of time, but in my case I am not: I am
accessing arbitrary URLs this way. Are there plans to fix the locking
issue when the DNS server becomes temporarily unavailable, or whatever
the current problems are with the async resolver? Or can we at least
kill nginx rather than having it stay in a zombified state? As you
said, having some requests randomly fail when the server appears to be
up is an administrative nightmare. Sometimes only one worker will fail
and the others will be fine, making it difficult to notice that you
need to restart nginx. Or at least add support for more than one
resolver line in the config so it can fall back to another resolver if
one is not responding. I would be happy to sponsor any or all of these
developments if anyone is interested in doing the work.
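For reference, my understanding is that the variables-based mode
described in the quote looks roughly like this (the resolver address
and backend hostname are placeholders):

    resolver 192.0.2.53;    # IP of your DNS server

    location /proxied/ {
        # because a variable appears in proxy_pass, the hostname is
        # resolved at request time via the internal async resolver
        # instead of once at startup
        set $upstream_host backend.example.com;
        proxy_pass http://$upstream_host;
    }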
Since we're on the subject of proxy_pass: when I'm doing something
like this and the proxied resource sends a 302, is there a way to have
nginx follow the 302 internally rather than sending the 302 to the
user's browser? I've had issues where I access a YouTube URL and need
to forward the video to the user via the proxy, but YouTube sends a
302, and nginx passes the 302 to the user rather than passing along
the video located at the redirected address. I currently try to follow
all redirects in the PHP app before passing the URL off to nginx, but
this is complicated and doesn't work 100% of the time, so if there's a
way to configure nginx to internally follow these redirects, that
would be ideal.
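One approach I've seen suggested, though I haven't verified it myself,
is to intercept the redirect with proxy_intercept_errors and re-proxy
to the upstream's Location header from a named location. It depends on
the resolver, so it has the same issues discussed above, and the names
below are placeholders:

    location /video-proxy/ {
        proxy_pass http://video-backend;      # placeholder upstream
        proxy_intercept_errors on;
        # hand 3xx responses to the named location instead of the client
        error_page 301 302 307 = @follow_redirect;
    }

    location @follow_redirect {
        resolver 192.0.2.53;                  # needed to resolve the target
        # re-proxy to wherever the upstream's Location header points
        set $redirect_target $upstream_http_location;
        proxy_pass $redirect_target;
    }

Something built into nginx to do this would still be much simpler.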
-Gabe
On Thu, Oct 22, 2009 at 5:13 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Hello!
>
> On Thu, Oct 22, 2009 at 03:53:32PM -0400, masom wrote:
>
>> But shouldn't nginx start anyway if the endpoint is not responding, and just keep trying to reach it?
>>
>> I can't really see why it would need to stop or crash when either the endpoint (Apache) or the DNS system is unavailable.
>>
>> Yes, it should return 5xx errors saying the endpoint is unreachable (DNS or server failure / not responding), but nginx should not "lock up" after one bad answer.
>>
>> Current problem:
>>
>> unit starts
>> DHCP kicks in
>> nginx gets started before the DHCP process has completed
>> nginx realizes that content.dev.local is not resolvable (DNS settings are not yet set by DHCP)
>> nginx exits
>> Browser on the unit starts and says the address is unreachable (as nginx did not start).
>>
>>
>> Shouldn't nginx just attempt to connect to the endpoint as requests come in?
>
> Probably I haven't explained this well enough.
>
> When nginx has something it can attempt to connect to, it will
> happily work. But in the case of failed name resolution during
> configuration parsing, it simply doesn't have an IP.
>
> When you write something like this in the config:
>
> location /pass-to-backend/ {
>     proxy_pass http://backend;
> }
>
> the hostname "backend" is resolved during config parsing via the
> standard function gethostbyname(). This function is blocking and
> therefore can't be used during request processing in nginx workers,
> as it would block all clients for an unknown period of time. So this
> function is only used during config parsing: the hostname "backend"
> is resolved to IP address(es), and later during request processing
> this IP is used without further DNS lookups.
>
> If "backend" can't be resolved during config parsing there are
> basically two options:
>
> 1. Work as is, always returning 502 when user tries to access uri
> that should be proxied. We have no ip to connect() to, remember?
>
> 2. Refuse to start, assuming administrator will fix the problem
> and start us normally.
>
> Option (1) probably better in situations where you have
> improperly configured system without any reliability implemented
> that have to start unattended at any cost and do at least
> something.
>
> But it's not really wise to do (1) in a normal situation. It would
> basically start the service in a broken and almost undetectable
> state. Consider that it's part of a big cluster: a new node comes up
> and seems to work, but for some requests it returns errors for no
> apparent reason. It's an administrative nightmare.
>
> On the other hand, during reconfiguration, configuration testing,
> binary upgrades and other attended operations the only sensible
> thing to do is certainly (2). You wrote a hostname in the config
> that can't be resolved - it's simply a configuration error.
>
> Note well: there is quite a different mode of proxy_pass
> operation, proxy_pass with variables, which may use nginx's
> internal async resolver. In this mode nginx won't try to
> resolve hostnames during configuration parsing, and nginx will
> start perfectly well even when DNS isn't available. But this
>
> a) requires additional configuration (you have to configure the IP
> of your DNS server via the resolver directive);
>
> b) is much more resource consuming;
>
> c) relies on the internal nginx resolver, which is known to have
> problems, at least in the stable branch.
>
> Therefore I can't recommend using it in production.
>
>> The solution we are considering is a hosts file that would always point to a static IP for the content server, but that would be a bit of a management problem as we are deploying in several different locations with different networks.
>
> I don't really understand why you don't just impose the correct
> prerequisites before starting nginx. It's not really hard to wait
> until the network comes up.
>
> Maxim Dounin
>
>