So, update: it's not internal vs. external. It appears that if I have two NGINX instances with the same SSL profiles for server and cert, and they call between them, a client cert is not sent. But to any other target (even NGINX with a different profile), it does get sent. Why don't they send it between themselves? Help!
by bdmesh - Other discussion
We're having problems getting NGINX to consistently send client certs. We have this scenario: Caller -> NGINX1 -> NGINX2 -> Origin. In this scenario, NGINX1 is to pass a client cert to NGINX2. For testing, NGINX1 and NGINX2 were actually two different location blocks on the same local box (running in an Alpine Docker container), and the proxy target in the first location block was …
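For the scenario described above, a minimal sketch of the NGINX1 side might look like the following. All paths and hostnames are placeholders (assumptions, not taken from the original post); the key directives are `proxy_ssl_certificate` / `proxy_ssl_certificate_key`, which make nginx present a client cert when it connects upstream.

```nginx
# Hypothetical NGINX1 config: present a client cert when proxying to NGINX2.
# All file paths and hostnames below are placeholders.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/nginx1-server.crt;
    ssl_certificate_key /etc/nginx/certs/nginx1-server.key;

    location / {
        proxy_pass https://nginx2.internal/;

        # Client cert NGINX1 presents during the upstream TLS handshake
        proxy_ssl_certificate     /etc/nginx/certs/nginx1-client.crt;
        proxy_ssl_certificate_key /etc/nginx/certs/nginx1-client.key;

        # Verify NGINX2's server cert against a trusted CA
        proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt;
        proxy_ssl_verify on;
    }
}
```

On the NGINX2 side, `ssl_verify_client on;` plus `ssl_client_certificate` pointing at the issuing CA would be what actually demands the cert.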
Yeah, we've thought of that - logs on the backend aren't showing the calls come in until the delay is finished. We have other mirror instances of nginx (with the same configs) that succeed the whole time. We also have test scripts that curl the same backend service via the "working" nginx proxy, the "flaky" nginx proxy, and the backend service directly. Those loop m…
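A sketch of the kind of comparison loop described above, assuming hypothetical hostnames (the real targets aren't in the post). It records the HTTP status and total request time for each path so the working proxy, the flaky proxy, and the direct call can be compared side by side:

```shell
#!/bin/sh
# Hypothetical comparison loop; all hostnames below are placeholders.
for i in $(seq 1 10); do
  for target in https://working-proxy/health \
                https://flaky-proxy/health \
                https://backend:3000/health; do
    printf '%s ' "$target"
    # -w prints status code and total time even when the request fails
    curl -sk -o /dev/null -w '%{http_code} %{time_total}s\n' "$target" || true
  done
  sleep 1
done
```

Any run where the direct call and the "working" proxy stay fast while only the "flaky" proxy shows the delay points the finger at that one instance rather than the backend.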
Thanks - I'll try to get a debug version up to check things out... the unfortunate part is that if I touch one of the existing repros, it immediately (temporarily) resolves. I made a change to my config to add another mapping to a test service, and just doing the "nginx -s reload" made the repro go away. I've now reset all repros but one and want to leave it alone until the others star…
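If a debug build does get stood up, a minimal sketch of the logging config might be (paths and the client IP are placeholders; this requires an nginx binary compiled with `--with-debug`, which `nginx -V` will show):

```nginx
# Hypothetical debug-logging config; paths and IP are placeholders.
error_log /var/log/nginx/error.log debug;

events {
    # Optionally restrict debug output to a single test client,
    # so the untouched repro instance isn't flooded with noise
    debug_connection 10.0.0.5;
}
```

Limiting debug output to one client keeps the log small enough to diff a working call against a delayed one.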
Thanks ittp2012. How do you mean, exactly? Would a DNS issue cause it to consistently oscillate back and forth between working and not working with each call? The speed/timing of calls doesn't seem to have any bearing on the back-and-forth nature of the response. So are you suggesting that nginx doing the DNS lookup somehow affects the results coming back? What would I look for…
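One DNS angle worth noting: nginx normally resolves a `proxy_pass` hostname once at config load and caches the IP, which matters behind an ELB whose addresses rotate. A minimal sketch of forcing periodic re-resolution (resolver IP and hostname are placeholders, not from the post):

```nginx
# Hypothetical sketch: make nginx re-resolve the upstream name at runtime.
# The resolver address and hostname are placeholders.
resolver 10.0.0.2 valid=30s;

server {
    listen 80;
    location / {
        # Using a variable in proxy_pass makes nginx resolve the name
        # via the resolver (honoring valid=) instead of once at startup
        set $backend "https://backend.example.com";
        proxy_pass $backend;
    }
}
```

If some cached ELB IPs are stale and some are live, calls can alternate between working and failing, which would fit the oscillation described.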
Hi. We're using NGINX as a reverse proxy and are having difficulty getting consistent throughput. The service we have stood up behind the proxy is a NodeJS service running on CoreOS on an AWS EC2 instance, sitting behind an ELB. For the proxy, we're using NGINX as part of KONG running in a CoreOS Docker container. The version is openresty/1.9.3.1. We're using a simple proxy_pass to route …
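For throughput through an ELB, one common tweak is upstream keepalive, so nginx reuses connections instead of opening a new one per request. A minimal sketch, with the ELB hostname and pool size as illustrative assumptions:

```nginx
# Hypothetical sketch: upstream keepalive; hostname and counts are placeholders.
upstream node_app {
    server elb-1234.us-east-1.elb.amazonaws.com:443;
    keepalive 32;  # idle keepalive connections cached per worker
}

server {
    listen 80;
    location / {
        proxy_pass https://node_app;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear "close" so connections are reused
    }
}
```

Without `proxy_http_version 1.1` and the cleared `Connection` header, the `keepalive` directive has no effect, which is an easy configuration to miss.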