Show all posts by user
Introduce yourselves
Page 1 of 1
Results 1 - 25 of 25
Is there a way to configure nginx to verify only the root of the proxied HTTPS (upstream) server's certificate and to skip the host name (or domain name) verification?
As I understand it, the proxy_ssl_verify directive can be used to enable or disable verification of the proxied HTTPS server's certificate entirely, but not selectively. Is there any directive to only disable the host name
by
shivramg94
-
Nginx Mailing List - English
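As far as the documented proxy_ssl_* directives go, there is no single directive that skips only the host name check while keeping chain verification. What can be done is to keep proxy_ssl_verify on and point proxy_ssl_name at a name the upstream certificate actually contains. A minimal sketch (paths and host names are hypothetical):

```nginx
location / {
    proxy_pass                    https://backend;
    proxy_ssl_verify              on;    # verify the upstream certificate chain
    proxy_ssl_verify_depth        2;
    proxy_ssl_trusted_certificate /etc/nginx/trusted_ca.pem;
    # proxy_ssl_name sets the name checked against the certificate;
    # choosing a name the certificate really carries avoids a mismatch,
    # but nothing here disables the name check by itself.
    proxy_ssl_name                backend.example.com;
}
```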
Hi,
According to the documentation (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_verify), the "proxy_ssl_verify" directive is used to enable or disable verification of the proxied HTTPS server's certificate, but it doesn't describe which validations (host name verification, certificate expiry, etc.) are performed.
Could someone list out
by
shivramg94
-
Nginx Mailing List - English
Thanks for the pointers.
For backend/upstream servers, do these translate to the two directives below?
For read :
proxy_read_timeout
For send:
proxy_send_timeout
Please correct me if I am wrong
by
shivramg94
-
Nginx Mailing List - English
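That mapping looks right for the proxied side. A sketch with hypothetical values; note that both directives bound the gap between two successive I/O operations on the upstream connection, not the duration of the whole transfer:

```nginx
location / {
    proxy_pass         http://backend;   # hypothetical upstream
    proxy_read_timeout 60s;   # between two successive reads from the upstream
    proxy_send_timeout 60s;   # between two successive writes to the upstream
}
```

On the client-facing side, the counterparts are client_body_timeout (reads) and send_timeout (writes).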
Hi,
Is there any directive available in nginx to set a timeout between two successive receive operations, or two successive send operations, during the HTTP request/response phase?
by
shivramg94
-
Nginx Mailing List - English
Hi,
We are trying to configure TCP load balancing with TLS termination. But when we try to access the URL, we see the error below in the nginx error and access logs.
Nginx Error Log:
2018/07/04 07:16:45 7944#0: *61 SSL_do_handshake() failed (SSL: error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy request) while SSL handshaking, client: XX.XXX.XX.XX, server: 0.0.0.0:443
by
shivramg94
-
Nginx Mailing List - English
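That particular SSL23_GET_CLIENT_HELLO "https proxy request" error usually indicates that the client is not actually speaking TLS to the port nginx terminates on (for example, it is sending a plain proxy-style request instead of a TLS handshake). One thing worth double-checking is that the stream listener carries the ssl parameter. A minimal TLS-termination sketch for the stream module, with hypothetical addresses and paths:

```nginx
stream {
    upstream tcp_backend {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {
        listen              443 ssl;              # 'ssl' must appear on the listen line
        ssl_certificate     /etc/nginx/cert.pem;
        ssl_certificate_key /etc/nginx/cert.key;
        proxy_pass          tcp_backend;
    }
}
```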
Hi,
I have multiple upstream servers configured in an upstream block in my nginx configuration.
upstream example2 {
    server example2.service.example.com:8001;
    server example1.service.example.com:8002;
}
server {
    listen 80;
    server_name example2.com;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remot
by
shivramg94
-
Nginx Mailing List - English
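The snippet above is cut off. A self-contained config of the same shape might look like the following; the proxy_pass line and the completed X-Real-IP value are illustrative guesses, not taken from the original post:

```nginx
upstream example2 {
    server example2.service.example.com:8001;
    server example1.service.example.com:8002;
}

server {
    listen      80;
    server_name example2.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;   # conventional value; assumed here
        proxy_pass       http://example2;                # proxies to the upstream block above
    }
}
```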
Hi,
We have been trying to upgrade the nginx binary on the fly by sending the USR2 signal to spawn a new set of master and worker processes with the new configuration. The question I have is: when this new master process is spawned, after issuing USR2 to the existing (old) master process, what would its parent process be? Will it be the init process (PID 1) or the old master process?
W
by
shivramg94
-
Nginx Mailing List - English
Just one quick question: does nginx check whether the upstream servers are reachable via the specified protocol during the reload process? If, say, the upstreams are not accepting SSL connections, will the reload fail?
by
shivramg94
-
Nginx Mailing List - English
I am trying to use nginx as a reverse proxy with upstream SSL. For this, I am using the directive below in the nginx configuration file:
proxy_pass https://<upstream_block_file_name>;
where "<upstream_block_file_name>" is another file which has the list of upstream servers.
upstream <upstream_block_file_name> {
server <IP_address_of_upstream_server>:&
by
shivramg94
-
Nginx Mailing List - English
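One detail worth noting when proxy_pass points at an https:// upstream: certificate verification is off unless explicitly enabled. A sketch with hypothetical names and paths:

```nginx
upstream ssl_backend {
    server backend1.example.com:443;
}

server {
    listen 80;

    location / {
        proxy_pass                    https://ssl_backend;
        # Off by default: without these, nginx accepts any upstream certificate.
        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/nginx/trusted_ca.pem;
        # SNI is also off by default for proxied connections:
        proxy_ssl_server_name         on;
    }
}
```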
Hi,
In one of our environments, we have never tried to stop nginx. We see that the nginx master and worker processes are running, but the pid file goes missing all of a sudden.
How can we explain that?
by
shivramg94
-
Nginx Mailing List - English
Hi,
In our environments we intermittently face an issue where the nginx.pid file goes missing, after which any attempt to reload nginx fails saying "no pid file exists". Are there any known scenarios in which the nginx.pid file goes missing?
Does an nginx reload have any effect on the pid file? Ideally it should not, should it?
Also, what we have noticed i
by
shivramg94
-
Nginx Mailing List - English
Earlier, it said the pid file didn't exist even though the master and worker processes were running.
2017/05/12 15:35:41 19042#0: signal process started
2017/05/12 15:35:41 19042#0: open() "/u01/data/logs/nginx.pid" failed (2: No such file or directory)
Can the above issue ( where the nginx.pid file went missing) and the communication break up between the master and the worke
by
shivramg94
-
Nginx Mailing List - English
At times, the error logs say
2017/05/15 11:37:01 22229#0: signal process started
2017/05/15 11:37:02 22030#0: sendmsg() failed (32: Broken pipe)
2017/05/15 11:37:02 22030#0: sendmsg() failed (32: Broken pipe)
2017/05/15 11:37:04 22030#0: sendmsg() failed (9: Bad file descriptor)
2017/05/15 11:37:04 22030#0: sendmsg() failed (32: Broken pipe)
2017/05/15 11:37:04 22030#0: sendmsg() fai
by
shivramg94
-
Nginx Mailing List - English
Hi Maxim,
This is what I could find in the error logs
2017/05/15 11:32:18 21499#0: signal process started
2017/05/15 11:32:19 22030#0: sendmsg() failed (88: Socket operation on non-socket)
2017/05/15 11:32:19 22030#0: sendmsg() failed (32: Broken pipe)
2017/05/15 11:32:19 22030#0: sendmsg() failed (88: Socket operation on non-socket)
2017/05/15 11:32:19 22030#0: sendmsg() failed (32
by
shivramg94
-
Nginx Mailing List - English
I am facing an issue where, after I issue a reload to the nginx binary, a few of the older worker processes do not die; they remain orphaned.
This is the process list before issuing a reload:
$ ps -ef | grep nginx
poduser 12540 22030 0 06:39 ? 00:00:00 nginx: worker process
poduser 12541 22030 0 06:39 ? 00:00:00 nginx: worker process
poduse
by
shivramg94
-
Nginx Mailing List - English
I have an upstream block as follows:
upstream sample {
server abc1.example.com down;
server abd2.example.com down;
}
Currently I get a 502 error. In this special case, where I receive a 502 and all upstream servers are down, I would like to receive a specific error page saying the service is temporarily unavailable.
How can I achieve that?
by
shivramg94
-
Nginx Mailing List - English
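One common approach is to map 502 (and, if desired, 503/504) to a static "temporarily unavailable" page with error_page. A sketch; the document root and page name are hypothetical:

```nginx
upstream sample {
    server abc1.example.com down;
    server abd2.example.com down;
}

server {
    listen 80;

    location / {
        proxy_pass http://sample;
    }

    # Serve a friendly page instead of the bare 502 when no upstream responds.
    error_page 502 503 504 /unavailable.html;

    location = /unavailable.html {
        root     /usr/share/nginx/html;
        internal;   # not reachable directly from outside
    }
}
```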
We have a persistent connection to nginx over which we are issuing HTTPS requests. When we do a reload, the persistent connections (on which requests have already been accepted) fail as soon as the reload is issued; those connections are dropped. Is this the expected behavior?
In the nginx documentation, it is mentioned that the older worker process would continue to run until the
by
shivramg94
-
Nginx Mailing List - English
Hi All,
When we issue a reload to the nginx binary (<binary_location> -s reload), what are the steps involved in the spawning of the new set of worker processes?
Is it something like: while the older worker processes are still running and serving in-flight requests, nginx spawns the newer worker processes and then brings down the older processes once they have served all the accepted reques
by
shivramg94
-
Nginx Mailing List - English
Thanks, Sergey, for your response.
I have one more question: if I have multiple upstream server host names in the upstream block, how can I specify, in the proxy_ssl_name directive, the specific upstream server host name to which the request is being proxied?
by
shivramg94
-
Nginx Mailing List - English
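As far as I know, there is no built-in variable that expands to the host name of the peer actually selected from the upstream group, so proxy_ssl_name cannot name each server individually. One workaround is a name that every backend certificate carries (a shared SAN or wildcard entry). A sketch under that assumption, with hypothetical names and paths:

```nginx
upstream backends {
    server backend1.example.com:443;
    server backend2.example.com:443;
}

server {
    listen 80;

    location / {
        proxy_pass                    https://backends;
        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/nginx/trusted_ca.pem;
        # Works only if every backend certificate carries this name
        # (e.g. a shared SAN or a wildcard entry):
        proxy_ssl_name                backends.example.com;
        proxy_ssl_server_name         on;
    }
}
```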
I am trying to implement HTTPS communication at every layer of the proxying path: from the client to the load balancer (nginx), and from nginx to the upstream server.
I am facing a problem when the request is proxied from nginx to the upstream server.
I am getting the following error in the nginx logs
2017/03/26 19:08:39 76753#0: *140 upstream SSL certificate do
by
shivramg94
-
Nginx Mailing List - English
I am trying to implement HTTPS protocol communication at every layer of a proxying path. My proxying path is from client to load balancer (nginx) and then from nginx to the upstream server.
I am facing a problem when the request is proxied from nginx to the upstream server.
I am getting the following error in the nginx logs
2017/03/26 19:08:39 76753#0: *140 upstream SSL certificate
by
shivramg94
-
New Member Introductions