OK. My nginx IP is 172.20.19.18. I needed to add 'set_real_ip_from 172.16.0.0/12', and now $remote_addr is replaced with the address from the PROXY protocol header. Thanks a lot.
by fengx - Nginx Mailing List - English
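A minimal sketch of the resulting setup, assuming the proxy reaches nginx from somewhere inside 172.16.0.0/12 and the listener is the port 8080 one mentioned later in the thread; everything else is illustrative:

    http {
        server {
            # Accept the PROXY protocol header on incoming connections
            listen 8080 proxy_protocol;

            # Trust the PROXY protocol address only when the connection itself
            # comes from this range (it must cover the proxy's own address)
            set_real_ip_from 172.16.0.0/12;
            real_ip_header   proxy_protocol;
        }
    }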
The config is rather simple, as follows. My test version is 1.7.2, a bit old; I can't upgrade to the latest one in our production for now. Anyway, I think it should work in 1.7.2, because the documentation says proxy_protocol was introduced in 1.5.12. http { log_format combined '$proxy_protocol_addr - $remote_addr - $remote_user [$time_local] ' '"$request" $s
by fengx - Nginx Mailing List - English
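The quoted log_format is cut off in the message above; a complete definition along the same lines might look like this. The name combined_pp and the fields after $request are my assumption (a custom name is used because the predefined name combined cannot be redefined):

    http {
        log_format combined_pp '$proxy_protocol_addr - $remote_addr - $remote_user [$time_local] '
                               '"$request" $status $body_bytes_sent '
                               '"$http_referer" "$http_user_agent"';

        access_log /var/log/nginx/access.log combined_pp;
    }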
I have the setting as follows: real_ip_header proxy_protocol; real_ip_recursive on; set_real_ip_from 192.168.1.0/24; For example, when I send a request to nginx from 10.0.0.1, $proxy_protocol_addr prints 10.0.0.1, which is the original client, but $remote_addr prints 192.168.1.1, which is our proxy.
by fengx - Nginx Mailing List - English
Hello, I have enabled the PROXY protocol with 'listen 8080 proxy_protocol' and can get the right client address from the $proxy_protocol_addr variable. I also set 'real_ip_header proxy_protocol', but it doesn't change the $remote_addr variable. The documentation at http://nginx.org says 'The proxy_protocol parameter (1.5.12) changes the client address to the one from the PROXY protocol header.'
by fengx - Nginx Mailing List - English
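For context, a minimal sketch of the part that is already working, the listener with the proxy_protocol parameter; the port is the one from the message, the location block is just for illustration:

    server {
        # Parse the PROXY protocol header sent by the load balancer
        listen 8080 proxy_protocol;

        location / {
            # $proxy_protocol_addr holds the original client address;
            # $remote_addr stays the proxy's address until the realip
            # module is configured with a matching set_real_ip_from
            return 200 "client=$proxy_protocol_addr remote=$remote_addr";
        }
    }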
Hello, CJ Ess. In both cases, the access log is disabled and the error log is enabled at level ERROR. However, there are only very few errors in either case, so I don't think the logging matters here. Anyway, I will run another test with the error log disabled later. Hello, tokers. Yes, I will capture both off-CPU and on-CPU flame graphs and share them here later. Thanks.
by fengx - Nginx Mailing List - English
There should be no blocking operation. At least, we have the same codebase and the same sample data in the two test cases. In fact, our application is based on OpenResty with local Redis instances. On the 32-core server, we have 22 nginx workers and 8 local Redis instances (shards). The Lua code has rather simple business logic: it accepts HTTP requests and loads data from local Redis via
by fengx - Nginx Mailing List - English
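A rough sketch of what such a handler usually looks like in OpenResty; the location, port, key name, and timeouts below are assumptions for illustration, not the actual application code:

    location /lookup {
        content_by_lua_block {
            local redis = require "resty.redis"
            local red = redis:new()
            red:set_timeout(100)  -- 100 ms timeout for connect/read

            -- connect to one of the local Redis shards
            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.status = 500
                ngx.say("redis connect failed: ", err)
                return
            end

            local val = red:get(ngx.var.arg_key or "demo")
            red:set_keepalive(10000, 100)  -- put the connection back into the pool
            ngx.say(val)
        }
    }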
We have CentOS 7 with kernel 3.10.0 in production. As is known, SO_REUSEPORT was introduced in kernel 3.9. I also ran the test on my own laptop, Ubuntu 14, kernel 4.4, and got a similar result.
by fengx - Nginx Mailing List - English
Hello. The article https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/ shows that the reuseport feature, new in v1.9.1, can increase QPS by 2-3 times compared with accept_mutex on or off. But the result was disappointing when we ran the test in our production with v1.11.2.2. It didn't show any improvement; instead throughput dropped by 10%, from 42K QPS (with accept_mutex off) to 38K QPS (wi
by fengx - Nginx Mailing List - English
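For reference, the two configurations being compared amount to the following alternatives; a minimal sketch assuming a single HTTP listener on port 80:

    # Baseline: no socket sharding, accept_mutex explicitly off
    events {
        accept_mutex off;
    }
    http {
        server {
            listen 80;
        }
    }

    # Alternative: reuseport, one listening socket per worker (nginx 1.9.1+)
    http {
        server {
            listen 80 reuseport;
        }
    }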
I read through the source code and found that the limit should be applied to each worker process. Right?

    static void
    ngx_worker_process_init(ngx_cycle_t *cycle, ngx_int_t worker)
    {
        // .....

        if (ccf->rlimit_nofile != NGX_CONF_UNSET) {
            rlmt.rlim_cur = (rlim_t) ccf->rlimit_nofile;
            rlmt.rlim_max = (rlim_t) ccf->rlimit_nofile;

            if (setrlimit(RLIMIT_NOFILE, &rlmt) == -1) {
                ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno,
                              "setrlimit(RLIMIT_NOFILE, %i) failed",
                              ccf->rlimit_nofile);
            }
        }
        // .....
    }

by fengx - Nginx Mailing List - English
Hello, I'm confused whether the worker_rlimit_nofile directive applies to the total across all worker processes or to a single worker process. As I know, worker_connections is per worker process. Let's say I have two worker processes and worker_connections 512; should I then set worker_rlimit_nofile to 512 or 1024? Thanks, Xiaofeng
by fengx - Nginx Mailing List - English
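A sketch of the configuration the question describes; the worker_rlimit_nofile value here is an assumption chosen with some headroom, since a worker also opens descriptors other than client connections (log files, upstream sockets, and so on):

    worker_processes 2;

    # Applied by each worker via setrlimit(RLIMIT_NOFILE, ...),
    # so it is a per-process limit, not a total across workers
    worker_rlimit_nofile 1024;

    events {
        # Maximum simultaneous connections per worker process
        worker_connections 512;
    }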
Nice, it's clear. Thanks.
by fengx - Nginx Mailing List - English
Hi. As is known, the keepalive directive activates the connection cache for upstream servers. I know the connection pool is per worker process, but I'm confused whether the connection number applies to each upstream server or is shared across all servers. It's documented at http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive Thanks, Xiaofeng
by fengx - Nginx Mailing List - English
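A sketch of the directive in question, with assumed backend addresses; according to the linked documentation, the number sets the maximum number of idle keepalive connections preserved in the cache of each worker process:

    upstream backend {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;

        # Maximum number of idle keepalive connections kept in the
        # cache of each worker process
        keepalive 16;
    }

    server {
        location / {
            proxy_pass http://backend;
            # Needed so connections to the upstream can be kept alive
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }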