Hi, I am looking for a way to use nginx to inspect a response body and then return the response to the client's GET request based on the result of that inspection. I have investigated the sub filter module (http://nginx.org/en/docs/http/ngx_http_sub_module.html), but the pattern it implements cannot be reused for my purpose. I also investigated the example module at https://www.nginx.com/resource…
by hkahlouche - How to...
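One commonly cited way to inspect a response body in nginx is OpenResty's body filter hook rather than a stock module. A minimal sketch, assuming lua-nginx-module is available; the location, upstream name, and "ERROR" marker below are hypothetical:

    location /inspect {
        proxy_pass http://backend;   # hypothetical upstream

        # clear Content-Length up front, since dropping body chunks
        # below would otherwise make the response length wrong
        header_filter_by_lua_block {
            ngx.header.content_length = nil
        }

        # called once per response-body chunk; ngx.arg[1] is the chunk,
        # ngx.arg[2] is the EOF flag
        body_filter_by_lua_block {
            local chunk = ngx.arg[1]
            if chunk and chunk:find("ERROR", 1, true) then
                -- suppress chunks containing the marker so they are
                -- never forwarded to the client
                ngx.arg[1] = nil
            end
        }
    }

Note that the status line and headers are already sent by the time the body filter runs, so a decision that must change the response code would have to buffer the whole body first.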
To make the question simpler: Is it possible to set a variable per request before the NGINX rewrite phase and use that variable in the access logs? Is this IMPOSSIBLE in NGINX? Setting a variable requires nginx to go through the rewrite phase. If the request processing stops before that phase, this variable will never be set. This happens, for instance, when you send a request without end-o…
by hkahlouche - How to...
Is there a way to set user-defined variables and use them in access logs before the NGINX rewrite phase? In some error scenarios, like the one described below, we end up in the access log phase before any user variable is set. The following is the access log format I have:

    log_format main '$remote_addr $server_addr $http_host $custom_destination_addr [$custom_request…

by hkahlouche - How to...
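One approach sometimes suggested for this: variables defined by the map directive are evaluated lazily, at the moment they are first referenced (here, at log time), so they do not depend on the rewrite phase ever running. A minimal sketch, where $http_x_destination is a hypothetical request header standing in for whatever the real source value is:

    # declared at http{} level; evaluated only when
    # $custom_destination_addr is read, i.e. while writing the log line,
    # even if the request aborted before the rewrite phase
    map $http_x_destination $custom_destination_addr {
        default  "-";                  # fallback when the header is absent
        ~.+      $http_x_destination;  # pass the header value through
    }

    log_format main '$remote_addr $server_addr $http_host '
                    '$custom_destination_addr';

    access_log /var/log/nginx/access.log main;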
Is there a way to get the file name and line number when the nginx configuration test (nginx -t) fails? I am getting the following error, but it doesn't say which config file or line number the error comes from:

    nginx: ENGINE_load_private_key("385.1") failed (engine routines:ENGINE_load_private_key:failed loading private key)

I have a file for each server b…
by hkahlouche - How to...
I have an NGINX configuration with both HTTP and HTTPS traffic server blocks. Below is the HTTPS server block configuration snippet that is causing the problem.

    server {
        listen 10.1.1.5:443 default ssl;
        listen 10.1.1.6:8080;
        server_name myservice.traffic.dns.tmp;
        ssl_certificate /etc/config/ssl/myservice.traffic.cert.pem;
        ssl_certificat…

by hkahlouche - How to...
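For reference, the truncated directive is presumably ssl_certificate_key, which must accompany ssl_certificate. A sketch of the usual shape of such a mixed plain/SSL block, with a hypothetical key path:

    server {
        # one listen socket with SSL, one without; "default" marks this
        # as the default server for 10.1.1.5:443
        listen 10.1.1.5:443 default ssl;
        listen 10.1.1.6:8080;
        server_name myservice.traffic.dns.tmp;

        ssl_certificate     /etc/config/ssl/myservice.traffic.cert.pem;
        # hypothetical key path; must pair with the certificate above
        ssl_certificate_key /etc/config/ssl/myservice.traffic.key.pem;
    }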
>> AFAIK, 2 different requests are served separately, meaning you can have
>> some requests sent when some other is being responded to.
>>
>> If you talk about the same request, then it is only sent to the next
>> upstream server when there is an 'unsuccessful attempt' at communicating
>> with the current upstream server. What defines this is told by the…

by hkahlouche - Nginx Mailing List - English
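The truncated reference is most likely to the proxy_next_upstream directive, which defines which failures count as an unsuccessful attempt. A minimal sketch, with hypothetical backend addresses:

    upstream backend {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;   # tried only after a failed attempt
    }

    location / {
        proxy_pass http://backend;
        # these conditions count as an unsuccessful attempt and make
        # nginx retry the request on the next upstream server
        proxy_next_upstream error timeout http_502 http_503;
    }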
Hello, I would like to go back to this item:

>> Yes, nginx will process requests one-by-one and won't pipeline
>> requests to upstream.

Can you please confirm that no new request is sent to the upstream before the entire response to the ongoing request has been received (i.e. the ongoing request is finished)? In other words, is it possible that the upstream module sends the next request to upstream…
by hkahlouche - Nginx Mailing List - English
Hello,

> I'm talking about upstream server, not the "server" directive in
> the "upstream" block. Assuming you are using nginx as an upstream
> server you should use keepalive_requests.

We are not using nginx on the upstream side (we have a legacy server), which is why I was looking for keepalive_requests on the upstream side, or something to better control…
by hkahlouche - Nginx Mailing List - English
> Yes, nginx will process requests one-by-one and won't pipeline
> requests to upstream.

So you confirm that the current implementation of nginx doesn't pipeline towards upstream, and that there is no way to enable that functionality?

> No, it's not something currently implemented. It's not considered
> needed as upstream servers can be easily configured to do this
> instead…

by hkahlouche - Nginx Mailing List - English
Thanks for your prompt response. Let's say a client is sending pipelined requests on the client side and nginx has multiple upstream keepalive connections. Are you saying that NGINX will NOT pipeline on the upstream side even though it is receiving pipelined requests on the client side? Is there a way to close an upstream keepalive connection after a threshold of requests is reached ("max requests"), same as ke…
by hkahlouche - Nginx Mailing List - English
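For what it's worth, later nginx releases added exactly this knob on the upstream side: the keepalive_requests directive is accepted inside an upstream block since nginx 1.15.3 (well after this thread). A minimal sketch:

    upstream http_backend {
        server 127.0.0.1:8080;
        keepalive 10;            # cache up to 10 idle connections
        # close an upstream connection after it has carried 100
        # requests (allowed in upstream{} since nginx 1.15.3)
        keepalive_requests 100;
    }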
Hi, does anyone know a way to disable HTTP request pipelining on the same upstream backend connection? Let's say we have the upstream backend below, configured with keepalive and no connection close:

    upstream http_backend {
        server 127.0.0.1:8080;
        keepalive 10;
    }

    server {
        ...
        location /http/ {
            proxy_pass http://http_backend;
            proxy_http_ve…

by hkahlouche - How to...
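The truncated lines are presumably the standard keepalive boilerplate from the keepalive directive's documentation; a sketch of the full location (and, as noted in the mailing-list replies above, nginx reuses such a connection serially and does not pipeline on it):

    location /http/ {
        proxy_pass http://http_backend;
        # keepalive to the upstream requires HTTP/1.1 and an empty
        # Connection header, per the keepalive directive docs
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }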
I am using NGINX 1.9.13 with lua-resty-http. The CPU load is too high when making HTTP requests via the lua-resty-http module (https://github.com/liseen/lua-resty-http):

    PID  USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
    8446 root 20 0  209m 142m 1748 R 99.1 3.7  0:31.27 nginx

The bigger the request body, the higher the CPU load. In this case the bod…
by hkahlouche - How to...
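For context, a minimal call through lua-resty-http looks roughly like the sketch below. This assumes the widely used resty.http API (http.new() / request_uri); the liseen fork linked above may differ in details, so treat the names here as assumptions. The location and backend address are hypothetical.

    location /call {
        content_by_lua_block {
            local http = require "resty.http"
            local httpc = http.new()

            -- the client body must be read explicitly before it can be
            -- forwarded (get_body_data returns nil if the body was
            -- buffered to a temp file)
            ngx.req.read_body()
            local body = ngx.req.get_body_data()

            -- request_uri reads the entire upstream response into
            -- res.body as one Lua string, which is where large bodies
            -- can get expensive
            local res, err = httpc:request_uri("http://127.0.0.1:8080/", {
                method = "POST",
                body   = body,
            })
            if not res then
                ngx.log(ngx.ERR, "request failed: ", err)
                return ngx.exit(502)
            end
            ngx.say(res.body)
        }
    }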
Hi, I am trying to get access to a custom response header, save it into a variable, and get rid of it so that it is not proxied back to the client. That variable is then used in the access logs. Unfortunately, the following doesn't seem to work:

    server {
        listen 142.133.151.129:8090 default;
        ## Initial values for calculated access log variables
        set $cache_status "…

by hkahlouche - How to...
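A common way to get this effect without set is to skip the intermediate variable entirely: log the upstream header through the built-in $upstream_http_* variables and strip it from the client response with proxy_hide_header. A minimal sketch, assuming a hypothetical X-Cache-Status response header and backend:

    log_format cache '$remote_addr "$request" $status '
                     '$upstream_http_x_cache_status';

    server {
        listen 142.133.151.129:8090 default;
        access_log /var/log/nginx/access.log cache;

        location / {
            proxy_pass http://backend;        # hypothetical upstream
            # keep the header out of the client response; it remains
            # readable in nginx as $upstream_http_x_cache_status
            proxy_hide_header X-Cache-Status;
        }
    }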