Hmm, I understand that limitation. But an attacker or a bad application can hide the important information we need to identify the source of the problem. What about limiting the FastCGI stderr output to 1024 bytes and appending this info, also capped at 1024 bytes:

client: 127.0.0.1, server: example.com, upstream: "fastcgi://unix:/var/run/php-fpm-example.com.sock:", host: "127.0.0.1"

by philipp - Nginx Mailing List - English
We have error logs like this:

2016/06/14 12:47:45 21036#21036: *378143 FastCGI sent in stderr: "PHP message: PHP Notice: Undefined index: model_name in /data/example.com/module/SalesFloor/view/partial/flyout/product.phtml on line 20
PHP message: PHP Notice: Undefined index: model_name in /data/example.com/module/SalesFloor/view/partial/flyout/product.phtml on line 21
PHP message: PHP…

by philipp - Nginx Mailing List - English
Thanks for your help, removing the bypass solved this issue for me. This feature request would simplify such configurations: http://forum.nginx.org/read.php?2,258604

by philipp - Nginx Mailing List - English
Right now nginx OSS doesn't include an upstream health check. In order to compete with haproxy/varnish this would really help. Especially for HTTPS backends there is no appropriate third-party upstream health check module available.

by philipp - Ideas and Feature Requests
In order to solve this issue http://forum.nginx.org/read.php?2,255421,255438#msg-255438 two additional features would be cool:

proxy_cache_min_size
Syntax: proxy_cache_min_size number;
Default: proxy_cache_min_size 1;
Context: http, server, location
Sets the minimal size in bytes for a response to be cached.

proxy_cache_max_size
Syntax: proxy_cache_max_size number;
Defaul…

by philipp - Nginx Mailing List - English
Hi Maxim, should this solution work? http://syshero.org/post/49594172838/avoid-caching-0-byte-files-on-nginx

I have created a simple test setup like:

map $upstream_http_content_length $flag_cache_empty {
    default 0;
    0       1;
}

server {
    listen 127.0.0.1:80;
    server_name local;
    location /empty {
        return 200 "";
    }
    location /full {
…

by philipp - Nginx Mailing List - English
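For context, a minimal sketch of how the map from that post is typically wired into a caching location. The upstream name `backend` and cache zone `my_cache` are placeholders, not from the original post:

```nginx
# Flag responses whose upstream Content-Length is 0, then skip storing them.
map $upstream_http_content_length $flag_cache_empty {
    default 0;
    0       1;
}

server {
    listen 127.0.0.1:80;

    location / {
        proxy_pass http://backend;          # placeholder upstream
        proxy_cache my_cache;               # placeholder cache zone
        # do not write empty responses into the cache
        proxy_no_cache $flag_cache_empty;
    }
}
```

Note that combining this with proxy_cache_bypass would also skip serving such requests from the cache, which relates to the bypass issue discussed elsewhere in this thread.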
I use nginx 1.4.1 with gunzip on and gzip_vary on. This leads to a duplicate Vary header. gzip_vary should do nothing if the header is already present:

user@aladin:~$ curl -I -A test http://192.168.56.249/
HTTP/1.1 302 Found
Server: nginx
Date: Tue, 25 Jun 2013 06:45:12 GMT
Content-Type: text/html
Connection: keep-alive
Vary: Accept-Encoding
Location: index.htm
Vary: Accept-Encod…

by philipp - Nginx Mailing List - English
My log format looks like this:

log_format vcombined '$host $remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for" '
                     '$ssl_cipher $request_time $gzip_ratio '
                     '$upstream_a…

by philipp - Nginx Mailing List - English
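Since the post is cut off mid-variable, here is a self-contained log_format along the same lines; the variables after $gzip_ratio are assumptions for illustration, not a reconstruction of the original:

```nginx
# Hedged sketch: the last line is assumed, the original post is truncated there.
log_format vcombined '$host $remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for" '
                     '$ssl_cipher $request_time $gzip_ratio '
                     '$upstream_addr $upstream_response_time';  # assumed continuation

access_log /var/log/nginx/access.log vcombined;
```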
Is it possible to limit the number of upstreams asked? I have four upstreams defined and it makes no sense to ask all of them. If two of them time out or error, there is probably something wrong with the request, and asking another node doesn't help.

by philipp - Nginx Mailing List - English
Yes, the latest patch (58) works fine.

by philipp - Nginx Mailing List - English
You can also simply leave compression to the upstream server and then cache the gzipped content. To do that, I first normalize the encoding header so that I have at most three variants in the cache:

set $normal_encoding "";

# Normalize deflate encoding
if ($http_accept_encoding ~* deflate) {
    set $normal_encoding "deflate";
}

# Normalize gzip enco…

by philipp - German Forum
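The same normalization can be sketched with a map instead of an if-chain, which avoids per-request if evaluation; `backend` and `my_cache` are placeholder names, not from the post:

```nginx
# Normalize Accept-Encoding to at most three cache variants: "", deflate, gzip.
# With map, the first matching variant listed wins, so gzip takes precedence
# when a client offers both.
map $http_accept_encoding $normal_encoding {
    default   "";
    ~*gzip    "gzip";
    ~*deflate "deflate";
}

server {
    location / {
        proxy_set_header Accept-Encoding $normal_encoding;
        proxy_pass http://backend;    # placeholder upstream
        proxy_cache my_cache;         # placeholder cache zone
        # include the normalized encoding in the key so variants don't collide
        proxy_cache_key "$scheme$host$request_uri $normal_encoding";
    }
}
```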
I have patched the nginx sources with the latest SPDY patch:

/usr/src/nginx-1.3.11# patch -p1 < patch.spdy-55_1.3.11.txt

but building the package isn't possible anymore:

dpkg-buildpackage -rfakeroot -uc -b
...
gcc -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/ht…

by philipp - Nginx Mailing List - English
Thanks for your help, I guess I found the problem... I had two vhosts with OCSP, but only one host had a working trusted certificate.

by philipp - Nginx Mailing List - English
I have created a trust file both ways:

cat www.hellmi.de.pem > www.hellmi.de.trust
cat subca.pem >> www.hellmi.de.trust
cat ca.pem >> www.hellmi.de.trust

or

cat subca.pem > www.hellmi.de.trust
cat ca.pem >> www.hellmi.de.trust

and configured it as ssl_trusted_certificate; this did not help either. How do I create a trusted certificate for a StartCom CA? T…

by philipp - Nginx Mailing List - English
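For reference, a hedged sketch of how such a trust file is usually wired into an OCSP stapling setup (the certificate and key paths are assumptions; only the .trust filename comes from the post). The ssl_trusted_certificate file should contain the intermediate and root certificates, not the server certificate itself:

```nginx
server {
    listen 443 ssl;
    server_name www.hellmi.de;

    ssl_certificate         /etc/nginx/ssl/www.hellmi.de.pem;  # assumed path
    ssl_certificate_key     /etc/nginx/ssl/www.hellmi.de.key;  # assumed path

    ssl_stapling        on;
    ssl_stapling_verify on;
    # chain used to verify OCSP responses: intermediate + root, no leaf cert
    ssl_trusted_certificate /etc/nginx/ssl/www.hellmi.de.trust;

    # nginx needs a resolver to look up the OCSP responder hostname
    resolver 8.8.8.8;
}
```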
I tried nginx 1.3.10 with OCSP stapling... but I get this error:

2013/01/09 09:14:52 27663#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: ocsp.startssl.com

My config looks like this:

server {
    listen [::]:443 ssl spdy;
…

by philipp - Nginx Mailing List - English
This seems to work once the file is stored in the proxy_cache. Have you found a solution for this problem?

by philipp - How to...
Hello, we have a bunch of servers which are configured identically except for host-specific settings like the IP address in the listener, managed with chef/puppet. Nginx does not seem to read include/config files in the same order on each server. For example, we haven't defined a default vhost on each server... so nginx uses the first loaded file, which is exampleA.com on server 1 and exampleB.com on server 2…

by philipp - Nginx Mailing List - English
ok this seems to be the solution:

real_ip_header X-Forwarded-For;
set_real_ip_from 10.0.0.0/8;

by philipp - Ideas and Feature Requests
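A fuller sketch of how those two directives combine with the geoip module behind a load balancer; the GeoIP database path is an assumption, and 10.0.0.0/8 is assumed to be the address range the load balancers connect from:

```nginx
http {
    geoip_country /usr/share/GeoIP/GeoIP.dat;  # assumed database path

    # Restore the client address from X-Forwarded-For as set by the ELB,
    # so $geoip_country_code reflects the real client, not the balancer.
    real_ip_header   X-Forwarded-For;
    set_real_ip_from 10.0.0.0/8;

    server {
        listen 80;
        location / {
            # expose the looked-up country for demonstration purposes
            add_header X-Country $geoip_country_code;
        }
    }
}
```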
any news here? Is the latest patch stable?

by philipp - Nginx Mailing List - English
I would like to use the geoip module behind Amazon Elastic Load Balancing. In this case the client IP address is stored in the X-Forwarded-For header. Is it possible to use that header for the geoip lookup?

by philipp - Ideas and Feature Requests
nginx seems to open the error log first, without consulting the config file, which has defined another log file location. I would like to run multiple instances of nginx on the same server; each instance has its own user and its own error log. At present nginx startup reports an error:

nginx: could not open error log file: open() "/SRV/NGX/1.0.8/logs/error.log" failed…

by philipp - Ideas and Feature Requests
I have the same problem here. How can we fix it? Is this problem known to the developers? Caching in general is working, but without a working cache manager I am afraid of running out of disk space...

by philipp - Nginx Mailing List - English
@Jim thanks for your hint, I am already using this kind of solution. But I'd rather generalise my approach. Because my web project is in an SVN repository, I can cache all files if they have a revision param. I do not want to specify file types. Maybe this is a feature request? @jayi your idea looks good, but it won't be possible to cache 404 pages based on the response type... Because…

by philipp - How to...
Hi Jim, thanks for your fast response. I have tried your config example, but nginx (0.7.65) does not support proxy_cache_... options within if statements. :(

Reloading nginx configuration: : "proxy_cache_valid" directive is not allowed here in /etc/nginx/sites-enabled/www.example.com:117

My conf looks like this:

cat /etc/nginx/conf.d/proxy_cache.conf
proxy_cache_path /va…

by philipp - How to...
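Since proxy_cache_* directives are not allowed inside if blocks, one common workaround is to route cacheable requests to a dedicated internal location and put the cache directives there. A sketch under that assumption, with placeholder names (`backend`, the cache path, and the cache zone `static` are not from the thread):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:10m;

server {
    listen 80;
    server_name www.example.com;

    # Requests with a ?rev= argument are considered cacheable; an if with a
    # rewrite at server level is one of the safe uses of "if".
    if ($arg_rev) {
        rewrite ^ /cached$uri last;
    }

    location /cached/ {
        internal;
        # strip the /cached prefix before proxying
        rewrite ^/cached(.*)$ $1 break;
        proxy_pass http://backend;                  # placeholder upstream
        proxy_cache static;
        proxy_cache_key "$host$uri?rev=$arg_rev";   # revision is part of the key
        proxy_cache_valid 200 1d;
    }

    location / {
        proxy_pass http://backend;                  # no caching here
    }
}
```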
Hello, we have a webapp with static content like these files:

http://www.example.com/images/banner_top.jpg?rev=102
http://www.example.com/images/banner_bottom.jpg?rev=102
http://www.example.com/login.html?rev=102
http://www.example.com/logout.html?rev=113

this is our nginx config:

server {
    listen 80;
    server_name www.example.com;
    # Dynamic content…

by philipp - How to...