Hi everyone, I just wanted to let the developers know about some error flags I discovered in Nginx 1.0.14, with debug mode disabled. Socket leaks and pread:

    2012/03/24 23:29:52 10770#0: open socket #46 left in connection 5
    2012/03/24 23:29:52 10770#0: open socket #115 left in connection 54
    2012/03/24 23:29:52 10770#0: open socket #110 left in connection 107
    2012/03/24 23:29:52 10770#…

— by TECK - Nginx Mailing List - English
I'm trying to experiment with WebDAV, so I've set up a simple configuration for it:

    http {
        ...
        client_body_buffer_size 128k;
        client_body_temp_path /var/lib/nginx/client 1 2;
        ...
    }

    server {
        listen 192.168.1.2:80;
        server_name webdav.domain.com;
        access_log /var/log/nginx/localhost.access.log main;
        error_log /var/log/nginx/localhost.error.log error;
        root /va…
Actually, in my case the fix was to add:

    fastcgi_param HTTPS on;
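For context, a minimal sketch of where such a directive typically lives; the listen address, server name, and upstream address here are assumptions, not from the original post:

```nginx
# Hypothetical SSL server block illustrating where the fix goes.
# With this parameter set, PHP sees $_SERVER['HTTPS'] = 'on', so
# applications behind SSL generate https:// links correctly.
server {
    listen 443 ssl;
    server_name www.example.com;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi.conf;
        fastcgi_param HTTPS on;   # the fix from the post
    }
}
```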
Hi, I'm trying to enable SSL for a specific directory only. In other words, the / directory is not encrypted, while /protected is.

    server {
        listen 192.168.1.2:80 default_server;
        server_name www.domain.com;
        root /var/www/html;

        location / {
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }
        location /protected/ {
            rewrite ^ https://www.domain.com$request_uri?…
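One common way to finish this pattern is a pair of server blocks, sketched here under the assumption that a second block terminates SSL; the certificate paths are placeholders:

```nginx
# HTTP server: everything except /protected/ is served in the clear;
# /protected/ is permanently redirected to HTTPS.
server {
    listen 192.168.1.2:80 default_server;
    server_name www.domain.com;
    root /var/www/html;

    location /protected/ {
        return 301 https://www.domain.com$request_uri;
    }
}

# HTTPS server: answers only for the encrypted content.
server {
    listen 192.168.1.2:443 ssl;
    server_name www.domain.com;
    root /var/www/html;
    ssl_certificate     /etc/nginx/ssl/www.domain.com.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/www.domain.com.key;  # placeholder path
}
```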
Never mind, I was on the wrong server. Sorry about that.
Hi, I installed Nginx 1.0.9, but when I look at the phpinfo() details, it reports as version 1.0.4. Does anyone else have this issue?
Regards, Floren Munteanu
The fastcgi value is the name of my upstream. The idea is: once the location /somedir/file1.php is reached, everything in @cache should execute. That way, I don't repeat the same code several times. This works:

    location /somedir/file1.php {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fastcgi;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PA…
Hi all, I'm trying to call a location of this format:

    location = /somedir/file1.php {
        try_files @cache =404;
    }
    location = /anotherdir/file2.php {
        try_files @cache =404;
    }
    location @cache {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fastcgi;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_…
Hi, I constantly get 2 errors in the nginx logs:

* stalled cache updating, error:0 while closing request
* upstream timed out (110: Connection timed out) while reading response header from upstream

I use fastcgi as upstream, with next_upstream set to error only and send_timeout at 60s. Could you kindly help me understand when these 2 errors occur? Thank you.
Hi, I'm running a very large site with Nginx 1.0.3 and PHP-FPM 5.3.6. I noticed a weird behavior on the FPM side: sometimes a worker decides to use 100% CPU for a few seconds before it dies. That makes the load go high and the site becomes unresponsive, even though the Nginx cache and APC are used. Changing the worker mode to static did not do anything. Do you know if there are any open bugs or patch…
— by TECK - Php-fpm Mailing List - English
Thank you Igor, I will use the SCRIPT_FILENAME setup.
Hi, I'm having a bit of an issue with the rewrite scheme:

    server {
        ...
        location / {
            try_files $uri $uri/ @data;
        }
        location @data {
            rewrite ^ /data.php$is_args;
            internal;
        }
        ...
    }

When I access this request: http://domain.com/information/feedback/?order=desc&sort=date I get an internal redirection cycle:

    *1 rewrite or internal redirection cycle while process…
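A likely cause is that the rewrite in @data restarts location matching, and if no location catches /data.php, the request falls back into "location /", whose try_files sends it to @data again, looping until nginx aborts. A sketch of one common way to break the cycle, assuming a FastCGI backend on 127.0.0.1:9000:

```nginx
location / {
    try_files $uri $uri/ @data;
}
location @data {
    # "last" re-runs location matching once; the \.php$ location below
    # must exist to catch /data.php, otherwise the request falls back
    # into "location /" and loops. Note that rewrite appends the
    # original query string by default, so $is_args is not needed.
    rewrite ^ /data.php last;
}
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi.conf;
}
```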
What I'm actually trying to do is force Nginx to show one IP to the MySQL server, no matter which server the user is on.
Hi all, I have set a fastcgi upstream in the Nginx configuration:

    upstream fastcgi {
        ip_hash;
        server 192.168.0.2:9000;
        server 192.168.0.3:9000;
        server 192.168.0.4:9000;
    }

Everything works perfectly, except when I try to connect to MySQL using PHP:

    $server = '192.168.0.40';
    $username = 'tester';
    $password = 'password';
    mysql_connect($server, $username, $password);
Thanks Igor, you are right. It was a network issue.
OK, I finally got some error messages in the logs:

    2011/04/26 13:44:39 5727#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 206.53.59.32, server: sandbox.domain.com, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://192.168.0.10:9000", host: "sandbox.domain.com"
    2011/04/26 13:44:57 5727#0: *1 connect() failed (113: N…
Hi all, I noticed some weird errors today when I tried to refresh a basic PHP page (the phpinfo() function). The page loads fine when I click refresh several times, then all of a sudden I get a dead page: no 404, no Nginx error, nothing. It looks like someone disconnected the network cable for a fraction of a second. I checked both the PHP and nginx logs; nothing is there. I even enabled the debug mod…
OK, some updates on the testing. Using the configuration listed below does allow me to display index.php, but not index.html:

    http {
        ...
        upstream backend {
            server 192.168.0.2:8000;
            server 192.168.0.3:8000;
            server 192.168.0.4:8000;
        }
        upstream fastcgi {
            server 192.168.0.2:9000;
            server 192.168.0.3:9000;
            server 192.168.0.4:9000;
        }
        ...
    }

    server {
        listen…
Hi all, I was looking into the fastcgi_cache settings in Nginx. Is it required to pass the X-Accel-Expires header in order for Nginx to cache the data, or do I only have to set the fastcgi_no_cache and fastcgi_cache_bypass values?

    fastcgi_pass_header Set-Cookie;
    fastcgi_pass_header X-Accel-Expires;   <-- not needed, IMO
    fastcgi_no_cache $cookie_cache;
    fastcgi_cache_bypass $cookie_cache;

T…
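For what it's worth, a minimal caching sketch (the zone name, cache path, key, and timings are assumptions): caching works once fastcgi_cache and fastcgi_cache_valid are set; an X-Accel-Expires header from the backend merely overrides the configured validity, and fastcgi_pass_header only controls whether the header is forwarded to the client, not whether nginx caches.

```nginx
# Assumed cache zone definition (goes in the http {} context).
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:16m;

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi.conf;

    fastcgi_cache phpcache;
    fastcgi_cache_key $scheme$host$request_uri;
    # Sufficient on its own for caching -- no X-Accel-Expires required:
    fastcgi_cache_valid 200 10m;

    # Skip the cache when the "cache" cookie is present, as in the post.
    fastcgi_no_cache $cookie_cache;
    fastcgi_cache_bypass $cookie_cache;
}
```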
I think I understand now. This configuration will require that I have Nginx installed on all the servers listed below:

    upstream proxy {
        server 1.localserverip:8000;
        server 2.localserverip:8000;
        server 3.localserverip:8000;
        server 4.localserverip:8000;
    }

when in real life, I will have Nginx installed on 1.localserverip only. Presuming that I plan to add another balancer to the equation:…
Thanks Francis. My goal is to install Nginx only on the load balancer. I provided the graphical picture for sanity reasons, in case the text model wasn't clear. I will not use dual load balancers, like in the picture; at least not for now. Presuming that I want to add more servers designated for nginx load balancing in the future, then I will add the proxy configuration. What will happe…
Thanks Igor. With your suggested configuration, can I use try_files as listed below?

    location / {
        root /var/www/html;
        try_files $uri $uri/ /index.php;
    }
    location /forum/ {
        try_files $uri $uri/ /forum/data.php$args;
    }
    location ~ \.php$ {
        fastcgi_pass php;
        include fastcgi.conf;
    }
Thanks for the reply, Francis. Let me detail the scenario a bit more, so you can see what I'm trying to do.

    [ 1.localserverip (main load balancer, with nginx and php-fpm installed) ]
      |
      +-- [ 2.localserverip (node with php-fpm installed) ]
      |
      +-- [ 3.localserverip (node with php-fpm installed) ]
      |
      +-- [ 4.localserverip (node with php-fpm installed) ]

A better graphical example can be view…
In theory, all I have to do is this:

    location / {
        proxy_pass http://proxy;
    }
    location ~ \.php$ {
        fastcgi_pass php;
        include fastcgi.conf;
    }

However, I have a script that is processed like this on a normal single-server setup:

    location /forum/ {
        try_files $uri $uri/ /forum/data.php$args;
    }

How would I make try_files work with the load balancer scheme?
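One point worth noting: try_files only tests files on the balancer's own disk. A sketch of a common pattern, under the assumption that the balancer keeps a local mirror of the document root ("proxy" and "php" are the upstreams from the thread):

```nginx
# Assumed local mirror of the shared document root.
root /var/www/html;

location /forum/ {
    # Serve the file locally if it exists; otherwise hand the request
    # to the PHP front controller via the \.php$ location below.
    try_files $uri $uri/ /forum/data.php?$args;
}
location ~ \.php$ {
    fastcgi_pass php;        # FastCGI-balanced across the php-fpm nodes
    include fastcgi.conf;
}
location / {
    # Everything else falls through to the proxied backends.
    try_files $uri $uri/ @backend;
}
location @backend {
    proxy_pass http://proxy;
}
```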
Thanks Igor. The goal is to have Nginx do the load balancing for multiple servers that serve the same static content and PHP files. Basically, I install Nginx on only one server, which will be the load balancer, and PHP on all 4 servers, which will contain the exact same thing (static files and PHP files):

    upstream proxy {
        server 1.localserverip:8000;
        server 2.localserverip:8000;
        server 3.local…
Igor? Could anyone help me with this configuration? Thanks
Hi, I plan to do a basic load balancer setup and want to understand the differences between fastcgi_pass and proxy_pass.

    upstream proxy {
        server 1.localserverip:8000;
        server 2.localserverip:8000;
        server 3.localserverip:8000;
        server 4.localserverip:8000;
    }
    upstream php {
        server 1.localserverip:9000;
        server 2.localserverip:9000;
        server 3.localserverip…
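The distinction, sketched briefly (this summary is not from the original thread): proxy_pass forwards the request over plain HTTP, so the :8000 backends must be web servers, while fastcgi_pass speaks the FastCGI protocol directly, so the :9000 backends are php-fpm pools.

```nginx
location / {
    proxy_pass http://proxy;    # HTTP to the web servers on :8000
}
location ~ \.php$ {
    fastcgi_pass php;           # FastCGI to the php-fpm pools on :9000
    include fastcgi.conf;
}
```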
Thank you, Igor.
Hi all, I need your suggestions to create a rewrite rule in nginx that will redirect the user from an old dir to the new one, i.e. http://domain.com/olddir/some/dir/ to http://domain.com/newdir/some/dir/

    location /newdir/ {
        try_files $uri $uri/ /newdir/index.php?$uri&$args;
    }
    location /olddir/ {
        rewrite ^ http://domain.com/newdir$request_uri? permanent;
    }

The above rule does not w…
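The rule likely misbehaves because $request_uri still contains the full original URI, /olddir/ prefix included, so the redirect lands on /newdir/olddir/some/dir/. One sketch of a fix is to capture only the part after /olddir/:

```nginx
location /olddir/ {
    # $1 is everything after /olddir/; the original query string is
    # appended automatically by rewrite.
    rewrite ^/olddir/(.*)$ http://domain.com/newdir/$1 permanent;
}
```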
Hi all, Right now I use this type of configuration:

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi.conf;
    }

My goal is to eliminate the regex on PHP file names:

    location…
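One way this is often done, sketched on the assumption that only a single known front controller needs to reach PHP: an exact-match location, which nginx checks before any regex locations.

```nginx
# "location =" is an exact match, so no regex is evaluated for it.
location = /index.php {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi.conf;
}
location / {
    try_files $uri $uri/ /index.php?q=$uri&$args;
}
```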