essence of the other two answers: http://dgtool.blogspot.de/2013/02/nginx-as-sticky-balancer-for-ha-using.html you might want to google "nginx sticky sessions" by mex - Nginx Mailing List - English
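a minimal sketch of stock-nginx sticky sessions via the built-in ip_hash directive (hostnames and ports are placeholders, not from the original thread):

```nginx
upstream backend {
    # ip_hash pins each client IP to the same backend server,
    # giving simple session stickiness without 3rd-party modules
    ip_hash;
    server app1.example.com:8080;
    server app2.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```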
let your app handle and deliver error pages by mex - Nginx Mailing List - English
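a sketch of what "let the app deliver its own error pages" can look like for a proxied backend (the upstream name is a placeholder):

```nginx
location / {
    proxy_pass http://backend;
    # off (the default) passes the backend's error responses
    # through untouched instead of substituting nginx's own pages
    proxy_intercept_errors off;
}
```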
did you try to -- turn it off and on again -- check it without the rewrite stuff in your apache config? where did you get that snippet from? your RewriteBase looks fishy by mex - Nginx Mailing List - English
MISS means the resource was not found in the cache. btw, do you see any requests getting cached / is your cache dir filling up, or do you see 100% MISS? maybe: http://forum.nginx.org/read.php?11,163400,163695 do you use cache-control headers? by mex - Nginx Mailing List - English
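one way to watch HIT vs MISS per request is to expose `$upstream_cache_status` in a response header; a sketch, assuming a proxy_cache zone named STATIC is already declared elsewhere:

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache STATIC;
    # if the upstream sends no Cache-Control/Expires headers,
    # give cacheable responses a default lifetime
    proxy_cache_valid 200 10m;
    # shows HIT, MISS, EXPIRED, BYPASS etc. per request
    add_header X-Cache-Status $upstream_cache_status;
}
```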
your nginx config seems ok (except that part that should be deleted from the global section and appear only in events { ... }). can you test your fastcgi process with ab (apache benchmark tool) or httperf until you reach max_clients, without reverse-proxying through nginx? by mex - Nginx Mailing List - English
your config is somewhat messed up, but i think this is not the issue here. are you sure your fastcgi process is able to deliver more than 520 parallel connections? http://wiki.nginx.org/EventsModule#worker_connections -> to be defined in events { }. max clients = worker_processes * worker_connections; in a reverse proxy situation, max clients becomes worker_processes * worker_connections divided by the number of connections each client occupies (see the linked wiki page) by mex - Nginx Mailing List - English
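the arithmetic above, sketched with hypothetical values (4 workers, 1024 connections each; the divisor of 4 follows the old wiki's reverse-proxy reasoning and is an assumption for illustration):

```shell
worker_processes=4
worker_connections=1024

# plain case: every connection serves one client
echo $((worker_processes * worker_connections))      # 4096

# reverse-proxy case: each client may occupy several
# connections (browser-side plus upstream-side), here 4
echo $((worker_processes * worker_connections / 4))  # 1024
```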
iirc there is something with the order/length of location {} content. is there an index.php in /home/ian/websites/reseller/htdocs? what are the dir permissions of /home/ian/websites/reseller/htdocs? what are the file permissions of /home/ian/websites/coachmaster3dev/htdocs/index.php? what are the dir permissions of /home/ian/websites/coachmaster3dev/htdocs? a 403 Forbidden is a trustab… by mex - Nginx Mailing List - English
you can adjust the (proxy) cache time in seconds by mex - Nginx Mailing List - English
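a sketch of per-status cache times, assuming a proxy cache zone (here hypothetically named `my_zone`) is already configured:

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache my_zone;
    # cache successful responses for 60 seconds
    proxy_cache_valid 200 302 60s;
    # cache 404s only briefly
    proxy_cache_valid 404 10s;
}
```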
just a try / not sure if it will work: when starting your nginx, try using a shell script that sets http_proxy / https_proxy: export http_proxy=http://server-ip:port/ ; i'm not sure if nginx has options to use a 3rd-party proxy. maybe you can use firewall rules to do simple port forwarding to your proxy P, but i'm not sure it will work (for intercepting http traffic and using squi… by mex - Nginx Mailing List - English
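a sketch of such a wrapper script (the proxy address is a placeholder); note that whether nginx itself honors these environment variables is exactly the open question in the post:

```shell
#!/bin/sh
# set the proxy environment for the process we are about to start
export http_proxy="http://proxy.example.com:3128/"
export https_proxy="http://proxy.example.com:3128/"

echo "http_proxy=$http_proxy"
# then start nginx in this environment, e.g.:
# /usr/sbin/nginx -c /etc/nginx/nginx.conf
```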
yes, you can! by mex - Nginx Mailing List - English
1. aptitude download nginx, or browse to the repo with your browser and download manually, or create a deb package via checkinstall from the nginx sources 2. transfer it to the target machine 3. dpkg -i &lt;package&gt; by mex - Nginx Mailing List - English
i'd suggest you start with low-level debugging: - go to host1 and run tcpdump port 8080 / tail -f against the access logs of that server - make a request - check what happens to that request, e.g. where it "hangs". you could also, just in case, run "tcpdump port 8080 and host host2" on your nginx, just to make sure that nginx is sending the requests to the right s… by mex - Nginx Mailing List - English
are you sure your upstream servers are not answering on the given ports? http://wiki.nginx.org/HttpUpstreamModule#server vs http://wiki.nginx.org/NginxHttpProxyModule#proxy_connect_timeout http://wiki.nginx.org/NginxHttpProxyModule#proxy_read_timeout by mex - Nginx Mailing List - English
you allow 600 seconds to pass until you notice that your upstream server is not responding. ... max_fails=2 fail_timeout=300s; why? by mex - Nginx Mailing List - English
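for comparison, a sketch of an upstream that gives up much faster than the quoted 600 seconds (hostnames and exact values are placeholders, not a recommendation from the thread):

```nginx
upstream backend {
    # after 2 failures within 10s, skip this server for 10s
    server app1.example.com:8080 max_fails=2 fail_timeout=10s;
    server app2.example.com:8080 max_fails=2 fail_timeout=10s;
}

server {
    location / {
        proxy_pass http://backend;
        # don't wait forever for a dead upstream
        proxy_connect_timeout 5s;
        proxy_read_timeout 30s;
    }
}
```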
check_nginx_status is (yet another) Nagios plugin to monitor nginx status; it alerts on various values, based on HttpStubStatus, and also creates, from the returned values, a csv to store data. docs &amp; downloads: http://doxi-news.blogspot.de/2013/06/checknginxstatus-nagios-plugin-to.html (with screenshots) https://bitbucket.org/maresystem/dogtown-nagios-plugins Usage… by mex - Nginx Mailing List - English
+1 i use a normal email client in the office that sorts nginx-ml mails (and any other ml i'm subscribed to) into folders, but when i'm abroad or in a client's office i usually use a webmail client, and skimming over my mails is much easier with a subject tag; a mail subject like "Debian Package" could also be from a customer, while a tagged "Debian Package" would kinda be ignored. or… by mex - Nginx Mailing List - English
hi, i faced a similar question for a client with a lot of files and found out, after a lot of testing and benchmarking, that it nearly doesn't matter whether we serve those files from a ram tmpfs or use the system's os cache (given we have plenty of it). so the os (linux) seems to do a good job. the servers were designed to serve static files only. YMMV regards, mex by mex - Nginx Mailing List - English
if you REJECT from iptables you tell the client immediately that the service/port is not available; otherwise you run into timeouts, yes. i'm not quite sure, but max_fails=3 x fail_timeout=30s == 90 seconds until your nginx fails over to the other server. regards, mex by mex - Nginx Mailing List - English
mex Wrote: ------------------------------------------------------- > ehlo, > > > one question: do you shutdown all your app-servers or > server-by-server, so you still have a available application? my bad, please read: do you shut down all your app servers at once or server-after-server, so you still have an available application? by mex - Nginx Mailing List - English
ehlo, one question: do you shut down all your app servers or server-by-server, so you still have an available application? there is the "down" option for your upstream block to disable servers even if they are up, but using this in a dynamic process might get very fiddly. what do you use for iptables rules? drop/reset? i'd debug your server/app ports when the iptabl… by mex - Nginx Mailing List - English
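a sketch of the mentioned "down" marker (hostnames are placeholders); nginx needs a reload to pick up the change, which is part of what makes this fiddly in a dynamic process:

```nginx
upstream app_servers {
    server app1.example.com:8080;
    # temporarily taken out of rotation, e.g. for maintenance
    server app2.example.com:8080 down;
}
```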
there's a sticky-module (3rd-party), maybe this works out of the box for you. https://code.google.com/p/nginx-sticky-module/wiki/Documentationby mex - Nginx Mailing List - English
> Ok. So the upstream block has to be in the nginx.conf. I thought I > could > this one export to a separate file, too. yes, you can (include your upstream config and any other part). you just need to place it in the right context, e.g. inside a http { ... } block and not inside a server { ... } block http://wiki.nginx.org/HttpUpstreamModule#upstream > > I was… by mex - Nginx Mailing List - English
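a sketch of splitting the upstream block into its own file and pulling it into http {} context via include (file paths and names are placeholders):

```nginx
# /etc/nginx/conf.d/upstreams.conf
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

# /etc/nginx/nginx.conf
http {
    # included in http {} context, NOT inside a server {} block
    include /etc/nginx/conf.d/upstreams.conf;

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```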
> Is this possible? yes. depending on your setup it could be worth a try to use nginx as a static server for your download files, esp. if you run your proxy_pass location on the same server. os cache can be as fast as a ram cache via tmpfs. depending on the amount of files in /download and the file sizes it can be useful to tweak your setup (buffers, sendfile etc) http://wiki.… by mex - Nginx Mailing List - English
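a sketch of serving /download statically from nginx while everything else stays proxied (the filesystem path and upstream name are placeholders):

```nginx
server {
    listen 80;

    # static download files served directly by nginx
    location /download/ {
        root /var/www;      # files live in /var/www/download/
        sendfile on;        # kernel-side file transmission
        tcp_nopush on;      # fill packets before sending
    }

    # everything else goes to the application
    location / {
        proxy_pass http://backend;
    }
}
```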
is your appserver hostname-aware? your error 500 should come from your appserver; check your logfiles on that part http://wiki.nginx.org/HttpProxyModule by mex - Nginx Mailing List - English
i made some benchmarks lately and it looks like it doesn't matter for smaller caches, since os caching is smart enough. if you really want to know, just test it yourself. regards, mex by mex - Nginx Mailing List - English
man ps | grep RSS man ps | grep VSZ when using a tool like top: real_free = free + cached you might want to try htop for a better continuous display. your ram is fine :) regards, mex by mex - Nginx Mailing List - English
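a sketch of reading RSS vs VSZ for a single process (here the current shell, `$$`), following the man-page pointers above:

```shell
# RSS = resident set size (RAM actually in use),
# VSZ = virtual size (all mapped memory); both in KiB,
# trailing '=' suppresses the column headers
ps -o rss=,vsz= -p $$

# top-style "real free" memory is then: free + cached
```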
can you make sure that anyway.com is reachable? sounds like the path to your front lb is somehow not working. i think i don't need to ask for an nginx restart after config changes? if your upstream config is messy or your upstream servers are unreachable you should usually see a: 502 Bad Gateway regards, mex by mex - Nginx Mailing List - English
you might want to play with your /etc/hosts. by mex - Nginx Mailing List - English
are you able to fetch the given resource from your nginx server? e.g. wget https://127.0.0.1:4443/socket.io/socket.io.v0.9.11.js maybe you have a port issue (4443 vs 443) mex by mex - Nginx Mailing List - English
one quote from that post i can confirm: > nobody has any idea how SSL performance works esp. when it comes to CIPHER1 vs CIPHER2, compared in terms of speed and security. what i can suggest, to test whether your ssl implementation is still secure from a cipher pov, is https://www.ssllabs.com/ssltest/ Grant Wrote: ------------------------------------------------------- > … by mex - Nginx Mailing List - English