By the way, it's getting worse with time... Total PHP requests with status 200 since today 6:00 AM: 3708. Of those, 2292 received a RST from the nginx load balancer, so more than half of them are getting caught by this issue... Will try to get the original logs to see if there's some light there... Thank you! Guzman On Thu, Aug 9, 2012 at 4:07 PM, Guzmán Brasó <guzma… by valor - Nginx Mailing List - English
Hi Maxim, once again thank you... Exactly what I thought... but something doesn't make sense: the owner of the site put some paid traffic on it, and now the numbers are bigger than before; it can't be that hundreds of people abort the connection exactly and precisely at the same byte. It's always the same byte, and it's always random, though I've never been able to reproduce it myself. Just snif… by valor - Nginx Mailing List - English
Hi Maxim! Thanks for taking the time to check it out... So for the 499 seen by the php-fpm nginx here, it's not that the main nginx closed the connection but that fastcgi closed the connection? All this time I thought it had nothing to do with the backend... there's no PHP warning or error on the php-fpm side when this happens; I will try to enable debug mode in php-fpm and swim around the logs... Thanks! … by valor - Nginx Mailing List - English
Hi, Chroot is pretty straightforward in PHP: php takes care of making everything transparent and available inside the chroot without having to place a file for everything, as you have to with a usual chroot. Still, some things are not straightforward; you should use the ldd command to see all the required libraries for a given binary that PHP does not automagically make available inside the chroot. In my case the w… by valor - Php-fpm Mailing List - English
Hi... Just made a dump from the backend perspective and analyzed it with wireshark to see what was in those bytes; I was pretty sure those bytes were some error from the app with the key to fix it... but no, those 15776 bytes are simply the first 15776 bytes of every request, so every request is different. According to tcpdump everything flows OK, and after one common ack from upstre… by valor - Nginx Mailing List - English
Hi list! I have a weird issue I've been banging my head against without being able to understand what's going on. Setup is as follows: - 1 nginx 0.7.67 doing load balancing to two backends. - backend 1 with nginx 1.2.2 and php-fpm 5.3 - backend 2 with nginx 0.7.67 and php-fpm 5.3. Some, and only some, requests log in the upstream a status 200 and 0 bytes returned. The same request in the backend log shows a 200 status… by valor - Nginx Mailing List - English
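The load-balancing setup described in that post can be sketched roughly as follows; the upstream name and backend addresses are assumptions for illustration, only the nginx/php-fpm versions come from the post:

```nginx
# Frontend nginx 0.7.67 balancing to the two backends from the post;
# the addresses and the upstream name are hypothetical.
upstream php_backends {
    server 192.168.0.11;   # backend 1: nginx 1.2.2 + php-fpm 5.3
    server 192.168.0.12;   # backend 2: nginx 0.7.67 + php-fpm 5.3
}

server {
    listen 80;

    location / {
        proxy_pass http://php_backends;
    }
}
```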
Hello there, As limit_req returns 503, the same as other native 503 (Service Unavailable) errors, I need a way to differentiate them in the access log. One easy way would be to be able to set the status limit_req returns; would that be possible? The main reason I'm asking is that we analyze our logs in real time to alert on certain values; until limit_req, I used to send alerts if more tha… by valor - Nginx Mailing List - English
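For reference, later nginx versions (1.3.15+) added the `limit_req_status` directive, which addresses exactly this request. A minimal sketch, with the zone name and rate chosen as example values:

```nginx
http {
    # one shared zone keyed by client address; 10m/10r/s are example values
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            limit_req zone=perip burst=20;
            # return 429 instead of the default 503, so rate-limit hits
            # are distinguishable from real "service unavailable" errors
            limit_req_status 429;
        }
    }
}
```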
Hi... I saw the picture but don't see any problem for nginx there; your problem in that picture seems to be that you want your backend to access the crossdomain server but your backend does not have internet access. If that's the case, and you don't need to support thousands of cross domains, you can configure nginx as a reverse proxy of the crossdomain server and use it to access the internet from your back… by valor - Nginx Mailing List - English
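The suggestion above might look like this as a config fragment; the listen port, server name, and upstream host are all hypothetical, not from the original post:

```nginx
# Hypothetical sketch: expose an external "crossdomain" server through
# a frontend nginx so backends without internet access can reach it.
server {
    listen 8081;
    server_name crossdomain-proxy.internal;

    location / {
        proxy_pass https://crossdomain.example.com;
        proxy_set_header Host crossdomain.example.com;
    }
}
```

Backends would then point their crossdomain requests at `crossdomain-proxy.internal:8081` instead of the public internet.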
That's just great, thank you! Then I can load the db into each worker and reload the db as I wish/need. Thank you (again)! Guzman _______________________________________________ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx … by valor - Nginx Mailing List - English
Hello Alexander, I have one simple question: how is performance affected when opening a small text file (a small database)? I know that if the file is small, Linux itself will cache it in memory, but I'm worried about how each URL request will be affected by opening the file, reading the whole file again (even from memory), then querying, then closing the handle, etc. All in vain because this file won… by valor - Nginx Mailing List - English
Hi Everyone, I just spent a lot of time fighting a mysterious problem with the cache on one of our nginx servers. The problem was found, and I thought it would be nice to share what happened so no one else falls into the same hole I did. We have a file with all the default proxy configuration, which we include in all our proxy_cache setups; then we only use the proxy directives we need to… by valor - Nginx Mailing List - English
Also, it may be your application & not nginx giving the 302 when users request /welcome/. At least, that /welcome/ comes with some application headers like X-Pingback & X-Powered-By. I would suspect your nginx is working as expected and it's your application that is not working correctly when /welcome/ is requested. As was said, send your config if you need help. On Tue, Mar 1, 2011 at 6:18 PM, G… by valor - Nginx Mailing List - English
Just checked... From here I see a 302 redirect from the home to /welcome/, together with the setting of three cookies: "PHP_SESSION", "bp-message" & "bp-message-type". Then /welcome/ gives a 302 back to the home, together with a set-cookie of "bp-message" & "bp-message-type". And again and again and again... On Tue, Mar 1, 2011 at 6:01 P… by valor - Nginx Mailing List - English
Hi there, We are currently using WordPress (many of them) running on Apache with nginx in front of it using proxy_cache. We no longer manage the WordPress installs and our customers do with them as they wish; some have cache enabled, some not. We wanted to use that module, but as our customers have many different versions of WordPress, it didn't fit for us. Right now we cache every static… by valor - Nginx Mailing List - English
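A static-assets-only cache in front of Apache, as described above, could look roughly like this; the cache zone name, path, backend address, and timings are assumptions, not from the post:

```nginx
# Hypothetical sketch: cache only static assets, pass dynamic pages
# straight through, so it works regardless of the WordPress version.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:50m inactive=7d;

server {
    listen 80;

    # dynamic pages go straight to Apache, uncached
    location / {
        proxy_pass http://127.0.0.1:8080;
    }

    # static assets are cached in front of Apache
    location ~* \.(css|js|png|jpe?g|gif|ico)$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache static;
        proxy_cache_valid 200 301 302 1h;
    }
}
```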
Hello Jeff... On Fri, Jan 28, 2011 at 3:08 AM, Jeff Mitchell <jeff@jefferai.org> wrote: > Hello, > > We have a particular application (gitweb) that performs a particular, > extraordinarily slow function when the home page is loaded. As the > number of repositories has increased, this has grown to take several > *minutes* per page view (yes, ugh). > > To combat this, … by valor - Nginx Mailing List - English
Great idea! Good luck with it! On Fri, Jan 21, 2011 at 8:56 AM, Valery Kholodkov <valery+nginxen@grid.net.ru> wrote: > > Greetings! > > Nginx is growing, people are becoming more curious about its internals. > > During the last 4 years I've been on a fantastic journey into programming > for Nginx, studying how things are working i… by valor - Nginx Mailing List - English
Hi! May I ask which type of instance you were using in EC2? Thanks, Guzman On Sat, Dec 11, 2010 at 2:53 PM, Chetan Sarva <csarva@gmail.com> wrote: > On Sat, Dec 11, 2010 at 9:12 AM, Dennis Jacobfeuerborn > <dennisml@conversis.de> wrote: > >> I don't have any direct experience with services like EC2 but remember that >> before the traffic hits your machine it… by valor - Nginx Mailing List - English
Hi! Where did you see the example? Can you post the link? I'm not using that option, but I'm hosting thousands of WordPress sites behind nginx, so I would like to check it out. Cheers, Guzman On Wed, Dec 1, 2010 at 1:12 AM, Ian M. Evans <ianevans@digitalhit.com> wrote: > I saw this example for setting up nginx to run wordpress and wp-super cache > > server_name _wordpress-cache mydomain.com; … by valor - Nginx Mailing List - English
Hi, One idea... With nginx proxy_cache you can use "proxy_cache_use_stale" to tell it to return the last known copy of a cached file on given backend errors. If you do that for 404 errors on css_*.css & js_*.js, it means cached users will no longer be redirected to a nonexistent css; instead they will load the css the page was using when it was cached. Read more here: ht… by valor - Nginx Mailing List - English
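The proxy_cache_use_stale idea above could be sketched like this; the location pattern, upstream, and zone name are illustrative assumptions, while `http_404` is a documented value for the directive:

```nginx
# Hypothetical sketch: keep serving the last good cached copy of the
# versioned css/js files when the backend starts returning errors,
# including 404, instead of sending users to a nonexistent asset.
location ~* /(css_.*\.css|js_.*\.js)$ {
    proxy_pass http://backend;
    proxy_cache assets;
    proxy_cache_use_stale error timeout http_404 http_500 http_502;
}
```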
Sweet! Every day I love nginx more! Do you know if it adds too much load to check every returned html content? If this module ever allows regular expressions (right now it says it accepts only variables) and allows redirecting the user to another page on a match, it would be a nice replacement for the apache2 mod_security body parse feature. Thanks for the info; as I said, I'm still a newbie with… by valor - Nginx Mailing List - English
Hi! I'm not an nginx developer and I've not even seen the source yet; however, I have nginx running as a cache and I can answer one of your questions from experience. As long as you have free memory and your OS (mine is Debian too) correctly uses that memory for caching, you won't see a single IO read of the cache files by nginx. You will see the first IO read of the cached files whenever you have no… by valor - Nginx Mailing List - English
Hi, As far as I know, body content rewriting is not possible with nginx; I may be wrong, I'm a newbie with nginx. What nginx supports is URL rewriting through the HttpRewrite module, and this has nothing to do with rewriting the returned body content. URL rewriting works when nginx receives a request: it parses the requested URL and rewrites it as appropriate according to your rules. At this point the… by valor - Nginx Mailing List - English
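The distinction drawn above can be illustrated with the rewrite module: the incoming URL is changed before the request is processed, while the response body itself is untouched. The paths here are hypothetical examples:

```nginx
# Minimal sketch of URL rewriting (ngx_http_rewrite_module); paths
# are illustrative, not from the original post.
server {
    listen 80;

    # /old-blog/post-name -> /blog/post-name, rewritten internally;
    # the client never sees the new URL
    rewrite ^/old-blog/(.*)$ /blog/$1 last;

    # visitors of /legacy get an external 301 to the new location
    rewrite ^/legacy$ /current permanent;
}
```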
Hello Everyone, First, pardon me if this has already been discussed on the Russian mailing list. Second, thank you Igor for this wonderful software; its speed and stability are amazing. After googling and searching around, I've not found a single page with information on taking advantage of nginx cache logging, or tools to parse this data. So I decided to create this topic to first check wha… by valor - Nginx Mailing List - English