Out of curiosity, why would it keep the connection in TIME_WAIT if it is closing it? On Wednesday, January 25, 2012 at 5:14 PM, ggrensteiner wrote: > Have you tried using HTTP 1.1 keepalive connections from nginx to > Apache? They became available in 1.1.4 and will re-use sockets rather > than closing them and leaving them in TIME_WAIT. > > Be sure to remember to turn on keepalive. by ressaid - Nginx Mailing List - English
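For reference, the upstream keepalive setup described in the quoted post looks roughly like this; the backend address and connection count below are placeholders, not from the thread:

```nginx
upstream apache_backend {
    server 127.0.0.1:8080;
    # Keep up to 16 idle connections to Apache open for re-use,
    # so finished requests don't leave sockets in TIME_WAIT
    keepalive 16;
}

server {
    location / {
        proxy_pass http://apache_backend;
        # Upstream keepalive requires HTTP/1.1 and an empty
        # Connection header (both available since nginx 1.1.4)
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```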
Thanks for the note and the clever workaround. We were able to tweak this to work, but it still left a lot of the other functions we were using, such as deny/allow, limit_conn, etc., not working. Instead we went back to Amazon, and it turns out they were able to correct the behavior of their load balancer. I wanted to report back on the performance of putting nginx behind an ELB. We compare…
As a quick update, it looks like this has happened before. The load balancer bounces off of several internal IPs sometimes, and nginx picks up only the last one. Does anyone know of a workaround to remove the last two trusted IPs from the X-Forwarded-For header? http://forum.nginx.org/read.php?11,26102,214069 Rami On Fri, Nov 25, 2011 at 4:54 PM, Rami Essaid <rami.essaid@gmail.com> wrote: …
Looking at the $proxy_add_x_forwarded_for variable, I believe the load balancer is in fact passing the values, but nginx is taking the wrong one. Here is what I get from the variables: $proxy_add_x_forwarded_for: 217.27.244.18, 10.160.43.200, 10.160.43.200 $remote_addr: 10.160.43.200 Does this mean that nginx is taking the last value in X-Forwarded-For? To answer your other question…
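A possible workaround for nginx taking only the last X-Forwarded-For entry is the realip module's recursive mode, which was added in nginx releases newer than this thread; a sketch, with the trusted range guessed from the internal addresses quoted above:

```nginx
# Trust the load balancer's internal address range (adjust to match
# the actual ELB subnet; 10.160.43.0/24 is inferred from the post)
set_real_ip_from 10.160.43.0/24;
real_ip_header X-Forwarded-For;
# Walk the X-Forwarded-For chain from right to left, skipping every
# trusted address instead of taking only the last entry
real_ip_recursive on;
```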
Thanks for the help, Maxim! We disabled our limit_req and that seems to have fixed the problem. Looking at the logs, it seems that only 1/3 of the requests are correctly getting the new IP assigned via the realip module; the remainder are still logging the load balancer IP. This is probably more of an issue with the Amazon load balancer, but do you have any idea what may be going on? Also, wh…
Hi Maxim, We implemented the module and still had some trouble. A lot of the connections would return "503 Service Temporarily Unavailable". Our configuration works fine without the load balancer, but gives these 503 errors behind it. Looking into the error logs, I notice a lot of these errors both with and without the load balancer: "connect() failed (111: Connection refused)…"
Thanks Maxim, this looks like exactly what we need. In your experience, does this resolve most issues behind a load balancer? On Mon, Nov 21, 2011 at 7:38 AM, Maxim Dounin <mdounin@mdounin.ru> wrote: > Hello! > > On Mon, Nov 21, 2011 at 07:25:39AM -0500, Rami Essaid wrote: > > > Hi Guys, > > > > This weekend for scalability we tried putting our nginx server…
Hi Guys, This weekend, for scalability, we tried putting our nginx servers behind Amazon's Elastic Load Balancers and came across a road block: the ELB does not transparently pass the user's IP and header information to nginx. This caused issues with several pieces of nginx we use, including the IP allow/deny rules, the limit_req module, and the limit_conn module. Has anyone successfully put nginx behind…
Happy birthday, Igor. On Wed, Sep 28, 2011 at 8:26 AM, Pascal Nitsche <pascal.nitsche@fansubcode.org> wrote: > Happy Birthday Igor from Mülheim an der Ruhr (Germany). > > Am 28.09.2011 13:35, schrieb Igor Sysoev: > > On Wed, Sep 28, 2011 at 12:20:21PM +0200, Antoine Bonavita wrote: >> >>> Given the amount of times Sep 28th 1970 is mentioned in the nginx cod…
Thanks guys for the suggestions. @calin - I have tried doing that, but it messes with the verification page and I am not sure how to implement it correctly. @Magicbear - I have put that code in there before the rewrite, but then after the rewrite, when I try to access that cookie, it is not there. I should have been a little clearer about what I have tried and my config. Here is the flow of the rewri…
I don't know why I am struggling with this so much, but I need a bit of help. I am trying to redirect users to a verification page and then back to their originally requested URI, but because of the rewrite I lose both the $uri and $request_uri variables. I figure the easiest way to solve this problem is to store the original URI in a cookie and read it back after the verification page. I can't…
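One way to sketch the cookie idea (the directives are real, but the cookie name and page URIs below are hypothetical): set the cookie on the redirect response itself, since $request_uri still holds the pre-rewrite URI at that point, then read it back afterwards via the $cookie_* variables:

```nginx
location / {
    # Remember where the visitor was headed; $request_uri is the
    # original request URI, unaffected by any rewrite
    add_header Set-Cookie "orig_uri=$request_uri; Path=/";
    return 302 /verify;
}

location = /verification-done {
    # After passing verification, send the visitor back to the
    # URI stored in the cookie set above
    return 302 $cookie_orig_uri;
}
```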
Looks great, thanks! Rami On Wed, Aug 24, 2011 at 11:10 PM, agentzh <agentzh@gmail.com> wrote: > On Thu, Aug 25, 2011 at 10:44 AM, Rami Essaid <rami.essaid@gmail.com> > wrote: > > Hi Guys, > > > > I am trying to use the HttpAdditionModule module but instead of inserting > > content before or after the body, I want to put content within the body. > …
Hi Guys, I am trying to use the HttpAdditionModule, but instead of inserting content before or after the body, I want to put content within the body. Does anyone have any ideas on whether there is a way to modify the module to accomplish this? -- Cheers, Rami
I got another, similar threat from another Russian whose name was *Igor* Sysoev. He wanted $100,000,000,000!!! Just kidding! In all seriousness, though, there are cloud services to help mitigate DDoS attacks. I have friends who work for this one: http://www.ultradns.com/Services/SiteProtect which starts at $500 a month. I'd be happy to get you some information, but I think there are others th…
What does this @ thing mean? @location is a named location. Named locations preserve $uri as it was before entering the location. They were introduced in 0.6.6 and can be reached only via error_page <http://wiki.nginx.org/NginxHttpCoreModule#error_page>, post_action <http://wiki.nginx.org/NginxHttpCoreModule#post_action> (since 0.6.26), and try_files <http://wiki.nginx.org/NginxHttpCor…
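A minimal illustration of reaching a named location via try_files; the backend address is a placeholder:

```nginx
location / {
    # Serve the file if it exists; otherwise fall through to the
    # named location, where $uri is still the original request URI
    try_files $uri $uri/ @fallback;
}

location @fallback {
    proxy_pass http://127.0.0.1:8080;
}
```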
I don't believe you can use limit_req within an if statement. You might have to reroute the if to a different location and then apply the rule there. Example: if ($http_user_agent ~* (?:bot|spider)) { error_page 403 = @bots; return 403; } location @bots { limit_req zone=antiddosspider burst=1; } GL. Rami On Thu, Aug 11, 2011 at 11:56 AM, rastrano…
Hi Max, In my opinion, you don't want to rely on nginx to do the analytics simply to log suspicious activity; instead, look at a better log-analyzing solution. Have you checked out Splunk? It is a very powerful log analyzer that will let you parse the logs more intelligently, and it has a free license. Rami On Tue, Aug 9, 2011 at 5:17 PM, Maxime Ducharme <max@techboom.com> wrote…
I am so sorry for the false alarm. When testing nginx, I reload the config using the nginx -s reload command. Because I am using a virtual environment, my IP address had shifted and the config's IP mapping was no longer accurate, but the reload command did not notify me of the conflict. After killing nginx and starting it again, it gave me the proper error: nginx: bind() to 10.2.10.236:80 failed…
Hi Guys, Has anyone experienced any issues with the map module using the new 1.1.0 build of nginx? I am mapping user agents, and under 1.0.5 everything worked great; with the new version, the exact same config doesn't seem to work. I've tried two different maps based on previous comments and neither seems to work. Anyone else? http { ..... map $http_user_agent $searchengine { default 0;…
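For comparison, a complete map block of the kind described would look something like this; the user-agent patterns and the action taken are examples, not the poster's actual config:

```nginx
http {
    # 1 if the client looks like a search engine crawler, else 0
    map $http_user_agent $searchengine {
        default                         0;
        ~*(googlebot|bingbot|yandex)    1;
    }

    server {
        listen 80;
        location / {
            # Example use of the mapped value
            if ($searchengine) {
                return 403;
            }
        }
    }
}
```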
I am sure there is a better way, but here is a suggestion. Use the rate limit to identify when there is more than one request per minute and redirect that to the cache; if it doesn't hit that limit, then you can use proxy_cache_bypass. Not sure if this would work; hope someone can confirm, but it's a thought. limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m; location / { li…
Reading that article, it says: "So… nginx is a good web server, use it!" Their conclusion was that nginx handles that type of attack very well, and you would need a DDoS attack (and a large one at that) to bring down a single nginx server. Are there other examples of attacks that you have found nginx to be susceptible to? I have not heard of any specific vulnerabilities of nginx th…
Thanks for the advice. I guess the only way is to try it and keep an eye on the performance. On Fri, Jul 8, 2011 at 9:31 AM, csg <nginx-forum@nginx.us> wrote: > It just depends what you consider a large configuration. For example, we > have a bunch of nginx instances running, each with roughly 200,000 lines > of configuration, although taking just the raw number of lines as such …
Alexandr, As of right now we are only using the configuration for one of our customers in beta, and nginx is not giving any errors. Before we migrate more customers onto the platform, we wanted to get a better idea of the hypothetical ramifications of having such a large config file. What are the best practices for config files, and what could go wrong if we have a large number of hosts configured…
Hi Guys, I am sorry to email this out, but I put it on the forum and got nothing back. I'm hoping someone may be able to give me a little advice. We are trying to use nginx as a reverse proxy for a large number of sites, putting up a lot of rules to direct traffic for each host. Is there a number of hosts ({server} blocks) that would be too many for a single nginx instance to support? At wh…
We are currently using nginx as a proxy for a large number of sites, putting up a lot of rules to direct traffic for each host. Is there a number of hosts ({server} blocks) that would be too many for a single nginx instance to support? At what point does the configuration file get too large and start bogging down the nginx server? Currently each host takes up about 150 lines, and we are looking… by ressaid - Other discussion
Hi All, I am trying to restrict frequent requests, but not block them completely. I've gotten HttpLimitReqModule to work, but it just blocks the requests that exceed the limit. Instead, I would like to be able to log them into a table and then forward all such offenders to a recaptcha test (https://github.com/yaoweibin/nginx_http_recaptcha_module, author: Weibin Yao). My C skills are not what they u…
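Without writing any C, a rough approximation of the forwarding idea is possible in plain config: limit_req rejects excess requests with a 503 by default, and error_page can route that 503 to a captcha page instead of a bare error. The zone name, rates, and captcha location below are hypothetical:

```nginx
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20;
        # Requests over the limit get a 503; send those to the
        # captcha location instead of a bare error page
        error_page 503 = @captcha;
    }

    location @captcha {
        # Serve a challenge page here, e.g. from the recaptcha
        # module linked above, or a static page as a stand-in
        root /var/www/captcha;
        rewrite ^ /challenge.html break;
    }
}
```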
Hi Guys, Just starting out with nginx, so I apologize if this is too basic. I am trying to configure nginx to route traffic coming in to my server to different ports and different services based on the incoming URL. Then I want to take that and, based on the user agent, append low/med/high to the requested URL. I thought I had it, but I keep getting errors when I put the code in the config fil… by ressaid - How to...
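The routing described can be sketched like this; the ports, URL prefixes, and user-agent patterns are all made up for illustration, and both blocks belong inside the http context:

```nginx
# Pick a quality suffix from the user agent
map $http_user_agent $quality {
    default               med;
    ~*mobile              low;
    ~*(chrome|firefox)    high;
}

server {
    listen 80;

    # Route by URL prefix to different backend services,
    # appending the quality chosen above to the requested URL
    location /svc1/ {
        rewrite ^/svc1(/.*)$ $1/$quality break;
        proxy_pass http://127.0.0.1:8081;
    }

    location /svc2/ {
        rewrite ^/svc2(/.*)$ $1/$quality break;
        proxy_pass http://127.0.0.1:8082;
    }
}
```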