I'm sending 403 responses now, so I screwed up by mistaking the fields in the logs. I'm going back into lurking mode with my tail shamefully between my legs. This code in the image location section blocks the Google app:

------------
if ($http_user_agent ~* (com.google.GoogleMobile)) { return 403; }
------------

403 107.2.5.162 - - [21/Jun/2017:07:21:08 +0000]

by gariac - Nginx Mailing List - English
Actually, I think I was mistaken and the field is the user agent. I will change the variable and see what happens. I did some experiments to show the pattern match works.

On Tue, 20 Jun 2017 20:56:46 -0700, lists@lazygranch.com wrote:
> I want to block by referrer. I provided a more "normal" record so
> that the user agent and referrer location were obvious by context.
I want to block by referrer. I provided a more "normal" record so that the user agent and referrer location were obvious by context. My problem is that I'm not creating the match expression correctly. I've tried spaces and parens. I haven't tried quotes.

Original Message From: Robert Paprocki, Sent: Tuesday, June 20, 2017 6:47 PM, To: nginx@nginx.org
I think the iPad is the user agent. I wiped out that access.log, but here is a fresh one showing a browser (user agent) in the proper field:

200 76.20.227.211 - - [21/Jun/2017:00:48:45 +0000] "GET /images/photo.jpg HTTP/1.1" 91223 "http://www.mydomain.com/page.html" "Mozilla/5.0 (Linux; Android 6.0.1; SM-T350 Build/MMB29M) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.
I would like to block the Google app from directly downloading images. access.log:

200 186.155.157.9 - - [20/Jun/2017:00:35:47 +0000] "GET /images/photo.jpg HTTP/1.1" 334052 "-" "com.google.GoogleMobile/28.0.0 iPad/9.3.5 hw/iPad2_5" "-"

My nginx code in the images location:

if ($http_referer ~* (com.google.GoogleMobile)) { return 403; }

So what am I doing wrong?
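(A note, given where this thread ends up: the app string appears in the user agent field, not the referrer, so a corrected sketch of the images location might look like this. The dot-escaping is my addition; unescaped dots also match, just more loosely.)

```nginx
location /images/ {
    # The Google app identifies itself in the User-Agent header,
    # so match $http_user_agent rather than $http_referer.
    # Dots are escaped to match literally.
    if ($http_user_agent ~* "com\.google\.GoogleMobile") {
        return 403;
    }
}
```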
If the secret page is on a different subdomain, could it be restricted to one IP?

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
I suppose I'm stating the obvious, but if you are going to implement blocking schemes with either simple map matches or a full-blown WAF like Naxsi, you will need a test suite. For a very simple website, you can just crawl it with wget and see what you broke. But if you have forms, databases, etc., you will probably have to resort to Selenium. And that just checks whether you broke something, not if
Here is the map. I truncated my bad agent list, but this will get you started. I used the user agent changer in Chromium to make sure it worked.

---------------------------------------------------------
map $http_user_agent $badagent {
    default     0;
    ~*WordPress 1;
    ~*kscan     1;
    ~*ache      1;
}
---------------------------------------------------------
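The map only sets a variable; it still has to be tested somewhere. A minimal sketch of how it might be wired up, assuming the usual http/server layout (the `return 444` follows the preference stated elsewhere in this thread):

```nginx
http {
    # 1 = bad agent, 0 = everyone else.
    map $http_user_agent $badagent {
        default     0;
        ~*WordPress 1;
        ~*kscan     1;
        ~*ache      1;
    }

    server {
        listen 80;
        # Close the connection without sending any response.
        if ($badagent) {
            return 444;
        }
    }
}
```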
I had run Naxsi with Doxi. The trouble is that when it caused problems, it was really hard to figure out which rule was the problem. I suppose if you knew what each rule did, Naxsi would be fine. That said, my websites are so unsophisticated that it is far easier for me just to use maps. Case in point: when all this Apache Struts hacking started, I noticed lots of 404s with the word "action" i
Reading a blog from the person who set up the website for Emmanuel Macron, I came across this nginx tip. I would return 444 and add it to my user agent map. But in the simplest form:

---------
# Block WordPress pingback DDoS attacks
if ($http_user_agent ~* "WordPress") {
    return 403;
}
---------

The conf file: https://github.com/EnMarche/en-march
A bit OT, but can a guru verify that I rejected all these proxy attempts? I'm 99.9% sure, but I'd hate to allow some spammer or worse to route through my server. The only edit I made is where they ran my IP address through a forum spam checker. (I assume Google indexes Pastebin.) https://pastebin.com/VCg28AZf Pastebin made me captcha because they thought I was a spammer. ;-)
You would probably also want to limit the number of connections per IP address, or else one IP could lock up the entire site.

Original Message From: Valentin V. Bartenev, Sent: Tuesday, April 4, 2017 1:58 PM, To: nginx@nginx.org, Subject: Re: Limit number of connections to server

On Tuesday 04 April 2017 17:22:58, Kamil Gorlo wrote:
> Hi,
> is there a way to
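A hedged sketch of that per-IP connection cap using nginx's limit_conn module (the zone name and the limit of 10 are illustrative choices, not from the post):

```nginx
http {
    # Shared-memory zone keyed by the binary client address.
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        # At most 10 concurrent connections per client IP;
        # excess connections are refused with 503 by default.
        limit_conn peraddr 10;
    }
}
```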
FYI, the benchmark mentioned in the video: https://github.com/wg/wrk Wouldn't a number of test machines on the Internet make more sense than flogging nginx locally on your network? With VPS time being sold by the hour, it seems to me you should get one VPS tester running acceptably, then clone a dozen and do your test. With SSD-based VPS, you can literally clone one a minute.
Are you trying to block Baiduspider from your html email? I think you should review the commented-out lines. Very old school, but you may want to just print your conf file and line up the curly braces. Perhaps copy the conf file, delete the commented lines, and then see if it makes sense. It looks to me like the conf file can't be parsed due to mismatches.

Original Message From: xstation, Sent:
Take a look at this: http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html Personally, I would use the map feature, since eventually there will be other user agents to block. I use three maps. I block based on requests, referrals, and user agents. The user agent is kind of obvious. Unwanted referrals are a personal thing. I find some websites linking to me that are pure
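A sketch of that three-map setup; the patterns here are illustrative stand-ins, since the post doesn't show its actual lists:

```nginx
# Three decision variables: request URI, referrer, user agent.
map $request_uri $bad_request {
    default        0;
    ~*\.php        1;  # example: PHP probes against a static-only site
}
map $http_referer $bad_referer {
    default        0;
    ~*semalt\.com  1;  # example spam referrer
}
map $http_user_agent $bad_agent {
    default        0;
    ~*WordPress    1;
}

server {
    # Any of the three maps can trigger the drop.
    if ($bad_request) { return 444; }
    if ($bad_referer) { return 444; }
    if ($bad_agent)   { return 444; }
}
```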
Here is my philosophy. A packet arrives at your server. This can be broken down into two parts: who are you, and what do you want. The firewall does a fine job of stopping the hacker at the "who are you" point. When the packet reaches nginx, the "what do you want" part comes into play. Most likely nginx will reject it. But all software has bugs, and thus there will be zero days. Thus I would rather stop t
This is an interesting bit of code. However, if you are being DDoS-ed, this just stops nginx from replying. It isn't as if nginx is isolated from the attack. I would still rather block the IP at the firewall and prevent nginx from doing any work. The use of $bot_agent opens up a lot of possibilities if the value can be fed to the log file.

Original Message From: shiz, Sent: Wednesd
By the time you get to the UA, nginx has done a lot of work. You could 444 based on the UA, then read that code in the log file with fail2ban or a clever script. That way you can block them at the firewall. It won't help immediately with the sequential numbers, but that really won't be a problem.

Original Message From: Grant, Sent: Wednesday, December 14, 2016 2:15 PM, To: nginx@nginx.org
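One way to make those 444s easy for fail2ban to pick up is to control the access log layout. The log excerpts in these posts lead with the status code, which would come from a custom log_format along these lines (a guess at the poster's format, not confirmed):

```nginx
# Status first, so a fail2ban failregex can anchor on "^444 ".
log_format blocklog '$status $remote_addr - $remote_user [$time_local] '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log blocklog;
```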
I'm no fail2ban guru, trust me. I'd suggest going on Server Fault. But my other post indicates SEMrush resides on AWS, so just block AWS. I doubt there is any harm in blocking AWS, since no major search engine uses them. Regarding search engines, the reality is that only Google matters. Just look at your logs. That said, I allow Google, Yahoo, and Bing. But Yahoo/Bing isn't even 5% of Google traffic.
They claim to obey robots.txt. They also claim to use consecutive IP addresses. https://www.semrush.com/bot/ Some dated posts (2011) indicate SEMrush uses AWS. I block all of the AWS IP space and can say I've never seen a SEMrush bot. So that might be a solution. I got the AWS IP space from an Amazon web page. I get a bit of kickback about blocking things that are not eyeballs, like co
That attack wasn't very distributed. ;-) Did you see if the IPs were from an ISP? If not, I'd ban the service using Hurricane Electric's BGP tool as a guide. At a minimum, you should be blocking the major cloud services, especially OVH. They offer free trial accounts, so of course the hackers abuse them. If the attack was from an ISP, I can visualize a fail2ban scheme blocking the last quad not b
You can block some of those bots at the firewall permanently. I use the nginx map feature in a similar manner, but I don't know if map is more efficient than your code. I started out blocking similarly to your scheme, but the map feature looks clearer to me in the conf file. Majestic and Sogou sure are annoying. For what I block, I use 444 rather than 403. (And yes, I know that destroys the ma
I keep my nginx server set up dumb. (I don't need anything fancy at the moment.) Is the request below possibly valid? I flag anything with a question mark in it as hacking, but maybe iOS makes some requests that some websites will process and others would just ignore after the question mark.

444 72.49.13.171 - - [14/Nov/2016:06:55:52 +0000] "GET /ttr.htm?sa=X&sqi=2&ved=0ahUKEwiB7Nyj
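For a static-only site, the "question mark means hacking" policy described above can be expressed directly, since nginx sets $is_args to "?" whenever a query string is present. A minimal sketch:

```nginx
server {
    # Static site: no request should legitimately carry a query string,
    # so drop the connection when $is_args is non-empty.
    if ($is_args) {
        return 444;
    }
}
```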
Makes perfect sense!

Original Message From: Maxim Dounin, Sent: Wednesday, November 9, 2016 2:02 AM, To: nginx@nginx.org, Subject: Re: Unexpected return code

Hello! On Tue, Nov 08, 2016 at 11:27:36PM -0800, lists@lazygranch.com wrote:
> I only serve static pages, hence I have this in my conf file:
> -----------------------
> ## Only allow t
I only serve static pages, hence I have this in my conf file:

-----------------------
## Only allow these request methods ##
if ($request_method !~ ^(GET|HEAD)$ ) {
    return 444;
}
----------------

Shouldn't the return code be 444 instead of 400?

----------------------------------------
400 111.91.67.118 - - [09/Nov/2016:05:18:38 +0000] "CONNECT search.yahoo.com:443 HTTP/1.1