I'm no fail2ban guru, trust me; I'd suggest asking on Server Fault. But as my other post indicates, Semrush resides on AWS, so just block AWS. I doubt there is any harm in blocking AWS, since no major search engine crawls from there.
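That said, my understanding is you could copy one of the stock iptables actions and simply widen the banned address to the whole /24. A rough, untested sketch; the file name "iptables-subnet.conf" is my own invention:

    # /etc/fail2ban/action.d/iptables-subnet.conf (hypothetical, untested)
    # Same shape as the stock iptables action, but bans <ip>/24 so the
    # last octet is masked off.
    [Definition]
    actionstart = iptables -N f2b-<name>
                  iptables -I INPUT -j f2b-<name>
    actionstop  = iptables -D INPUT -j f2b-<name>
                  iptables -F f2b-<name>
                  iptables -X f2b-<name>
    actionban   = iptables -I f2b-<name> 1 -s <ip>/24 -j DROP
    actionunban = iptables -D f2b-<name> -s <ip>/24 -j DROP

Then point a jail at it with action = iptables-subnet[name=whatever]. iptables treats -s a.b.c.d/24 as a.b.c.0/24, which is exactly the "block the last octet" idea.

As for blocking AWS wholesale: Amazon publishes its address ranges as JSON, so you can load them into an ipset and drop them at the firewall. Another untested sketch; the set name is arbitrary:

    # Load Amazon's published IPv4 ranges into an ipset and drop them.
    ipset create aws-block hash:net
    curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
        | jq -r '.prefixes[].ip_prefix' \
        | while read -r net; do ipset add aws-block "$net"; done
    iptables -I INPUT -m set --match-set aws-block src -j DROP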
Regarding search engines, the reality is only Google matters. Just look at your logs. That said, I allow Google, Yahoo, and Bing, though Yahoo/Bing together aren't even 5% of my Google traffic. Everything else I block. Majestic (MJ12bot) is just ridiculous. I do allow the anti-virus companies to poke around, though I can't figure out what exactly their probes accomplish. Often Intel/McAfee just pings the server, perhaps to survey the hosting software and version. Good advertising for nginx!
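For the curious, most of my blocking is just a User-Agent map in nginx. The bot list here is only a sample, and User-Agent strings can be spoofed, so treat it as a courtesy filter, not a defense:

    # http {} context: flag unwanted crawlers by User-Agent (sample list).
    map $http_user_agent $blocked_bot {
        default          0;
        ~*mj12bot        1;    # Majestic
        ~*semrushbot     1;    # Semrush
    }

    # server {} context: turn flagged bots away.
    if ($blocked_bot) {
        return 403;
    }

Even a 403 costs nginx some work per request, though, which is another argument for doing the heavy blocking at the firewall.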
Original Message
From: Grant
Sent: Wednesday, December 14, 2016 10:30 AM
To: nginx@nginx.org
Reply To: nginx@nginx.org
Subject: Re: limit_req per subnet?
> Did you see if the IPs were from an ISP? If not, I'd ban the service, using the Hurricane Electric BGP toolkit (bgp.he.net) as a guide. At a minimum, you should be blocking the major cloud services, especially OVH. They offer free trial accounts, so of course hackers abuse them.
What sort of sites run into problems after doing that? I'm sure some
sites need to allow cloud services to access them. A startup search
engine could be run from such a service.
> If the attack was from an ISP, I can visualize a fail2ban scheme that masks off the last octet, i.e. blocks xxx.xxx.xxx.0/24, not being too hard to implement. Or maybe just let a typical fail2ban setup do your limiting and don't get fancy about the IP range.
>
> I try "traffic management" at the firewall first. As I discovered with "deny" in nginx, much CPU work is still done prior to ignoring the request. (I don't recall the details exactly, but there is a thread I started on the topic in this list.) Better to block via the firewall since you will be running one anyway.
It sounds like limit_req in nginx does not have any way to do this.
How would you accomplish this in fail2ban?
- Grant
> I recently suffered DoS from a series of 10 sequential IP addresses.
> limit_req would have dealt with the problem if a single IP address had
> been used. Can it be made to work in a situation like this where a
> series of sequential IP addresses are in play? Maybe per subnet?
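For what it's worth, limit_req_zone accepts an arbitrary key, so the zone can be keyed on the /24 rather than the individual address. A rough sketch, untested; the zone name and rates are placeholders, and it only handles IPv4:

    # http {} context: derive the client's /24 from $remote_addr.
    map $remote_addr $limit_key {
        default                      $binary_remote_addr;  # fallback: per-IP
        "~^(?P<net>\d+\.\d+\.\d+)\." $net;                 # 10.1.2.x -> "10.1.2"
    }

    limit_req_zone $limit_key zone=per_subnet:10m rate=10r/s;

    server {
        location / {
            limit_req zone=per_subnet burst=20 nodelay;
        }
    }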
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx