
unsure how to block false GET Requests

Posted by XxUnkn0wnxX, 12-15-2014, 02:43 PM
I seem to be getting a lot of false GET requests, which is overtaxing my server. I believe someone is doing this deliberately to stop people from accessing my site. In the past I was able to block such attacks with this in my .htaccess file:

SetEnvIfNoCase ^User-Agent$ .*(WordPress) HTTP_SAFE_BADBOT
Deny from env=HTTP_SAFE_BADBOT

That was back when they used WordPress to DDoS, but now I am getting this: I know these are false requests, so how would one block them?

Posted by zacharooni, 12-15-2014, 02:55 PM
It looks like those are already being blocked (403), and have blank User-Agent and Referer fields. You would just need to go through the IPs and block them at the firewall level.

For CSF:
csf -d 89.96.225.22

Standard iptables:
iptables -I INPUT -m tcp -p tcp --dport 80 -s 89.96.225.22 -j DROP
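For more than a handful of addresses, the same commands can be generated from a list. A small sketch (the IPs are placeholders; the commands are printed rather than executed so you can review them before piping the output to sh):

```shell
# Turn a list of offending IPs into CSF deny commands.
# Printed rather than executed for safety; pipe to sh to apply.
for ip in 89.96.225.22 198.51.100.7; do
    echo "csf -d $ip"
    # iptables equivalent (swap in for the echo above if not using CSF):
    # echo "iptables -I INPUT -p tcp --dport 80 -s $ip -j DROP"
done
```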

Posted by XxUnkn0wnxX, 12-15-2014, 03:13 PM
That won't help, because they tend to change the IP every time I block one. I do have a system that does this automatically, but I need to write a new regex for it. I use fail2ban and FlareWall in combination to block simple attacks like these, but I am unsure how to write a rule for this type of attack. I do have one written for the bad WordPress user agent I used to get, e.g.:

^ -.*"(GET|POST).*403 209.*"WordPress.*"$

but I am unsure how to make one for the blank "-" "-" fields. Also, I am with OVH; shouldn't they be blocking such requests?
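For the blank-field lines, a fail2ban filter might look like the following sketch (the filter filename is hypothetical; test against your own log format with fail2ban-regex before enabling):

```ini
# /etc/fail2ban/filter.d/apache-blank-ua.conf (hypothetical name)
# Matches combined-log lines whose referrer and user-agent fields are
# both "-", e.g.:
#   1.2.3.4 - - [15/Dec/2014:14:43:00 +0000] "GET / HTTP/1.1" 403 209 "-" "-"
[Definition]
failregex = ^<HOST> .*"(GET|POST) .*" \d{3} \d+ "-" "-"$
ignoreregex =
```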

Posted by ConceptHF, 12-15-2014, 03:20 PM
I suggest you look into mod_security. It can help stop a lot of different types of Layer 7 attacks.

Posted by OrcaBox, 12-15-2014, 03:24 PM
Looks like some kind of HTTP flood. I suggest blocking blank and WordPress user agents; I do so with lighttpd.

Posted by XxUnkn0wnxX, 12-15-2014, 03:31 PM
Yes, that's what I want to do: block blank user agents/requests, but the layout is different for an Apache .htaccess config.

Posted by XxUnkn0wnxX, 12-15-2014, 03:37 PM
Well, I found a fix for what I am having, but it's set to POST, not GET. I also found this:

RewriteCond %{HTTP_USER_AGENT} ^-?$
RewriteRule ^ - [F]

This will block blank user agents and those consisting of a single hyphen, which are "blank user-agent spoofers" and even worse than actually-blank ones. And this:

RewriteEngine on
RewriteCond %{HTTP_USER_AGENT} ^$ [OR]
RewriteRule ^.* - [F]

but I am unsure which one is the right one for me.
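The second snippet is truncated as quoted (its [OR] has no following condition). A minimal combined sketch for Apache mod_rewrite that covers both cases:

```apache
# Return 403 for requests whose User-Agent is empty or a lone hyphen.
# ^-?$ matches both cases, so no [OR] chain is needed.
RewriteEngine on
RewriteCond %{HTTP_USER_AGENT} ^-?$
RewriteRule ^ - [F]
```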

Posted by XxUnkn0wnxX, 12-15-2014, 03:53 PM
I tried a few of these and the site is still down; there seems to be no effect at all.

Posted by XxUnkn0wnxX, 12-15-2014, 03:59 PM
It seems like now they are using random user agents, but it's still the same GET request and they are still getting the "-" "-". I block one attack (the empty user agent) and they switch to a new attack method. I need to be able to block the "-" "-" with or without an empty user agent.

Posted by XxUnkn0wnxX, 12-15-2014, 05:02 PM
The GET request to my / home directory shows up like a normal user trying to access our front page, but instead it's a bot doing it every few seconds non-stop, with and without user agents. So how does one block this without blocking legitimate users?

Posted by hi_tech, 12-15-2014, 05:25 PM
Try this for no user agent:

SetEnvIfNoCase User-Agent ^$ HTTP_SAFE_BADBOT
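That SetEnvIf line only tags the request with an environment variable; in Apache 2.2 syntax (as used earlier in this thread) it still needs a matching Deny directive:

```apache
# Tag requests with an empty User-Agent, then deny them (Apache 2.2).
SetEnvIfNoCase User-Agent ^$ HTTP_SAFE_BADBOT
Deny from env=HTTP_SAFE_BADBOT
```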

Posted by XxUnkn0wnxX, 12-15-2014, 05:30 PM
Yes, I already have this. It's blocking blank user agents, but as soon as I blocked that, they started attacking with a different method, using random user agents. From what it looks like, it's a Layer 7 botnet: many, many computers attacking the /index.php file with GET requests, and the IPs, user agents, and HTTP codes keep randomising. OVH is supposed to block this but isn't, and there's nothing more I can do without blocking legitimate users. There's no point blocking one IP at a time if they always come back with another 100 IPs a few minutes later.

Posted by hi_tech, 12-15-2014, 06:27 PM
My bad, just re-read the thread. If you have CSF installed, try turning on Port Flood Protection and/or LF_APACHE_404 and LF_APACHE_403. The first limits connections to a single port; the second temporarily or permanently bans an IP that generates too many 404s or 403s. The port flood option needs an iptables kernel module, however, and may block legitimate users. CSF can work side by side with fail2ban; just don't enable them for the same service (like SSH).
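These settings live in /etc/csf/csf.conf. A hedged sketch with illustrative values (option names as found in recent CSF versions; verify against your own csf.conf before enabling anything):

```ini
# Illustrative csf.conf fragment -- values are examples, not recommendations.
PORTFLOOD = "80;tcp;100;5"   # block IPs opening >100 connections to port 80 within 5s
LF_APACHE_404 = "50"         # trigger after 50 Apache 404s from one IP
LF_APACHE_403 = "50"         # likewise for 403s
LF_APACHE_404_PERM = "3600"  # 1 = permanent ban, >1 = temporary ban in seconds
```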

Posted by hi_tech, 12-15-2014, 06:31 PM
Or do something like this with fail2ban only (rate-limit the GET requests): Fail2ban rate limit
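A fail2ban rate-limit jail might look roughly like this sketch (the jail and filter names are hypothetical; the 90-requests-in-10-seconds threshold matches the one mentioned later in the thread):

```ini
# jail.local -- hypothetical rate-limit jail for GET floods.
# "apache-get-flood" would be a filter matching GET lines in the domlog.
[apache-get-flood]
enabled  = true
port     = http,https
filter   = apache-get-flood
logpath  = /usr/local/apache/domlogs/yourdomain.com
maxretry = 90
findtime = 10
bantime  = 3600
```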

Posted by XxUnkn0wnxX, 12-15-2014, 06:42 PM
Yes, I have CSF installed, but I use CloudFlare as a reverse proxy, so the IPs need to be obtained from the log files or from the X-Forwarded-For HTTP header, which CSF would not recognise. The current system I have is fail2ban picking up DDoS attacks from WordPress-infected sites; fail2ban then bans the IP with CSF, and FlareWall bans the IP via the CloudFlare threat-control API, blocking the attack from reaching my server.

I was thinking of a rate limiter, maybe mod_qos, as it can be configured to use the X-Forwarded-For header, which CloudFlare provides (I also have the mod_cloudflare module installed). But I am still unsure how I would configure mod_qos or a fail2ban rate limit to pick up the client IP from the X-Forwarded-For header. I would also prefer to protect only particular files like index.php, e.g. if there are 30+ GET requests to index.php from one IP over 3-5 minutes non-stop, ban the IP via CSF and FlareWall will take care of the rest. That's what I wish to set up, but I am unsure how to configure it. Done right, it would be close to a proper Layer 7 defence via software.

Posted by OrcaBox, 12-15-2014, 06:56 PM
I am just saying Apache is bad in such situations; I recommend lighttpd or the Hiawatha web server. I also suggest you use CloudFlare and enable 'Under Attack' mode. Regards

Posted by XxUnkn0wnxX, 12-15-2014, 11:22 PM
As for the web server, the way I have everything configured, I am fairly dependent on the Apache rules and config I have grown accustomed to. I was thinking of one day upgrading to LiteSpeed, which is backwards-compatible with Apache and all its configuration and rules, especially the .htaccess directives I am heavily dependent on now.

Posted by XxUnkn0wnxX, 12-16-2014, 01:40 AM
I have rewritten the regex to best detect the URL I need, but I don't get 70+ non-stop requests from one IP. They do it smart, like this:

ip1 - 4:30:35
ip1 - 4:30:35
ip1 - 4:30:36
ip1 - 4:30:36
ip2 - 4:30:37
ip2 - 4:30:38
ip2 - 4:30:40
ip2 - 4:30:45
ip3 - 4:30:35
ip3 - 4:30:39
ip3 - 4:30:37

and so on. They do it in small bursts per IP, with the timestamps shifting by just a second, staying the same, or jumping 6-7 seconds. These IPs repeat non-stop for some time, then they change to a different range of IPs and repeat again, too spread out to be detected by a rate limiter, but enough to take the site down entirely. I set my rate limiter to 90 requests within 10 seconds from one IP, but is there no way to configure for this?

Posted by quizknows, 12-16-2014, 06:24 PM
If I'm reading your logs correctly, all of those requests have no user agent and no referring URL. This is very easy to defend against with ModSecurity, and it will alleviate almost all of the load from the attack. I generally block POST requests to "/" that have the same attributes (missing UA and referrer), since that's always malicious. This is the rule I use; just change REQUEST_METHOD from POST to GET for your case. It carries a slight risk of false positives, but most legitimate users should have -something- specified as a user agent. I use an odd status like 411, so you can put something like this in your .htaccess to stop custom 404-page rendering from creating overhead. If you have CSF with LF_MODSEC or another fail2ban-type service, you can block the IPs of repeat offenders automatically.

Last edited by quizknows; 12-16-2014 at 06:29 PM.
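The rule itself did not survive in this archive. Based on details quoted later in the thread (URI "/", empty User-Agent and Referer headers, rule id 187945987, and the 411 response status), it likely resembled the following; treat this as a hedged reconstruction, not the poster's exact rule:

```apache
# Reconstruction (not the original post's verbatim rule): deny POSTs to "/"
# that carry neither a User-Agent nor a Referer header. Change POST to GET
# for the attack described in this thread. Each rule id must be unique.
SecRule REQUEST_METHOD "^POST$" "chain,phase:2,t:none,deny,status:411,log,id:187945987"
SecRule REQUEST_URI "^/$" "chain"
SecRule &REQUEST_HEADERS:User-Agent "@eq 0" "chain"
SecRule &REQUEST_HEADERS:Referer "@eq 0"
```

In ModSecurity 2.x chained rules, the disruptive action and metadata go on the first rule of the chain, and the `&VAR` syntax counts occurrences, so `@eq 0` matches when the header is absent entirely.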

Posted by zacharooni, 12-16-2014, 06:34 PM
Try something like this to grab the IPs:

grep ' 403 202 "-" "-"' /usr/local/apache/domlogs/yourdomain.com | awk '{print $1}' | sort | uniq -c | sort -rn | head -n50

This will print the top 50 IPs attacking your site. Then you can block them in your firewall. This should help lower the noise at least, or, as @VenexCloud stated, use CloudFlare to mitigate the attack (it's free!).
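The same pipeline can feed the firewall directly. A self-contained variant, run here against sample log lines instead of a real domlog; the final awk prints the csf command for each offending IP (pipe the output to sh to actually apply the blocks):

```shell
# Demo of the pipeline on two sample IPs (89.96.225.22 appears twice,
# so it sorts first by hit count). Replace the printf with a real
# domlog path in practice.
printf '%s\n' \
  '89.96.225.22 - - [15/Dec/2014] "GET / HTTP/1.1" 403 202 "-" "-"' \
  '89.96.225.22 - - [15/Dec/2014] "GET / HTTP/1.1" 403 202 "-" "-"' \
  '198.51.100.7 - - [15/Dec/2014] "GET / HTTP/1.1" 403 202 "-" "-"' |
grep ' 403 202 "-" "-"' |
awk '{print $1}' | sort | uniq -c | sort -rn | head -n50 |
awk '{print "csf -d " $2}'
```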

Posted by XxUnkn0wnxX, 12-17-2014, 04:17 AM
Yes, I have ModSecurity, but with nothing in the rules, I believe; it was set up when I installed cPanel, which installed it for me. I use CentOS 6.5, BTW. As for the missing referrer, yes, I noticed that too. I blocked empty referrers using this in my .htaccess file:

SetEnvIfNoCase Referer "^$" bad_user
Deny from env=bad_user

but then I could not access my site via a direct link. Via Google it worked, I think, but via a direct link it didn't, because when you access a site via a direct link in a blank new tab, your referrer is empty; in my case that's how Safari and Firefox work for me.

Also, how does ModSecurity block/ban IPs? It would need to read the X-Forwarded-For HTTP header to pick up the actual IP, as CloudFlare acts as my reverse proxy, so the server only sees CloudFlare IPs/ranges, not the actual client IP. (I do have mod_cloudflare installed to properly resolve the IPs, but that is only visible to Apache and anything hosted on the server, not to the server/system itself.)

Last edited by XxUnkn0wnxX; 12-17-2014 at 04:24 AM.

Posted by quizknows, 12-17-2014, 03:12 PM
The rule I specified has two conditions: both no referrer and no user agent. This is necessary because typing a URL results in no referrer (as you noticed), but not in no user agent. The rule also applies only to the URI "/" and only to the specified method (GET or POST). ModSecurity itself normally only blocks requests, not IP addresses. You can use some advanced rule logic for IP blocking, but I use CSF to do the IP blocking. That is not going to work right if your error logs or access logs only show CloudFlare IP addresses; I believe there are Apache modules that log the IP address of the end user rather than the proxy, such as for use with load balancers. Dropping bogus GET requests with ModSecurity, even if you don't block the IPs, is extremely efficient. I've seen servers drop upwards of 100 requests every few seconds and still operate with an extremely low load average.

Posted by XxUnkn0wnxX, 12-17-2014, 11:15 PM
Well, the IPs in the domain logs are the real client IP addresses (not so in the access logs), and I have fail2ban using only those logs, as the attack is clearly picked up there just as well. As for the blank user agent and referrer, I am already blocking them with this in my .htaccess file: so if I place this rule set in ModSecurity, it should block those empty requests, like the config in .htaccess does, before they hit my web applications (i.e. front page/forums)?

Now, where it says id:187945987, what is that referring to? I have been doing regex coding for a bit; I know that 411 is an HTTP status code, and after it I see another value similar to id:187945987, e.g.:

"GET / HTTP/1.1" 200 130534
"GET / HTTP/1.1" 200 130528
"GET / HTTP/1.1" 200 130407

I used to get

IP - - [17/Dec/2014:17:59:15 +1100] "GET / HTTP/1.1" 200 130407 "-" "-"

but now it's

IP - - [17/Dec/2014:17:59:15 +1100] "GET / HTTP/1.1" 200 130407 "-" "random user agent"

where the random user agent is a real user agent, but different each time. The only things consistent are the blank referrer and that it's always a GET request to the same directory, /, with a space before and after the /.

Last edited by XxUnkn0wnxX; 12-17-2014 at 11:23 PM.

Posted by XxUnkn0wnxX, 12-17-2014, 11:45 PM
Just to be clear, I would place the rule here? http://puu.sh/dzh1I.png - I want to be sure it works and that I don't mess anything up, as I haven't had much time to experiment with ModSecurity. I saw there were many mod_security conf files within my Apache conf folder. I know for most people the mod_security module is called from the httpd.conf file, but for me there is a mod_security conf file that is called when httpd.conf is loaded, and that conf file loads security_module2, which is ModSec2. There are also ModSec configuration files such as:

modsec2.cpanel.conf
modsec2.user.conf.default
modsec2.conf - I believe this is the main one that loads the module
modsec2.user.conf

I cannot edit the httpd.conf file, as it gets auto-generated each time I edit a setting in cPanel, update Apache, or similar. But there are include files loaded within httpd.conf where I would be able to place any rule for any module I wish, as long as the mod_security module tags surround the rules.

Last edited by XxUnkn0wnxX; 12-17-2014 at 11:50 PM.

Posted by quizknows, 12-18-2014, 02:56 PM
You can add the rule to modsec2.user.conf, which is where the WHM "add custom rule" interface would place it; it is fine to use that interface you linked. Be sure to change all instances of "POST" in the rule I provided to "GET". I agree cPanel's implementation can be a bit confusing, but they are working to add features to make it easier for people; unfortunately, in doing so they've somewhat complicated it for people who already know how to configure it. Basically, httpd.conf (which you should never edit on a cPanel box) calls modsec2.conf. modsec2.conf is populated by EasyApache and also should not need to be edited. You're welcome to edit modsec2.user.conf as needed, which some of the WHM tools do for you; the other WHM tools (like rule whitelisting) use modsec2.cpanel.conf, which I believe also does not need to be edited by hand.

Posted by XxUnkn0wnxX, 12-19-2014, 10:23 AM
quizknows, I tested the rules and they work fine, but they only apply to my home directory. Say I wanted to protect subdirectories or any other file that may come under attack; how would I rewrite the ModSec rule to protect them? At the moment, when I go to my site it blocks the request, but if I go to site/forums it doesn't block requests with no user agent or referrer.

Posted by quizknows, 12-19-2014, 03:37 PM
This is the line limiting it to the home directory:

SecRule REQUEST_URI "^\/$"

If you wanted to, you could block all requests that lack both a UA and a referrer, regardless of URI. This does carry some risk of false positives, though. To do that, you simply remove the URI requirement of the rule, so it would look like this instead: If you wanted it to apply to all request methods, including POST requests, you'd make it like this, with one less condition: Sadly, there's not a lot you can do about GET requests for "/" if they are spoofing valid-looking user agents. The request is going to have the same attributes as a normal visitor who typed in your URL.

Last edited by quizknows; 12-19-2014 at 03:44 PM.
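The two variant rules described above were lost from this archive. Hedged reconstructions, built from the rule details quoted earlier in the thread (empty User-Agent and Referer, deny with status 411; the ids here are hypothetical, and each rule needs a unique one):

```apache
# Variant 1 (reconstruction): the URI condition removed, so any GET
# with neither a User-Agent nor a Referer is denied, site-wide.
SecRule REQUEST_METHOD "^GET$" "chain,phase:2,t:none,deny,status:411,log,id:187945988"
SecRule &REQUEST_HEADERS:User-Agent "@eq 0" "chain"
SecRule &REQUEST_HEADERS:Referer "@eq 0"

# Variant 2 (reconstruction): the method condition dropped as well,
# applying to all request methods including POST.
SecRule &REQUEST_HEADERS:User-Agent "@eq 0" "chain,phase:2,t:none,deny,status:411,log,id:187945989"
SecRule &REQUEST_HEADERS:Referer "@eq 0"
```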


