DOS prevention
J. Wade Michaelis
jwade at userfriendlytech.net
Mon Mar 18 16:49:38 CDT 2013
I really think that I want to go with a simple set of iptables rules, and
here is why:
The only two occasions that the server has hung up were due to DOS attacks
averaging around 8 or 10 requests per second for a minute or more. Also,
hosting this server is not profitable enough to justify a lot of work
installing, configuring, and testing additional packages, or subscribing to
additional third-party services.
It appears that limiting the number of connections accepted from a single
IP in 10 seconds (or similar) could have prevented the two attacks I have
seen from bringing down the server.
So, that brings me to the following questions:
I found this pair of rules at
http://blog.bodhizazen.net/linux/prevent-dos-with-iptables/comment-page-1/#comment-4524
iptables -v -A INPUT -i eth0 -p tcp --syn --match multiport --dports 80 \
    -m recent --rcheck --seconds 5 --hitcount 10 --name HTTP \
    -j LOG --log-prefix "HTTP Rate Limit: "
iptables -v -A INPUT -i eth0 -p tcp --syn --dport 80 \
    -m recent --update --seconds 5 --hitcount 10 --name HTTP -j DROP
Do I need any other rules, or can I use just these two (given that the
server is already behind a hardware firewall)? Will they work as is, or
will I need to adjust them? Do I need additional rules for HTTPS traffic,
or can I change "--dports 80" to "--dports 80,443" to achieve that?
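If it helps, here is roughly what I think the adjusted rules would look
like. This is untested on my end, and the third rule is my own guess from
the man page: as I read it, --rcheck and --update only check the "recent"
list, so a rule with --set is still needed to add addresses to it.

# Log sources that have opened 10 or more connections in the last 5 seconds.
iptables -v -A INPUT -i eth0 -p tcp --syn -m multiport --dports 80,443 \
    -m recent --rcheck --seconds 5 --hitcount 10 --name HTTP \
    -j LOG --log-prefix "HTTP Rate Limit: "
# Drop them (and refresh their "last seen" timestamp).
iptables -v -A INPUT -i eth0 -p tcp --syn -m multiport --dports 80,443 \
    -m recent --update --seconds 5 --hitcount 10 --name HTTP -j DROP
# Otherwise record this new connection in the HTTP list and fall through
# to whatever rule normally accepts web traffic.
iptables -v -A INPUT -i eth0 -p tcp --syn -m multiport --dports 80,443 \
    -m recent --set --name HTTP
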
Any other advice?
Thanks for all of the suggestions!
~ j.
jwade at userfriendlytech.net
On Mon, Mar 18, 2013 at 4:33 PM, Nathan Cerny <ncerny at gmail.com> wrote:
> Obviously you want to address the attacks, but you could also look into a
> more efficient web server.
>
> Apache is great for the feature-list, but I've had much better performance
> using nginx.
> Granted, my experience is much smaller scale - 5-10 users on a highly
> intensive website.
>
> http://en.wikipedia.org/wiki/Nginx
>
>
> On Mon, Mar 18, 2013 at 4:22 PM, Billy Crook <billycrook at gmail.com> wrote:
>
>> It's easy to /say/ that any modern server should be able to handle a
>> few thousand GET requests.
>>
>> The reality is that a single URI may invoke dozens of scripts that you
>> didn't write, which might hit some database as many times, and you
>> can't change them for business or political reasons, even if you are
>> entirely qualified to fix bugs and do performance tuning.
>>
>> When an aggressor, or just some well-intentioned runaway script, harps
>> on one of these URIs, your options as an admin are limited.
>>
>> You can throw more hardware at it, put (and maintain) some caching
>> proxy in front of it, or you can throttle the aggressor. Fail2ban will
>> help you do the latter, and much more. For instance, it becomes
>> realistic to run ssh on its official port (gasp!) if you use fail2ban
>> to cut down on the riff-raff.
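>>
>> As a rough sketch (untested here; the jail name "http-flood" is made
>> up, and filter names, actions, and log paths vary with the fail2ban
>> version and distro, so check your own jail.conf), a jail.local entry
>> aimed at a flood of requests for scripts that don't exist might look
>> something like this:
>>
>> [http-flood]
>> enabled  = true
>> # apache-noscript matches "File does not exist" lines in the error log
>> filter   = apache-noscript
>> action   = iptables-multiport[name=HTTPFlood, port="http,https"]
>> logpath  = /var/log/httpd/error_log
>> maxretry = 30
>> findtime = 60
>> bantime  = 600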
>>
>> As fail2ban starts blocking the sources of the floods, look over the
>> list of addresses, and see if you can identify a business partner. If
>> you can get them to fix their script, all the better.
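>>
>> (Something like "fail2ban-client status http-flood" will print the
>> currently banned addresses for a jail, which makes that review easy;
>> again, the jail name is just the made-up one from above.)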
>>
>> On Mon, Mar 18, 2013 at 3:45 PM, J. Wade Michaelis
>> <jwade at userfriendlytech.net> wrote:
>> > On Mon, Mar 18, 2013 at 2:58 PM, Mark Hutchings
>> > <mark.hutchings at gmail.com> wrote:
>> >>
>> >> You sure it was just an HTTP attack? Several hundred requests in a
>> >> few minutes shouldn't really put it on its knees, unless the server
>> >> is a VPS with low memory/CPU usage limits, or the server itself is
>> >> low on resources.
>> >
>> >
>> > I've gone over my access logs again, and here are the particulars on
>> > the two attacks that caused the server to hang:
>> >
>> > On March 6th, between 4:29:11 and 4:31:40, there were 1453 requests
>> > from a single IP, and all were 'GET' requests for a single page (one
>> > that does exist).
>> >
>> > On March 14th, between 15:15:19 and 15:16:29, there were 575 requests
>> > from the one IP address. These were all different GET requests, nearly
>> > all resulting in 404 errors. Some appear to be WordPress URLs. (The
>> > website on my server is a Magento commerce site.)
>> >
>> > Here are some other example requests from the attack:
>> >
>> > GET /?_SERVER[DOCUMENT_ROOT]=http://google.com/humans.txt? HTTP/1.1
>> > GET /?npage=1&content_dir=http://google.com/humans.txt%00&cmd=ls HTTP/1.1
>> > GET /A-Blog/navigation/links.php?navigation_start=http://google.com/humans.txt? HTTP/1.1
>> > GET /Administration/Includes/deleteUser.php?path_prefix=http://google.com/humans.txt HTTP/1.1
>> > GET /BetaBlockModules//Module/Module.php?path_prefix=http://google.com/humans.txt HTTP/1.1
>> > GET /admin/header.php?loc=http://google.com/humans.txt HTTP/1.1
>> >
>> > I don't recognize most of these, but the pattern indicates to me that
>> > these are most likely 'standard' URLs in various CMSs.
>> >
>> > As for the server configuration, it is a dedicated server (only one
>> > website) running on VMware ESXi 5.0.
>> >
>> > CentOS 6.3
>> > 8 virtual CPU cores (2 quad-core CPUs)
>> > 4096 MB memory
>> >
>> > Other VMs on the same host appeared to be unaffected by the attack.
>> >
>> > Thanks,
>> > ~ j.
>> > jwade at userfriendlytech.net
>> >
>
>
>
> --
> Nathan Cerny
>
>
> -------------------------------------------------------------------------------
> "I have always wished that my computer would be as easy to use as my
> telephone. My wish has come true. I no longer know how to use my telephone."
> --Bjarne Stroustrup, Danish computer scientist
>
> -------------------------------------------------------------------------------
>