TL;DR - What in the world is GPL ATTACK_RESPONSE command completed
and why does it kill HTTP traffic?
I’m setting up a new server, hosting content over multiple protocols. It’s all Open Source content and some of it is quite “big”. Everything looked good while I was testing, but we quickly discovered a problem. Using the same Linux distro ISO, 1.8 GB in size, I was able to confirm that I can:
- successfully download over SSH
- successfully download over HTTPS
- successfully download over FTP
- successfully download over rsync
BUT! When pulling over HTTP, the transfer stalls at exactly 781,197,296 bytes every time. If I try to resume the download (with wget -c or the like), it just sits there and never resumes. It will, however, happily start a fresh download, get exactly as far, and hang again. I CAN complete the download over HTTPS, but NOT over HTTP.
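One debugging step worth showing (a sketch; the path and offset below are just my setup, substitute your own): dump the bytes around the stall point to see what the last data delivered actually looks like.

```python
# Print a small hex/ASCII dump of a file region around a given offset,
# to inspect what sits at the exact point where the transfer stalls.

def dump_region(path, offset, before=32, after=32, width=16):
    start = max(0, offset - before)
    with open(path, "rb") as f:
        f.seek(start)
        data = f.read(before + after)
    lines = []
    for i in range(0, len(data), width):
        row = data[i:i + width]
        hexpart = " ".join(f"{b:02x}" for b in row)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in row)
        lines.append(f"{start + i:>12,}  {hexpart:<{width * 3}} {text}")
    return "\n".join(lines)

# e.g. print(dump_region("/srv/www/distro.iso", 781_197_296))  # hypothetical path
```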
I’m cutting out a heck of a lot of debugging and log tracing, but I finally figured it out. This was showing up in my Suricata log:
[Drop] [**] [1:2100494:12] GPL ATTACK_RESPONSE command completed [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP}
I traced that back to the Emerging Threats ruleset file rules/emerging-attack_response.rules, which contains this rule:
alert http $HTTP_SERVERS any -> $EXTERNAL_NET any (msg:"GPL ATTACK_RESPONSE command completed"; flow:established; content:"Command completed"; nocase; reference:bugtraq,1806; classtype:bad-unknown; sid:2100494; rev:12; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
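As I read it, this rule fires on the literal bytes "Command completed" appearing anywhere in established HTTP traffic from my server, case-insensitively (`nocase`). My working theory is that the ISO image itself happens to contain that byte sequence somewhere near the stall offset, so Suricata in inline/drop mode kills the packet mid-transfer; that would also explain why HTTPS is fine, since Suricata can't see the string through TLS. A quick sketch to test the theory (the ISO path at the bottom is a placeholder):

```python
# Scan a file for the case-insensitive byte pattern the rule matches on,
# reading in chunks with overlap so matches spanning a chunk boundary are found.
import re

NEEDLE = b"command completed"
PATTERN = re.compile(re.escape(NEEDLE), re.IGNORECASE)

def find_pattern_offsets(path, chunk_size=1 << 20):
    """Yield the absolute byte offset of every case-insensitive match in the file."""
    overlap = len(NEEDLE) - 1
    offset = 0      # absolute offset of the start of `window`
    window = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            window += chunk
            for m in PATTERN.finditer(window):
                yield offset + m.start()
            # keep a tail shorter than the needle so a match split across
            # reads is found next time without being reported twice
            keep = window[-overlap:]
            offset += len(window) - len(keep)
            window = keep

# e.g.: for off in find_pattern_offsets("/srv/www/distro.iso"):  # hypothetical path
#           print(f"match at byte {off:,}")
```

If the theory holds, a match should turn up just past 781,197,296 bytes, i.e. inside the first packet Suricata drops.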
In the past when I’ve had to look up a rule, ET had it in their docs, but I can’t find this one anywhere in their documentation. Trying to track down the SID has led to dead ends, and Bugtraq ID 1806 turns out to be a Microsoft IIS issue that doesn’t look relevant in the slightest.
The hits I’m getting from online searches aren’t relevant or helpful so far.
Can anyone help me figure out what is going on? I’d really like to know why this rule is killing valid HTTP traffic.
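For what it’s worth, one thing I’m considering: if this really is just a false positive on the file contents, Suricata can suppress a single SID without editing the ET rules file. A threshold.config entry like this should stop the drop (a sketch; check where the threshold-file option in your suricata.yaml points):

```
# threshold.config: suppress "GPL ATTACK_RESPONSE command completed" globally
suppress gen_id 1, sig_id 2100494
```

I’d still like to understand the rule before I blanket-suppress it, though.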