Help understanding a rule

TL;DR - What in the world is “GPL ATTACK_RESPONSE command completed” and why does it kill HTTP traffic?

I’m setting up a new server and serving content over multiple protocols. It’s all Open Source content and some of it is quite “big”. Well, it all looked good when I was testing, but we quickly discovered a problem. Using the same Linux distro ISO, 1.8 GB in size, I was able to verify that I can:

  • successfully download over SSH
  • successfully download over HTTPS
  • successfully download over FTP
  • successfully download over rsync

BUT! When trying to pull it over HTTP, the transfer gets 781,197,296 bytes in every time and then hangs. If I try to resume the download (with wget -c or the like), it just sits there and never resumes. However, it will happily start a new download, which gets exactly as far and then hangs again. I CAN complete the download over HTTPS, but NOT over HTTP.

I’m cutting out a heck of a lot of debugging and log tracing, but I finally figured it out. This was showing up in my Suricata log:
[Drop] [**] [1:2100494:12] GPL ATTACK_RESPONSE command completed [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP}

I traced that back to the Emerging Threats ruleset file rules/emerging-attack_response.rules, which contains this:

alert http $HTTP_SERVERS any -> $EXTERNAL_NET any (msg:"GPL ATTACK_RESPONSE command completed"; flow:established; content:"Command completed"; nocase; reference:bugtraq,1806; classtype:bad-unknown; sid:2100494; rev:12; metadata:created_at 2010_09_23, updated_at 2010_09_23;)

In the past when I’ve had to look up a rule, ET had it in their docs, but I can’t find this rule anywhere in their documentation. Trying to track down the sid has led to dead ends, and the Bugtraq reference (1806) points to a Microsoft IIS issue that doesn’t look relevant in the slightest.

The hits I’m getting from online searches aren’t really relevant or helpful so far.

Can anyone help me figure out what is going on? I’d really like to know why this rule is killing valid HTTP traffic.

The rule is fairly trivial. It looks for the “Command completed” string in outbound HTTP responses. The Bugtraq reference seems to be from around the year 2000, and the referenced material seems to be lost. My guess is that some webshell or other malware was returning the “Command completed” string after executing commands.

I would recommend having a look at Feedback if you think this is a false positive.

The content of the rule answers your questions. It only detects/prevents HTTP traffic that contains “Command completed”, so the rule is triggered and the traffic is blocked only when you download over HTTP. If you replaced “alert http” with “alert tcp”, it would also detect/block other plaintext (unencrypted) download traffic such as FTP or telnet.
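
For reference, that variant would look roughly like the following. Only the protocol keyword changes from the rule quoted above; this is just a sketch for a local rules file, and a copy loaded alongside the original would need its own sid:

alert tcp $HTTP_SERVERS any -> $EXTERNAL_NET any (msg:"GPL ATTACK_RESPONSE command completed"; flow:established; content:"Command completed"; nocase; reference:bugtraq,1806; classtype:bad-unknown; sid:2100494; rev:12; metadata:created_at 2010_09_23, updated_at 2010_09_23;)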

Thanks for the replies. I guess I’m not sure what is sending “Command completed”, nor why. The file is still mid-download, so I find it odd that three web servers all send it at the same point (I tested on lighttpd, Apache, and nginx). Any suggestions on what I might be able to do to track it down further? I do feel like this is a false positive, but I don’t feel like I have enough information to show what is going on.
Thanks!

Try dumping a pcap of the data transfer, for example with tcpdump, and search for “command completed” inside the captured data. My guess is that the file itself has “command completed” somewhere inside it.
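
You could also check that guess directly against the file on disk rather than a capture. Here is a minimal sketch in Python (“distro.iso” is a placeholder for whichever ISO you are actually serving; the search is case-insensitive to mirror the rule’s nocase):

import mmap
import re

# Case-insensitive byte pattern, matching what the rule's content + nocase looks for.
NEEDLE = re.compile(rb"command completed", re.IGNORECASE)

# mmap lets us scan a multi-gigabyte file without reading it all into memory.
with open("distro.iso", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        for m in NEEDLE.finditer(mm):
            print("match at byte offset", m.start())

If a match is reported somewhere near the 781,197,296-byte mark, that would line up with where the HTTP transfer stalls and would point strongly to a false positive on the file’s contents.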