Implement Suricata-update for multiple servers

We plan to deploy Suricata at different office locations, around 10 servers in total. What is the best way to run suricata-update on all of them? Should we open internet access for all 10 servers, or can we pull updates on one server and push them to the other servers internally? Does the ‘suricata-update’ command support fetching signatures from an internal network?

please help

Suricata-Update has no specific support for multiple machines itself, but nothing prevents you from building your own method on top of it.

You could give each sensor access to the internet. 10 servers hitting rule servers isn’t much to worry about.

Suricata-Update can pull from internal servers, provided they offer rules over http(s). See “add-source - Add a source by URL” in the suricata-update 1.2.1 documentation, where you can provide your own URL.

file:// URLs should also work, so you could distribute rules to machines however you want, then point Suricata-Update at a file:///path/to/rules.tar.gz.
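As a concrete sketch, registering an internal http(s) source or a local tarball with `add-source` could look like this (the hostname and paths below are invented examples, and the commands are guarded so the snippet is safe to paste on a machine without suricata-update installed):

```shell
# Example source URLs -- hostname and paths are made up; use your own.
HTTP_SOURCE="https://rules.internal.example.com/rules/emerging.rules.tar.gz"
FILE_SOURCE="file:///var/lib/rules/emerging.rules.tar.gz"

# Register the sources, then fetch and process rules from everything configured.
if command -v suricata-update >/dev/null 2>&1; then
    suricata-update add-source internal-rules "$HTTP_SOURCE"
    suricata-update add-source local-rules "$FILE_SOURCE"
    suricata-update
fi
```

Either style works; the file:// variant just assumes you have already copied the tarball onto the sensor by some other means (scp, rsync, etc.).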

I’ve heard of others who just run Suricata-Update on a single machine and push out rules with Ansible or other orchestration tools, which then take care of telling Suricata to reload the rules.

Ultimately it’s a tool that takes rules from one or more sources and outputs a processed version of them according to your configuration.

I hope others chime in on how they are doing this as I know they are out there.


Thanks for clarifying.

Being a bit new to Suricata, I might be asking some basics as well.

I also want to confirm: to run ‘suricata-update’ regularly, do we need a cron job, or is there another way? And what is the recommended interval for rule refreshes?

You will need to schedule it yourself, with cron or something similar. Hourly should be fine… even daily is probably good enough.
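For example, a cron entry along these lines would do it (the binary path is an example — check yours with `which suricata-update` — and picking an odd minute avoids every sensor hitting the rule servers at exactly the top of the hour):

```
# /etc/cron.d/suricata-update -- hourly rule refresh
15 * * * * root /usr/bin/suricata-update
```

If the run succeeds, suricata-update’s configured reload-command (or your own follow-up step) takes care of telling Suricata to pick up the new rules.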

I’ve been setting up some automation with Ansible to push rulesets to sensors based on an inventory file with some simple variables that determine which sensors get the updates. The playbook reaches out to each sensor to collect the Suricata version, then runs suricata-update once for each version, then pushes the relevant ruleset to the sensors. Finally, it sends the kill -USR2 signal to reload the rules.
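A heavily simplified sketch of that kind of playbook — the host group, variable name, and paths here are invented for illustration, not the actual playbook:

```yaml
# Hypothetical sketch: push a pre-built ruleset to sensors and reload Suricata.
- hosts: suricata_sensors
  become: true
  tasks:
    - name: Copy the ruleset built on the update host for this Suricata version
      ansible.builtin.copy:
        src: "rulesets/{{ suricata_version }}/suricata.rules"
        dest: /var/lib/suricata/rules/suricata.rules
      notify: Reload suricata

  handlers:
    - name: Reload suricata
      ansible.builtin.command: pkill -USR2 -x suricata
```

The handler only fires when the copied file actually changed, so an unchanged ruleset doesn’t trigger a needless reload.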

I have another customer that will be deploying 1000s of Corelight Software Sensors with Suricata in Docker. For this customer I’m writing a better method. Suricata-update will run on a dedicated host (the suricata-update host). It can use a versions file or poll the sensors for their versions, then run suricata-update once for each version. Instead of pushing the rulesets to each sensor, it will store them in an HTTP-accessible location by version, along with an md5 hash file of the ruleset. Then each sensor will run suricata-update and pull the version-specific ruleset from the suricata-update host.
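The sensor-side fetch-and-verify step could be sketched roughly like this (the update-host URL layout follows the description above, but the version number and paths are placeholder examples, not finished code):

```shell
# Sketch: fetch the version-specific ruleset and its md5 file from the
# update host, then verify before handing it to suricata-update.

RULESET="suricata.rules"

# Fetch (commented out here -- substitute your real update host and version):
#   VER="6.0.10"
#   curl -fsSO "http://suricata-update-host/suricata-rulesets/$VER/$RULESET"
#   curl -fsSO "http://suricata-update-host/suricata-rulesets/$VER/$RULESET.md5"

# Succeeds only when the file matches its published checksum; expects
# "<file>" and "<file>.md5" to be present in the current directory.
verify_ruleset() {
    md5sum -c "$1.md5" >/dev/null 2>&1
}
```

A sensor would call `verify_ruleset suricata.rules` after downloading and refuse to load the ruleset on a mismatch, which catches truncated or corrupted transfers.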

The update.yaml file is very basic. The individual sensors do not have enable, disable, or modify configuration because that was already taken care of on the update host. They also do not need to test the ruleset.

I add the following test command so they don’t have to use the --no-test when update runs.
All of the options have environment variables so they can easily be changed within the container by simply passing in an environment variable. At container startup, a simple j2 command runs to convert it to a normal update.yaml file.

For the reload-command, if you run suricata-update and Suricata is not running, it will throw an error. That’s why I check to see if Suricata is running first. I’ve had PATH issues in some environments, so I also use which to get the full path of pidof, then use pidof to get the process ID of Suricata.

```yaml
test-command: {{ UPDATE_TEST_COMMAND|d('echo "Ruleset tested on Suricata-update Host - Not running local tests"') }}
reload-command: {{ UPDATE_RELOAD_COMMAND|d('(! $(which pidof) corelight-suricata) || kill -USR2 $($(which pidof) corelight-suricata)') }}
sources:
  - {{ UPDATE_SOURCE|d('http://suricata-update-host/suricata-rulesets/%(__version__)s/suricata.rules') }}
```
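The Jinja2 `|d()` defaulting behaves like shell’s `${VAR:-default}` expansion; a tiny illustration of the override idea (the variable name matches the template, but the command values are simplified examples):

```shell
# With no environment override, the built-in default wins:
unset UPDATE_TEST_COMMAND
echo "test-command: ${UPDATE_TEST_COMMAND:-echo ruleset-already-tested}"
# -> test-command: echo ruleset-already-tested

# Passing an environment variable into the container replaces it:
UPDATE_TEST_COMMAND="suricata -T -S suricata.rules"
echo "test-command: ${UPDATE_TEST_COMMAND}"
# -> test-command: suricata -T -S suricata.rules
```

So changing a sensor’s behavior is just a matter of setting one environment variable at container start — no file edits on the sensor.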

I’m happy to share more details if you want.