FreeBSD - Suricata IDS stops detection within 3 mins of the service start

I have the following setup:
FreeBSD - 12.2 (running on a VM with 4 NICs)
Suricata - 6.0.2

Suricata IDS config in rc.conf
suricata_enable="YES"
suricata_interface="igb1"
suricata_flags="-D -v"
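
For reference, this is how I verify what the service actually picks up (a quick sketch, assuming the rc.d script installed by the security/suricata port):

sysrc suricata_enable suricata_interface suricata_flags   # show the effective rc.conf values
service suricata status                                    # confirm the daemon is running and print its PID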

ifconfig igb1
igb1: flags=28943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,PPROMISC> metric 0 mtu 1500
options=810098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,VLAN_HWFILTER>
ether 00:0c:29:ff:fc:ca
inet 10.10.2.12 netmask 0xff000000 broadcast 10.255.255.255
media: Ethernet autoselect (1000baseT )
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
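
One thing I have not ruled out yet is NIC offloading; the usual step for a sniffing interface would be something like this (a sketch using standard FreeBSD ifconfig(8) flags; whether it affects this issue is only an assumption):

# turn off hardware offloads on the capture interface so Suricata sees raw frames
ifconfig igb1 -rxcsum -txcsum -tso -lro -vlanhwtag -vlanhwfilter
# to make it persistent, the same flags could be appended to the ifconfig_igb1 line in rc.conf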

Suricata PID
sudo ps auxww | grep suricata
root 20168 14.9 1.4 160684 118596 - Ss 12:17 0:17.83 /usr/local/bin/suricata -D -v --pcap=igb1 --pidfile /var/run/suricata.pid -c /usr/local/etc/suricata/suricata.yaml

Once I start the Suricata service, I can see alerts generated in the fast.log file.
However, after about 2 to 3 minutes, no more alerts are written to this file,
while data keeps updating in the eve.json and stats.log files.
Please help me troubleshoot this issue so that alert detection continues without interruption.
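
This is roughly how I am watching it (a sketch; I am assuming the default /var/log/suricata log directory and the detect.alert counter that stats.log prints):

# fast.log side: does the file keep growing?
ls -l /var/log/suricata/fast.log
wc -l /var/log/suricata/fast.log
# engine side: does the detect.alert counter in stats.log keep increasing?
grep 'detect.alert' /var/log/suricata/stats.log | tail -n 5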

ls -l
-rw-r--r-- 1 root wheel 5236661 May 13 12:20 alert-debug.log
-rw-r----- 1 root wheel 3874597 May 13 12:28 eve.json
-rw-r----- 1 root wheel 8536 May 13 12:20 fast.log
-rw-r----- 1 root wheel 10183 May 13 12:20 http.log
-rw-r----- 1 root wheel 345277 May 13 12:28 stats.log
-rw-r--r-- 1 root wheel 225528 May 13 12:17 suricata.log
-rw-r----- 1 root wheel 27359 May 13 12:21 tls.log

Could you show the content of suricata.log at that moment?

Below is the log at that moment.

13/5/2021 -- 12:17:00 - - This is Suricata version 6.0.2 RELEASE running in SYSTEM mode
13/5/2021 -- 12:17:00 - - CPUs/cores online: 4
13/5/2021 -- 12:17:00 - - Found an MTU of 1500 for 'igb1'
13/5/2021 -- 12:17:00 - - Found an MTU of 1500 for 'igb1'
13/5/2021 -- 12:17:00 - - [ERRCODE: SC_WARN_ERSPAN_CONFIG(329)] - ERSPAN Type I is no longer configurable and it is always enabled; ignoring configuration setting.
13/5/2021 -- 12:17:00 - - fast output device (regular) initialized: fast.log
13/5/2021 -- 12:17:00 - - eve-log output device (regular) initialized: eve.json
13/5/2021 -- 12:17:00 - - http-log output device (regular) initialized: http.log
13/5/2021 -- 12:17:00 - - tls-log output device (regular) initialized: tls.log
13/5/2021 -- 12:17:00 - - alert-debug output device (regular) initialized: alert-debug.log
13/5/2021 -- 12:17:00 - - stats output device (regular) initialized: stats.log
13/5/2021 -- 12:17:00 - - Syslog output initialized
13/5/2021 -- 12:17:00 - - [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - No output module named drop
13/5/2021 -- 12:17:00 - - Running in live mode, activating unix socket
13/5/2021 -- 12:17:00 - - 2 rule files processed. 43 rules successfully loaded, 0 rules failed
13/5/2021 -- 12:17:00 - - Threshold config parsed: 0 rule(s) found
13/5/2021 -- 12:17:00 - - 43 signatures processed. 2 are IP-only rules, 0 are inspecting packet payload, 41 inspect application layer, 0 are decoder event only
13/5/2021 -- 12:17:00 - - Using 1 live device(s).
13/5/2021 -- 12:17:00 - - using interface igb1
13/5/2021 -- 12:17:01 - - running in 'auto' checksum mode. Detection of interface state will require 1000ULL packets
13/5/2021 -- 12:17:01 - - Found an MTU of 1500 for 'igb1'
13/5/2021 -- 12:17:01 - - Set snaplen to 1524 for 'igb1'
13/5/2021 -- 12:17:01 - - RunModeIdsPcapAutoFp initialised
13/5/2021 -- 12:17:01 - - Running in live mode, activating unix socket
13/5/2021 -- 12:17:01 - - Using unix socket file '/var/run/suricata/suricata-command.socket'
13/5/2021 -- 12:17:01 - - all 5 packet processing threads, 4 management threads initialized, engine started.
13/5/2021 -- 12:17:11 - - No packets with invalid checksum, assuming checksum offloading is NOT used
13/5/2021 -- 13:15:05 - - Signal Received. Stopping engine.
13/5/2021 -- 13:16:06 - - [ERRCODE: SC_ERR_FATAL(171)] - Engine unable to disable detect thread - "RX#01-igb1". Killing engine

That's nearly 1 hour instead of 3 minutes. I would check the other system logs to see what might have sent the signal.
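
If it happens again, it would also be worth checking whether one of the packet threads is wedged, since the engine could not disable the detect thread on shutdown. Something along these lines on FreeBSD (procstat is in the base system; the PID is taken from your ps output):

# list Suricata's threads and their states (look for one stuck in an unusual wait channel)
procstat -t 20168
# dump the kernel stacks of all threads to see where a stuck thread is blocked
procstat -kk 20168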

The logs indicate that I stopped the service after 1 hour. The service keeps running and the eve.json file is updated every second; it is just that alerts stop being written to the fast.log file within 3 minutes of start. In the first 2-3 minutes I can see many alerts added to the fast.log file.

Anyway, I have another server where this issue does not occur. I just wanted to find out whether anyone has faced a similar problem and what the root cause could be. I am setting up one more server and will see whether I face the same issue again.

Thanks for trying to look into it.

Is it only this one log file that receives no updates? So stats.log still updates, and eve.json as well?

Yes, that's right.
The files stats.log and eve.json receive updates, but the other logs like fast.log and http.log are not updated.
I have made sure to access websites and run pings that match the rules, so alerts should be generated, but they aren't.
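
For reference, the kind of test I do boils down to something like this (a sketch; local.rules is just an example file name and would have to be listed under rule-files in suricata.yaml, and the rules path depends on default-rule-path):

# add a trivial test rule (sid is arbitrary)
echo 'alert icmp any any -> any any (msg:"LOCAL TEST ICMP"; sid:1000001; rev:1;)' >> /usr/local/etc/suricata/rules/local.rules
# pick up the rule without restarting, via the unix socket client
suricatasc -c reload-rules
# generate matching traffic from another host
ping -c 3 10.10.2.12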

But do you see those (http) events and alerts in eve.json at the same time?

Yes, I see them.
I see that the events are logged in eve.json but not added to fast.log.
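A rough way I compare them (a sketch, assuming the default /var/log/suricata paths):

# alerts the engine has written to eve.json since start
grep -c '"event_type":"alert"' /var/log/suricata/eve.json
# lines in fast.log over the same period; the counts diverge once fast.log stalls
wc -l /var/log/suricata/fast.log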

Hi,

I just registered to say that I have the exact same problem. I'm also on FreeBSD 12.2-RELEASE-p7 with the igb driver, but I also seem to get this issue on lo0.

Also: stopping (signal handling) is immediate as long as there has been no detection hit, it seems, or at least for a while. Otherwise, it takes a while before the "Engine unable to disable detect thread" message comes along after Ctrl+C on a foreground process. Kill on a daemonized process seems to lead nowhere.

Reverting to Suricata 5.0.6 in the same jail, with the same configuration and rules: everything works. And Ctrl+C on a foregrounded process returns immediately:

...
1/6/2021 -- 21:52:21 - <Config> - AutoFP mode using "Hash" flow load balancer
1/6/2021 -- 21:52:21 - <Info> - Using 1 live device(s).
1/6/2021 -- 21:52:21 - <Info> - using interface igb1
1/6/2021 -- 21:52:21 - <Perf> - igb1: disabling txcsum offloading
1/6/2021 -- 21:52:21 - <Perf> - igb1: disabling txcsum6 offloading
1/6/2021 -- 21:52:21 - <Error> - [ERRCODE: SC_ERR_SYSCALL(50)] - Unable to set caps for iface "igb1": Operation not permitted
1/6/2021 -- 21:52:21 - <Info> - running in 'auto' checksum mode. Detection of interface state will require 1000ULL packets
1/6/2021 -- 21:52:21 - <Info> - Found an MTU of 1500 for 'igb1'
1/6/2021 -- 21:52:21 - <Info> - Set snaplen to 1524 for 'igb1'
1/6/2021 -- 21:52:21 - <Info> - RunModeIdsPcapAutoFp initialised
1/6/2021 -- 21:52:21 - <Config> - using 1 flow manager threads
1/6/2021 -- 21:52:21 - <Config> - using 1 flow recycler threads
1/6/2021 -- 21:52:21 - <Info> - Running in live mode, activating unix socket
1/6/2021 -- 21:52:21 - <Info> - Using unix socket file '/var/run/suricata/suricata-command.socket'
1/6/2021 -- 21:52:21 - <Info> - Created socket directory /var/run/suricata/
1/6/2021 -- 21:52:21 - <Notice> - all 9 packet processing threads, 4 management threads initialized, engine started.
1/6/2021 -- 21:55:09 - <Info> - Less than 1/10th of packets have an invalid checksum, assuming checksum offloading is NOT used (40/1000)

kind regards,
Jeroen

@jeroenp Thanks for confirming the issue and the workaround of downgrading the version.
I can also confirm that 5.x works fine, but the 6+ versions have this issue.
If you find a solution for the 6+ versions later on, please post it here. It will be a big help to me and to others facing this strange version-specific issue with no clues anywhere about a solution.
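
For anyone staying on 5.x as a workaround, the package can be locked so a routine pkg upgrade does not pull 6.x back in (a sketch; the package comes from security/suricata in the ports tree):

# show the currently installed version
pkg info suricata
# prevent pkg upgrade from replacing it until the bug is fixed
pkg lock -y suricata
# later, once a fixed 6.x is available:
# pkg unlock -y suricata && pkg upgrade suricata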

I would encourage you to submit a detailed bug report (based on this forum post) in our Redmine: https://redmine.openinfosecfoundation.org/. Especially if it's reproducible between 5.0.x and 6.0.x, there should be a way to track it down.

I am seeing this same behavior with Suricata 6.0.2 on pfSense (which is based on FreeBSD 12.2-STABLE). I opened a bug report on Redmine back on May 6, 2021: https://redmine.openinfosecfoundation.org/issues/4478. And when running the interface in IPS mode using netmap between the host stack and a NIC, all traffic on the interface stops. Nothing passes, and even stranger, nothing is logged anywhere (not in Suricata's log, not to the console, and not to the system log). The exact same configuration on the exact same machine works fine with Suricata 5.0.6 (or any 5.x Suricata binary).