I’ve never seen this before.
Are you using NTP? Is it possible the date was incorrect when Suri launched and then updated while Suri was executing?
Is it reproducible?
NTP - Yes.
Is it possible the date was incorrect at launch? Perhaps, though NTP fires very early in the init process.
Is it reproducible - Yes.
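If the clock jump at startup does turn out to matter, one possible workaround would be to hold the Suricata start until the clock looks sane. A rough sketch only (hypothetical, not an official fix; the cutoff epoch is arbitrary, anything later than the firmware build date would do):

# wait until the system clock is past 2022-01-01 00:00:00 UTC (epoch 1640995200)
# before starting suricata
while [ "$(date -u +%s)" -lt 1640995200 ]; do
    sleep 1
done
service suricata start

Here is the reproduction: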
root@OpenWrt:/# date -u
Sat Mar 26 16:53:11 UTC 2022
root@OpenWrt:/# service suricata start
root@OpenWrt:/# [    7.069001] device eth0 entered promiscuous mode
root@OpenWrt:/# date -u
Sat Mar 26 16:55:40 UTC 2022
root@OpenWrt:/# tail -50 /var/log/suricata/stats.log
decoder.ipv4 | Total | 27
decoder.ipv6 | Total | 3
decoder.ethernet | Total | 56
decoder.udp | Total | 27
decoder.icmpv6 | Total | 3
decoder.avg_pkt_size | Total | 155
decoder.max_pkt_size | Total | 384
flow.udp | Total | 5
flow.icmpv6 | Total | 2
flow.wrk.spare_sync_avg | Total | 100
flow.wrk.spare_sync | Total | 2
decoder.event.ipv6.zero_len_padn | Total | 2
app_layer.flow.failed_udp | Total | 5
flow.mgr.full_hash_pass | Total | 1
flow.spare | Total | 9800
flow.mgr.rows_maxlen | Total | 1
flow.mgr.flows_checked | Total | 1
flow.mgr.flows_notimeout | Total | 1
tcp.memuse | Total | 1212416
tcp.reassembly_memuse | Total | 196608
flow.memuse | Total | 7394304
------------------------------------------------------------------------------------
Date: 3/26/2022 -- 16:55:54 (uptime: -24855d, -3h -14m -8s)
------------------------------------------------------------------------------------
Counter | TM Name | Value
------------------------------------------------------------------------------------
capture.kernel_packets | Total | 65
decoder.pkts | Total | 65
decoder.bytes | Total | 9439
decoder.ipv4 | Total | 30
decoder.ipv6 | Total | 4
decoder.ethernet | Total | 65
decoder.udp | Total | 30
decoder.icmpv6 | Total | 4
decoder.avg_pkt_size | Total | 145
decoder.max_pkt_size | Total | 384
flow.udp | Total | 6
flow.icmpv6 | Total | 2
flow.wrk.spare_sync_avg | Total | 100
flow.wrk.spare_sync | Total | 2
decoder.event.ipv6.zero_len_padn | Total | 2
app_layer.flow.failed_udp | Total | 6
flow.mgr.full_hash_pass | Total | 1
flow.spare | Total | 9800
flow.mgr.rows_maxlen | Total | 1
flow.mgr.flows_checked | Total | 1
flow.mgr.flows_notimeout | Total | 1
tcp.memuse | Total | 1212416
tcp.reassembly_memuse | Total | 196608
flow.memuse | Total | 7394304
root@OpenWrt:/#
I don’t know what practical effect (if any) the time being off like that might have, though. Since it was certainly not intended, I wanted to point it out.
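For what it’s worth, the offset in that uptime line is suspiciously exact. Checking the arithmetic:

# 24855 days, 3 hours, 14 minutes, 8 seconds, expressed in seconds:
echo $(( 24855*86400 + 3*3600 + 14*60 + 8 ))
# prints 2147483648, which is exactly 2^31

So the negative uptime looks like a signed 32-bit seconds value wrapping, rather than a real clock delta. Just an observation, not a diagnosis.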
As an update, I did finally get around to running some tests, and suricata6 actually seems to be working, even with all the memcap calls removed.
root@OpenWrt:~# cat /var/log/suricata/fast.log | grep "Priority: 2"
04/04/2022-03:18:27.367795 [**] [1:2013028:6] ET POLICY curl User-Agent Outbound [**] [Classification: Attempted Information Leak] [Priority: 2] {TCP} 192.168.200.249:43782 -> 99.84.248.78:80
04/04/2022-03:18:27.381182 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 99.84.248.78:80 -> 192.168.200.249:43782
These alerts were generated by the curl test against testmyids.com.
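For anyone who wants to repeat it, the test is just a plain fetch: testmyids.com serves a canned "uid=0(root) ..." body that matches sid 2100498, and curl's default User-Agent trips sid 2013028 on the way out.

# trigger the two alerts above with a plain curl of testmyids.com
curl http://testmyids.com/
# then look for the new entries
grep "Priority: 2" /var/log/suricata/fast.log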
Update @Jeff_Lucovsky - This no longer seems to be an issue in my 6.0.5 build. Whether it was something you all did, something I changed on my end, or a combination of the two, I don’t know, but I no longer need to comment out the memcap fields.
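For context, "commenting out the memcap fields" amounts to something like the following (a hypothetical one-liner, not my exact procedure; it comments every memcap: key in the config, e.g. flow.memcap, stream.memcap, and stream.reassembly.memcap, so Suricata falls back to its built-in defaults):

# back up the config, then comment out every memcap: line in suricata.yaml
cp /etc/suricata/suricata.yaml /etc/suricata/suricata.yaml.bak
sed -i -e 's/^\( *\)memcap:/\1# memcap:/' /etc/suricata/suricata.yaml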