Receiving several stream errors with Suricata 5.0.3

Today I updated my FreeBSD 12.1 (fully patched) host to Suricata 5.0.3. After that, I enabled the anomaly output and I am receiving a lot of entries like this:

{"timestamp":"2020-05-05T07:14:02.301024+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.4","src_port":49394,"dest_ip":"172.22.55.5","dest_port":443,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.pkt_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.301024+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.4","src_port":49394,"dest_ip":"172.22.55.5","dest_port":443,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.est_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.307457+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.5","src_port":443,"dest_ip":"172.22.55.4","dest_port":49394,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.pkt_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.307457+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.5","src_port":443,"dest_ip":"172.22.55.4","dest_port":49394,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.est_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.307872+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.4","src_port":49394,"dest_ip":"172.22.55.5","dest_port":443,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.pkt_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.307872+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.4","src_port":49394,"dest_ip":"172.22.55.5","dest_port":443,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.est_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.454401+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.5","src_port":443,"dest_ip":"172.22.55.4","dest_port":49394,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.pkt_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.454401+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.5","src_port":443,"dest_ip":"172.22.55.4","dest_port":49394,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.est_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.455095+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.4","src_port":49394,"dest_ip":"172.22.55.5","dest_port":443,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.pkt_invalid_ack"}}

{"timestamp":"2020-05-05T07:14:02.455095+0000","flow_id":608287902755297,"in_iface":"vtnet2","event_type":"anomaly","src_ip":"172.22.55.4","src_port":49394,"dest_ip":"172.22.55.5","dest_port":443,"proto":"TCP","community_id":"1:+WREAUJoDuoz9NdiHyesC68d1JU=","anomaly":{"type":"stream","event":"stream.est_invalid_ack"}}
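For reference, these records come from the eve-log anomaly output; this is a sketch of the block I enabled in suricata.yaml, trimmed to the relevant keys (the exact sub-types in my file may differ slightly):

outputs:
  - eve-log:
      enabled: yes
      filetype: regular
      filename: eve.json
      types:
        - anomaly:
            # emit records for decoder, stream-engine and app-layer anomalies
            enabled: yes
            types:
              decode: yes
              stream: yes
              applayer: yes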

I am using netmap capture in Suricata … My interface configuration is pretty simple:

netmap:
  - interface: vtnet2
    checksum-checks: no
  - interface: vtnet3
    checksum-checks: no
  - interface: vtnet4
    checksum-checks: no
  - interface: vtnet5
    checksum-checks: no
  - interface: vtnet6
    checksum-checks: no

And I have disabled all off-loading options on the network interfaces (roughly as sketched below) … Any ideas?
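For completeness, the offloads were disabled more or less like this (a sketch; the exact flags depend on what the vtnet driver actually advertises in ifconfig, and vlanhwtag may not apply here):

# check which capabilities the capture interface currently has enabled
ifconfig vtnet2

# turn off checksum, TSO, LRO and VLAN hardware offloads on each capture interface
ifconfig vtnet2 -rxcsum -txcsum -tso -lro -vlanhwtag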

These are usually the result of some packet capture issue like packet loss, unbalanced VLAN tagging, MTU issues, etc. Can you share a dump of your stats.log here?

Of course, here it is:


Date: 5/5/2020 -- 13:15:45 (uptime: 0d, 06h 17m 07s)

Counter | TM Name | Value

capture.kernel_packets | Total | 2043984
decoder.pkts | Total | 2043984
decoder.bytes | Total | 1299107791
decoder.ipv4 | Total | 2043984
decoder.ethernet | Total | 2043984
decoder.tcp | Total | 1168668
decoder.udp | Total | 874594
decoder.icmpv4 | Total | 327
decoder.avg_pkt_size | Total | 635
decoder.max_pkt_size | Total | 1514
flow.tcp | Total | 8475
flow.udp | Total | 3718
flow.icmpv4 | Total | 75
defrag.ipv4.fragments | Total | 8
decoder.event.ipv4.opt_pad_required | Total | 387
tcp.sessions | Total | 8274
tcp.pseudo | Total | 4
tcp.invalid_checksum | Total | 1
tcp.syn | Total | 8465
tcp.synack | Total | 8512
tcp.rst | Total | 2061
tcp.pkt_on_wrong_thread | Total | 145668
tcp.stream_depth_reached | Total | 6
tcp.reassembly_gap | Total | 7
tcp.overlap | Total | 3742
detect.alert | Total | 76
detect.nonmpm_list | Total | 1042
detect.fnonmpm_list | Total | 871
detect.match_list | Total | 872
app_layer.flow.http | Total | 79
app_layer.tx.http | Total | 111
app_layer.flow.smtp | Total | 2
app_layer.tx.smtp | Total | 2
app_layer.flow.tls | Total | 8108
app_layer.flow.ssh | Total | 15
app_layer.flow.ntp | Total | 1346
app_layer.tx.ntp | Total | 1605
app_layer.flow.dhcp | Total | 23
app_layer.tx.dhcp | Total | 314
app_layer.flow.failed_tcp | Total | 4
app_layer.flow.dns_udp | Total | 2078
app_layer.tx.dns_udp | Total | 8940
app_layer.flow.failed_udp | Total | 271
flow_mgr.closed_pruned | Total | 8021
flow_mgr.new_pruned | Total | 584
flow_mgr.est_pruned | Total | 3596
flow.spare | Total | 10000
flow_mgr.flows_checked | Total | 3
flow_mgr.flows_notimeout | Total | 3
flow_mgr.rows_checked | Total | 65536
flow_mgr.rows_skipped | Total | 65533
flow_mgr.rows_maxlen | Total | 1
tcp.memuse | Total | 2867200
tcp.reassembly_memuse | Total | 514048
flow.memuse | Total | 7174136

The tcp.pkt_on_wrong_thread counter especially worries me here. It seems that load balancing isn't working very well. See this ticket for more info: Optimization #2725: stream/packet on wrong thread - Suricata - Open Information Security Foundation

Unfortunately I think there are very few options on FreeBSD/Netmap to address this. Where Linux exposes a lot of knobs to control hashing in various places, last time I looked FreeBSD didn't have these.

Maybe you can try forcing a single thread per interface?
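Something like this in the netmap section, as a sketch (only the keys that matter here):

netmap:
  - interface: vtnet2
    # pin capture for this interface to a single thread
    threads: 1
    checksum-checks: no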

Hi Victor,

I have enabled only one interface, with a single thread assigned, and stats.log shows me:


Date: 5/6/2020 -- 07:22:35 (uptime: 0d, 00h 19m 09s)

Counter | TM Name | Value

capture.kernel_packets | Total | 77837
decoder.pkts | Total | 77837
decoder.bytes | Total | 50772417
decoder.ipv4 | Total | 77837
decoder.ethernet | Total | 77837
decoder.tcp | Total | 76008
decoder.udp | Total | 1713
decoder.icmpv4 | Total | 99
decoder.avg_pkt_size | Total | 652
decoder.max_pkt_size | Total | 1494
flow.tcp | Total | 1243
flow.udp | Total | 520
flow.icmpv4 | Total | 4
decoder.event.ipv4.opt_pad_required | Total | 17
tcp.sessions | Total | 1177
tcp.syn | Total | 1621
tcp.synack | Total | 1102
tcp.rst | Total | 216
tcp.reassembly_gap | Total | 1
tcp.overlap | Total | 1
detect.alert | Total | 4
detect.nonmpm_list | Total | 1842
detect.fnonmpm_list | Total | 1693
detect.match_list | Total | 1694
app_layer.flow.http | Total | 8
app_layer.tx.http | Total | 17
app_layer.flow.tls | Total | 1037
app_layer.flow.ssh | Total | 1
app_layer.flow.ntp | Total | 196
app_layer.tx.ntp | Total | 228
app_layer.flow.dhcp | Total | 4
app_layer.tx.dhcp | Total | 12
app_layer.flow.dns_udp | Total | 286
app_layer.tx.dns_udp | Total | 1146
app_layer.flow.krb5_udp | Total | 4
app_layer.tx.krb5_udp | Total | 4
app_layer.flow.failed_udp | Total | 30
flow_mgr.closed_pruned | Total | 1005
flow_mgr.new_pruned | Total | 201
flow_mgr.est_pruned | Total | 395
flow.spare | Total | 10000
flow_mgr.flows_checked | Total | 4
flow_mgr.flows_notimeout | Total | 4
flow_mgr.rows_checked | Total | 65536
flow_mgr.rows_skipped | Total | 65526
flow_mgr.rows_empty | Total | 6
flow_mgr.rows_maxlen | Total | 1
tcp.memuse | Total | 573440
tcp.reassembly_memuse | Total | 110592
http.memuse | Total | 192
flow.memuse | Total | 7203440

On the other hand, I am sending all anomalies to a Splunk server; I have attached a screenshot. As you can see, there are a lot of stream anomalies in a short time (15 minutes).
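The same breakdown can also be pulled straight from the eve.json file with jq, for example (path and filename as configured on this box):

# count anomaly records per event type, most frequent first
jq -r 'select(.event_type=="anomaly") | .anomaly.event' eve.json | sort | uniq -c | sort -rn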

Please, any news about this issue? Maybe it is a config problem under FreeBSD?

The wrong_thread issue seems to be gone with that change, but there still appears to be some packet loss (see the reassembly_gap counter).

I’d be happy to look at a pcap from a tcp session that gives these events.
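Something along these lines on the capture interface (or a mirror of it) should be enough to isolate one affected session; IPs and ports are taken from the events above, and the output filename is arbitrary:

# capture full packets for one TLS session between the two hosts seen in the anomaly events
tcpdump -i vtnet2 -s 0 -w stream-anomaly.pcap host 172.22.55.4 and host 172.22.55.5 and port 443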

I will try to capture some traffic tomorrow morning … In the meantime, maybe these statistics are better:


Date: 5/11/2020 -- 15:42:50 (uptime: 0d, 01h 33m 01s)

Counter | TM Name | Value

capture.kernel_packets | Total | 438631
decoder.pkts | Total | 438631
decoder.bytes | Total | 193878341
decoder.ipv4 | Total | 438631
decoder.ethernet | Total | 438631
decoder.tcp | Total | 431924
decoder.udp | Total | 6016
decoder.icmpv4 | Total | 589
decoder.avg_pkt_size | Total | 442
decoder.max_pkt_size | Total | 1514
flow.tcp | Total | 5668
flow.udp | Total | 1402
flow.icmpv4 | Total | 25
defrag.ipv4.fragments | Total | 3
decoder.event.ipv4.opt_pad_required | Total | 99
tcp.sessions | Total | 5522
tcp.syn | Total | 5589
tcp.synack | Total | 5522
tcp.rst | Total | 454
tcp.stream_depth_reached | Total | 5
tcp.reassembly_gap | Total | 2
tcp.overlap | Total | 38
detect.alert | Total | 84
detect.mpm_list | Total | 2
detect.nonmpm_list | Total | 1762
detect.fnonmpm_list | Total | 1426
detect.match_list | Total | 1427
app_layer.flow.http | Total | 40
app_layer.tx.http | Total | 49
app_layer.flow.tls | Total | 5420
app_layer.flow.ssh | Total | 12
app_layer.flow.dns_tcp | Total | 2
app_layer.tx.dns_tcp | Total | 6
app_layer.flow.ntp | Total | 174
app_layer.tx.ntp | Total | 174
app_layer.flow.dhcp | Total | 14
app_layer.tx.dhcp | Total | 80
app_layer.flow.failed_tcp | Total | 1
app_layer.flow.dns_udp | Total | 1023
app_layer.tx.dns_udp | Total | 4717
app_layer.flow.failed_udp | Total | 191
flow_mgr.closed_pruned | Total | 5374
flow_mgr.new_pruned | Total | 375
flow_mgr.est_pruned | Total | 1210
flow.spare | Total | 10000
flow_mgr.rows_checked | Total | 65536
flow_mgr.rows_skipped | Total | 65536
tcp.memuse | Total | 573440
tcp.reassembly_memuse | Total | 129024
http.memuse | Total | 192
flow.memuse | Total | 7194560