Suricata 7.0.1-dev and a giant json data logging spike


Running 7.0.1-dev (c5d83d081, 2023-08-07). For a few days I have seen a giant increase in logged JSON data, which translates into an Elasticsearch index named …-packet .
Looking at the suricata.yaml in git, I suspect this feature is enabled unless explicitly disabled?

    # EXPERIMENTAL per packet output giving TCP state tracking details
    # including internal state, flags, etc.
    # This output is experimental, meant for debugging and subject to
    # change in both config and output without any notice.
    #- stream:
    #   all: false                      # log all TCP packets
    #   event-set: false                # log packets that have a decoder/stream event
    #   state-update: false             # log packets triggering a TCP state update
    #   spurious-retransmission: false  # log spurious retransmission packets


Found the culprit in output-json-alert.c:

    if ((p->flags & PKT_HAS_TAG) && (json_output_ctx->flags & LOG_JSON_TAGGED_PACKETS)) {
        JsonBuilder *packetjs =
                CreateEveHeader(p, LOG_DIR_PACKET, "packet", NULL, json_output_ctx->eve_ctx);
        if (unlikely(packetjs != NULL)) {
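For context (my reading of the code, not an authoritative statement): PKT_HAS_TAG is set on packets covered by a rule's tag keyword, so a matching rule that tags a session will make the subsequent packets of that flow appear as packet events in eve.json. A hypothetical rule illustrating this (msg and sid are made up):

    alert tcp any any -> any any (msg:"example tagged session"; tag:session,300,seconds; sid:1000001; rev:1;)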

Changed tagged-packets from yes to no in suricata.yaml for the alert type, and this looks much better.
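For reference, this is the knob I changed; a sketch of the relevant suricata.yaml fragment, with the surrounding eve-log settings abbreviated:

    outputs:
      - eve-log:
          enabled: yes
          types:
            - alert:
                tagged-packets: no   # was: yes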

The stream output you reference is disabled by default. Eve outputs in general are disabled by default unless explicitly listed under the types, so this is unlikely to cause your logs to grow.

Same for the tagged packets, I can’t think of anything that really changed there either. Are you able to narrow in further on what the extra data you are getting is?

I have had tagged-packets enabled for years in Suricata without any dramatic uptick in data; most current servers run Suricata v6. A few days back we connected a third SPAN port to a new Suricata 7 IDS server and the data volume exploded. I still have an ELK index named logstash-packet and will try tomorrow to find out which alert with tag enabled caused this.

We had two IDS servers based on Suricata 6: one monitoring our datacenter with an extra set of rules from MISP enabled, and a second monitoring our campus data without the MISP ruleset. On our new Suricata 7 server the MISP ruleset is active, and it is hitting a massive amount of traffic on the campus SPAN interface connected a few days ago. So Suricata is doing its job great in my opinion, and I apologize for sounding the alarm.
I found this after creating a new Kibana dashboard with two visualizations, one for the logstash-packet indices and one for the logstash-alert indices, and then matching on flow id.
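The same flow-id matching can also be done offline against eve.json instead of Kibana; a minimal sketch that counts packet events per flow_id (the sample lines below are made up for illustration):

```python
import json
from collections import Counter

def count_packet_events(lines):
    """Count eve.json 'packet' events per flow_id."""
    counts = Counter()
    for line in lines:
        event = json.loads(line)
        if event.get("event_type") == "packet":
            counts[event["flow_id"]] += 1
    return counts

# Tiny inline sample standing in for lines read from eve.json:
sample = [
    '{"event_type": "packet", "flow_id": 111}',
    '{"event_type": "packet", "flow_id": 111}',
    '{"event_type": "alert", "flow_id": 111}',
    '{"event_type": "packet", "flow_id": 222}',
]

# Print flow ids with the most packet events first.
for flow_id, n in count_packet_events(sample).most_common():
    print(flow_id, n)
```

The flow ids with the highest counts can then be looked up in the alert index to find the tagging rule.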