Suricata eve-log corrupted lines

It looks like some log lines contain entries from more than one event. My configuration contains a prefix that should be prepended to each line. Is the problem in my configuration or in Suricata itself? How can I fix it?

Running Suricata 5.0.2 configured with:

./configure --enable-nfqueue \
            --enable-python \
            --enable-lua \
            --with-libnetfilter_queue-libraries=... \
            --with-libnetfilter_queue-includes=... \
            --with-libhs-includes=... \
            --with-libhs-libraries=... \
            --with-liblua-includes=... \
            --with-liblua-libraries=... \
            --with-libnet-libraries=... \
            --with-libnet-includes=... \
            --with-libnspr-libraries=... \
            --with-libnspr-includes=... \
            --with-libnss-libraries=... \
            --with-libnss-includes=... \
            --disable-gccmarch-native \
            "--prefix=..."

Log entry:

eve.log,012345678912,us-east-1,c289976f-ede2-4eaf-86df-43dc90fbd308,172.16.0.102,{"timestamp":"2020-07-20T14:19:27.000082+0000","flow_id":1549502459840749,"in_iface":"eth0","event_type":"flow","src_ip":"172.16.0.119","src_port":34057,"dest_ip":"172.16.0.102","dest_port":8443,"proto":"TCP","flow":{"pkts_toserver":4,"pkts_toclient":2,"bytes_toserver":272,"bytes_toclient":140,"eve.log,012345678912,us-east-1,c289976f-ede2-4eaf-86df-43dc90fbd308,172.16.0.102,{"timestamp":"2020-07-24T04:14:01.199827+0000","flow_id":286310573608083,"in_iface":"eth0","event_type":"dns","src_ip":"172.16.0.102","src_port":38272,"dest_ip":"172.16.0.2","dest_port":53,"proto":"UDP","dns":{"type":"query","id":45989,"rrname":"1.172.254.169.in-addr.arpa","rrtype":"PTR","tx_id":0}}
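If it helps with triage, a rough way to quantify this is to count lines where the configured prefix appears more than once (i.e. a second event landed inside an earlier one). This is only a sketch; the prefix and the /opt/suricata/logs/ path are taken from the configuration below, and grep treats the dots in the prefix as wildcards, which is close enough for a rough count:

PREFIX='eve.log,012345678912,us-east-1,c289976f-ede2-4eaf-86df-43dc90fbd308,172.16.0.102,'
# Count lines in which a second prefixed event starts mid-line
grep -c "$PREFIX.*$PREFIX" /opt/suricata/logs/eve.log
# Show a few of the affected lines for inspection
grep "$PREFIX.*$PREFIX" /opt/suricata/logs/eve.log | head -5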

Configuration:

vars:
  address-groups:
    HOME_NET: "[172.16.0.102]"
    EXTERNAL_NET: "!$HOME_NET"
    HTTP_SERVERS: "$HOME_NET"
    SMTP_SERVERS: "$HOME_NET"
    SQL_SERVERS: "$HOME_NET"
    DNS_SERVERS: "$HOME_NET"
    TELNET_SERVERS: "$HOME_NET"
    AIM_SERVERS: "$EXTERNAL_NET"
    DC_SERVERS: "$HOME_NET"
    DNP3_SERVER: "$HOME_NET"
    DNP3_CLIENT: "$HOME_NET"
    MODBUS_CLIENT: "$HOME_NET"
    MODBUS_SERVER: "$HOME_NET"
    ENIP_CLIENT: "$HOME_NET"
    ENIP_SERVER: "$HOME_NET"
  port-groups:
    HTTP_PORTS: "80"
    SHELLCODE_PORTS: "!80"
    ORACLE_PORTS: 1521
    SSH_PORTS: 22
    DNP3_PORTS: 20000
    MODBUS_PORTS: 502
    FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]"
    FTP_PORTS: 21
    GENEVE_PORTS: 6081
    VXLAN_PORTS: 4789
default-log-dir: /opt/suricata/logs/
stats:
  enabled: yes
  interval: 8
outputs:
  - fast:
      enabled: no
      filename: fast.log
      append: yes
  - eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.log
      prefix: "eve.log,012345678912,us-east-1,c289976f-ede2-4eaf-86df-43dc90fbd308,172.16.0.102," # prefix to prepend to each log entry
      pcap-file: false
      community-id: false
      community-id-seed: 0
      xff:
        enabled: no
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For
      types:
        - alert:
            tagged-packets: yes
        - anomaly:
            enabled: yes
            types:
        - http:
            extended: yes     # enable this for extended logging information
        - dns:
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: no   # force logging magic on all logged files
        - smtp:
        - ftp
        - nfs
        - smb
        - tftp
        - ikev2
        - krb5
        - snmp
        - dhcp:
            enabled: yes
            extended: no
        - ssh
        - stats:
            totals: yes       # stats for all threads merged together
            threads: no       # per thread stats
            deltas: no        # include delta values
        - flow
  - unified2-alert:
      enabled: no
  - http-log:
      enabled: no
      filename: http.log
      append: yes
  - tls-log:
      enabled: no  # Log TLS connections.
      filename: tls.log # File to store TLS logs.
      append: yes
  - tls-store:
      enabled: no

  - pcap-log:
      enabled: no
      filename: log.pcap
      limit: 1000mb
      max-files: 2000
      compression: none
      mode: normal # normal, multi or sguil.
      use-stream-depth: no #If set to "yes" packets seen after reaching stream inspection depth are ignored. "no" logs all packets
      honor-pass-rules: no # If set to "yes", flows in which a pass rule matched will stop being logged.
  - alert-debug:
      enabled: no
      filename: alert-debug.log
      append: yes
  - alert-prelude:
      enabled: no
      profile: suricata
      log-packet-content: no
      log-packet-header: yes
  - stats:
      enabled: no
      filename: stats.log
      append: yes       # append to file (yes) or overwrite it (no)
      totals: yes       # stats for all threads merged together
      threads: no       # per thread stats
  - syslog:
      enabled: no
      facility: local5
  - drop:
      enabled: no
  - file-store:
      version: 2
      enabled: no

      xff:
        enabled: no
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For
  - file-store:
      enabled: no
  - tcp-data:
      enabled: no
      type: file
      filename: tcp-data.log
  - http-body-data:
      enabled: no
      type: file
      filename: http-data.log
  - lua:
      enabled: no
      scripts:
logging:
  default-log-level: notice
  default-output-filter:
  outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      level: info
      filename: suricata.log
  - syslog:
      enabled: no
      facility: local5
      format: "[%i] <%d> -- "
af-packet:
  - interface: eth0
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes

  - interface: default
pcap:
  - interface: eth0
  - interface: default
pcap-file:
  checksum-checks: auto
app-layer:
  protocols:
    krb5:
      enabled: yes
    snmp:
      enabled: yes
    ikev2:
      enabled: yes
    tls:
      enabled: yes
      detection-ports:
        dp: 443
      ja3-fingerprints: yes
    dcerpc:
      enabled: yes
    ftp:
      enabled: yes
    rdp:
    ssh:
      enabled: yes
    smtp:
      enabled: yes
      raw-extraction: no
      mime:
        decode-mime: yes
        decode-base64: yes
        decode-quoted-printable: yes
        header-value-depth: 2000
        extract-urls: yes
        body-md5: no
      inspected-tracker:
        content-limit: 100000
        content-inspect-min-size: 32768
        content-inspect-window: 4096
    imap:
      enabled: detection-only
    smb:
      enabled: yes
      detection-ports:
        dp: 139, 445
    nfs:
      enabled: yes
    tftp:
      enabled: yes
    dns:
      tcp:
        enabled: yes
        detection-ports:
          dp: 53
      udp:
        enabled: yes
        detection-ports:
          dp: 53
    http:
      enabled: yes
      libhtp:
         default-config:
           personality: IDS
           request-body-limit: 100kb
           response-body-limit: 100kb
           request-body-minimal-inspect-size: 32kb
           request-body-inspect-window: 4kb
           response-body-minimal-inspect-size: 40kb
           response-body-inspect-window: 16kb
           response-body-decompress-layer-limit: 2
           http-body-inline: auto
           swf-decompression:
             enabled: yes
             type: both
             compress-depth: 0
             decompress-depth: 0
           double-decode-path: no
           double-decode-query: no
         server-config:
    modbus:
      enabled: no
      detection-ports:
        dp: 502
      stream-depth: 0
    dnp3:
      enabled: no
      detection-ports:
        dp: 20000
    enip:
      enabled: no
      detection-ports:
        dp: 44818
        sp: 44818
    ntp:
      enabled: yes
    dhcp:
      enabled: yes
    sip:
asn1-max-frames: 256

coredump:
  max-dump: unlimited
host-mode: auto
unix-command:
  enabled: no
legacy:
  uricontent: enabled
engine-analysis:
  rules-fast-pattern: yes
  rules: yes
pcre:
  match-limit: 3500
  match-limit-recursion: 1500
host-os-policy:
  windows: [0.0.0.0/0]
  bsd: []
  bsd-right: []
  old-linux: []
  linux: []
  old-solaris: []
  solaris: []
  hpux10: []
  hpux11: []
  irix: []
  macos: []
  vista: []
  windows2k3: []
defrag:
  memcap: 32mb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60

flow:
  memcap: 128mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
vlan:
  use-for-tracking: true
flow-timeouts:
  default:
    new: 30
    established: 300
    closed: 0
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
    emergency-bypassed: 50
  tcp:
    new: 60
    established: 600
    closed: 60
    bypassed: 100
    emergency-new: 5
    emergency-established: 100
    emergency-closed: 10
    emergency-bypassed: 50
  udp:
    new: 30
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50
  icmp:
    new: 30
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50

stream:
  memcap: 64mb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 256mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
host:
  hash-size: 4096
  prealloc: 1000
  memcap: 32mb
decoder:
  teredo:
    enabled: true
  vxlan:
    enabled: true
    ports: $VXLAN_PORTS # syntax: '8472, 4789'
detect:
  profile: medium
  custom-values:
    toclient-groups: 3
    toserver-groups: 25
  sgh-mpm-context: auto
  inspection-recursion-limit: 3000
  prefilter:
    default: mpm
  grouping:
  profiling:
    grouping:
      dump-to-disk: false
      include-rules: false      # very verbose
      include-mpm-stats: false
mpm-algo: auto
spm-algo: auto
threading:
  set-cpu-affinity: no
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "all" ]
        mode: "exclusive"
        prio:
          low: [ 0 ]
          medium: [ "1-2" ]
          high: [ 3 ]
          default: "medium"
  detect-thread-ratio: 1.0
luajit:
  states: 128
profiling:
  rules:
    enabled: yes
    filename: rule_perf.log
    append: yes
    limit: 10
    json: yes
  keywords:
    enabled: yes
    filename: keyword_perf.log
    append: yes
  prefilter:
    enabled: yes
    filename: prefilter_perf.log
    append: yes
  rulegroups:
    enabled: yes
    filename: rule_group_perf.log
    append: yes
  packets:
    enabled: yes
    filename: packet_stats.log
    append: yes
    csv:
      enabled: no
      filename: packet_stats.csv
  locks:
    enabled: no
    filename: lock_stats.log
    append: yes
  pcap-log:
    enabled: no
    filename: pcaplog_stats.log
    append: yes
nfq:
nflog:
  - group: 2
    buffer-size: 18432
  - group: default
    qthreshold: 1
    qtimeout: 100
    max-size: 20000
capture:
netmap:
 - interface: eth2

 - interface: default
pfring:
  - interface: eth0
    threads: auto
    cluster-id: 99
    cluster-type: cluster_flow
  - interface: default
ipfw:
napatech:
    streams: ["0-3"]
    auto-config: yes
    ports: [all]
    hashmode: hash5tuplesorted
default-rule-path: /opt/suricata/etc/suricata/rules
rule-files:
  - all.rules
classification-file: /opt/suricata/etc/suricata/classification.config
reference-config-file: /opt/suricata/etc/suricata/reference.config
threshold-file: /opt/suricata/etc/suricata/threshold.config

Hello Zilvinas,

Could you please tell us which version you are using? I am unable to replicate this in a simple test setup. But I’m wondering, do you have any log rotation happening, and does this occur around that time?

Hello Jason,

There is no log rotation configured, since we are running Suricata in a container that is rotated after a couple of days.

I am running Suricata 5.0.2 configured with:

./configure --enable-nfqueue \
            --enable-python \
            --enable-lua \
            --with-libnetfilter_queue-libraries=... \
            --with-libnetfilter_queue-includes=... \
            --with-libhs-includes=... \
            --with-libhs-libraries=... \
            --with-liblua-includes=... \
            --with-liblua-libraries=... \
            --with-libnet-libraries=... \
            --with-libnet-includes=... \
            --with-libnspr-libraries=... \
            --with-libnspr-includes=... \
            --with-libnss-libraries=... \
            --with-libnss-includes=... \
            --disable-gccmarch-native \
            "--prefix=..."

One other question. Is there just one instance of Suricata running?

Yes, there is one Suricata instance running inside a Docker container. Suricata logs are shipped with the CloudWatch agent. Currently I am trying to isolate the issue (it might be that the CloudWatch agent is being silly).
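One way to narrow that down could be to compare what CloudWatch receives with the file Suricata actually writes inside the container. A rough sketch, where the container name suricata is only a placeholder and the log path comes from the configuration above:

# Check the on-disk eve.log before the CloudWatch agent reads it.
docker exec suricata grep -c \
  'eve.log,012345678912,us-east-1,.*eve.log,012345678912,us-east-1,' \
  /opt/suricata/logs/eve.log
# A count of 0 here, while CloudWatch still shows merged lines, would point
# at the shipping side rather than at Suricata.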

I still don’t have a solution for you. But the usual causes I’ve seen for this are:

  • more than one instance of Suricata logging to the same location
  • log rotation that uses file truncation rather than the move/SIGHUP approach (see the sketch below)

and I don’t really know much about CloudWatch.
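For reference, the move/SIGHUP style of rotation looks roughly like this; it is only a sketch (the rotated file name and the use of pidof are illustrative), relying on Suricata re-opening its log files when it receives SIGHUP:

# Move the file aside instead of truncating it in place (e.g. logrotate's
# copytruncate), then ask Suricata to re-open its log files.
mv /opt/suricata/logs/eve.log /opt/suricata/logs/eve.log.1
kill -HUP "$(pidof suricata)"   # Suricata re-opens eve.log on SIGHUP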