Suricata crashes on a regular basis

Hello,
I’m running Suricata on Ubuntu 22 in IPS mode. For whatever reason, Suricata crashes unpredictably after a while. The only message I can see on the console is that Suricata was stopped because it ran out of virtual memory.

I would appreciate some guidance on how to track down the problem. Thank you!

Hi,
Are there any core dumps from the Suricata crash? If so, a stack backtrace would be helpful. We can discuss that if you can locate a core file.
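
If a core file does turn up, something roughly like the following should produce a usable backtrace (the binary and core paths are just examples; adjust them to your install):

gdb /usr/bin/suricata /path/to/core
(gdb) set pagination off
(gdb) thread apply all bt
(gdb) quit

Note that on Ubuntu the kernel’s core_pattern often hands crashes to apport, so cat /proc/sys/kernel/core_pattern will tell you where cores actually end up (possibly under /var/crash). Suricata’s own coredump.max-dump setting in suricata.yaml also has to allow a large enough dump.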

What version of Suricata are you using (suricata -V)?

Can you share the suricata.yaml configuration file? (The location may vary; try sudo find / -name suricata.yaml 2>/dev/null.)

What’s the command line used to start suricata?

Hello, I have just set ulimit -c unlimited to get a core file on the next crash; it may take a while.

Suricata is 6.0.4.

YAML:

%YAML 1.1
---
vars:
  address-groups:
    HOME_NET: "[46.38.xx/24,192.168.0.0/16]"
    EXTERNAL_NET: "!$HOME_NET"
    HTTP_SERVERS: "$HOME_NET"
    SMTP_SERVERS: "$HOME_NET"
    SQL_SERVERS: "$HOME_NET"
    DNS_SERVERS: "$HOME_NET"
    TELNET_SERVERS: "$HOME_NET"
    AIM_SERVERS: "$EXTERNAL_NET"
    DC_SERVERS: "$HOME_NET"
    DNP3_SERVER: "$HOME_NET"
    DNP3_CLIENT: "$HOME_NET"
    MODBUS_CLIENT: "$HOME_NET"
    MODBUS_SERVER: "$HOME_NET"
    ENIP_CLIENT: "$HOME_NET"
    ENIP_SERVER: "$HOME_NET"
  port-groups:
    HTTP_PORTS: "[80]"
    SHELLCODE_PORTS: "!80"
    ORACLE_PORTS: 1521
    SSH_PORTS: "[22,2222]"
    DNP3_PORTS: 20000
    MODBUS_PORTS: 502
    FILE_DATA_PORTS: "[$HTTP_PORTS,110,143,32400]"
    FTP_PORTS: 21
    GENEVE_PORTS: 6081
    VXLAN_PORTS: 4789
    TEREDO_PORTS: 3544
default-log-dir: /var/log/suricata/
stats:
  enabled: no
  interval: 8
outputs:
  - fast:
      enabled: yes
      filename: fast.log
      append: yes
  - eve-log:
      enabled: no
      filename: eve.json
      pcap-file: false
      community-id: false
      community-id-seed: 0
      xff:
        enabled: no
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For
      types:
        - alert:
            tagged-packets: yes
        - anomaly:
            enabled: yes
            types:
        - http:
        - dns:
        - tls:
        - files:
        - smtp:
        - ftp
        - rdp
        - nfs
        - smb
        - tftp
        - ikev2
        - dcerpc
        - krb5
        - snmp
        - rfb
        - sip
        - dhcp:
            enabled: yes
            extended: no
        - ssh
        - mqtt:
        - stats:
        - flow
  - http-log:
      enabled: no
      filename: http.log
      append: yes
  - tls-log:
      append: yes
  - tls-store:
      enabled: no
  - pcap-log:
      enabled: no
      filename: log.pcap
      limit: 1000mb
      max-files: 2000
      compression: none
  - alert-debug:
      enabled: no
      filename: alert-debug.log
      append: yes
  - alert-prelude:
      enabled: no
      profile: suricata
      log-packet-content: no
      log-packet-header: yes
  - stats:
      enabled: no
      filename: stats.log
  - syslog:
      enabled: no
      facility: local5
  - file-store:
      version: 2
      enabled: no
      xff:
        enabled: no
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For
  - tcp-data:
      enabled: no
      type: file
      filename: tcp-data.log
  - http-body-data:
      enabled: no
      type: file
      filename: http-data.log
  - lua:
      enabled: no
      scripts:
logging:
  default-log-level: notice
  default-output-filter:
  outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      level: info
      filename: suricata.log
  - syslog:
      enabled: no
      facility: local5
      format: "[%i] <%d> -- "
af-packet:
  - interface: eth0
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
  - interface: default
pcap:
  - interface: eth0
  - interface: default
pcap-file:
  checksum-checks: auto
app-layer:
  protocols:
    rfb:
      enabled: yes
      detection-ports:
        dp: 5900, 5901, 5902, 5903, 5904, 5905, 5906, 5907, 5908, 5909
    mqtt:
    krb5:
      enabled: no
    snmp:
      enabled: no
    ikev2:
      enabled: yes
    tls:
      enabled: yes
      detection-ports:
        dp: 443
    dcerpc:
      enabled: no
    ftp:
      enabled: yes
    rdp:
    ssh:
      enabled: yes
    http2:
      enabled: yes
      http1-rules: no
    smtp:
      enabled: no
      raw-extraction: no
      mime:
        decode-mime: yes
        decode-base64: yes
        decode-quoted-printable: yes
        header-value-depth: 2000
        extract-urls: yes
        body-md5: no
      inspected-tracker:
        content-limit: 100000
        content-inspect-min-size: 32768
        content-inspect-window: 4096
    imap:
      enabled: detection-only
    smb:
      enabled: no
      detection-ports:
        dp: 139, 445
    nfs:
      enabled: no
    tftp:
      enabled: yes
    dns:
      tcp:
        enabled: yes
        detection-ports:
          dp: 53
      udp:
        enabled: yes
        detection-ports:
          dp: 53
    http:
      enabled: yes
      libhtp:
         default-config:
           personality: IDS
           request-body-limit: 100kb
           response-body-limit: 100kb
           request-body-minimal-inspect-size: 32kb
           request-body-inspect-window: 4kb
           response-body-minimal-inspect-size: 40kb
           response-body-inspect-window: 16kb
           response-body-decompress-layer-limit: 2
           http-body-inline: auto
           swf-decompression:
             enabled: yes
             type: both
             compress-depth: 100kb
             decompress-depth: 100kb
           double-decode-path: no
           double-decode-query: no
         server-config:
    modbus:
      enabled: no
      detection-ports:
        dp: 502
      stream-depth: 0
    dnp3:
      enabled: no
      detection-ports:
        dp: 20000
    enip:
      enabled: no
      detection-ports:
        dp: 44818
        sp: 44818
    ntp:
      enabled: yes
    dhcp:
      enabled: yes
    sip:
asn1-max-frames: 256
coredump:
  max-dump: unlimited
host-mode: auto
unix-command:
  enabled: yes
  filename: /var/run/suricata-command.socket
geoip-database: /var/lib/GeoIP/GeoLite2-Country.mmdb
legacy:
  uricontent: enabled
engine-analysis:
  rules-fast-pattern: yes
  rules: yes
pcre:
  match-limit: 3500
  match-limit-recursion: 1500
host-os-policy:
  windows: [0.0.0.0/0]
  bsd: []
  bsd-right: []
  old-linux: []
  linux: []
  old-solaris: []
  solaris: []
  hpux10: []
  hpux11: []
  irix: []
  macos: []
  vista: []
  windows2k3: []
defrag:
  memcap: 32mb
  hash-size: 65536
  prealloc: yes
  timeout: 60
flow:
  memcap: 128mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
vlan:
  use-for-tracking: true
flow-timeouts:
  default:
    new: 30
    established: 300
    closed: 0
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
    emergency-bypassed: 50
  tcp:
    new: 60
    established: 600
    closed: 60
    bypassed: 100
    emergency-new: 5
    emergency-established: 100
    emergency-closed: 10
    emergency-bypassed: 50
  udp:
    new: 30
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50
  icmp:
    new: 30
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50
stream:
  memcap: 64mb
  reassembly:
    memcap: 256mb
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
host:
  hash-size: 4096
  prealloc: 1000
  memcap: 32mb
decoder:
  teredo:
    enabled: true
  vxlan:
    enabled: true
  vntag:
    enabled: false
  geneve:
    enabled: true
detect:
  profile: medium
  custom-values:
    toclient-groups: 3
    toserver-groups: 25
  sgh-mpm-context: auto
  inspection-recursion-limit: 3000
  prefilter:
    default: mpm
  grouping:
  profiling:
    grouping:
      dump-to-disk: false
      include-mpm-stats: false
mpm-algo: auto
spm-algo: auto
threading:
  set-cpu-affinity: no
  cpu-affinity:
    - management-cpu-set:
    - receive-cpu-set:
    - worker-cpu-set:
        cpu: [ "all" ]
        mode: "exclusive"
        prio:
          low: [ 0 ]
          medium: [ "1-2" ]
          high: [ 3 ]
          default: "medium"
  detect-thread-ratio: 1.0
luajit:
  states: 128
profiling:
  rules:
    enabled: yes
    filename: rule_perf.log
    append: yes
    limit: 10
    json: yes
  keywords:
    enabled: yes
    filename: keyword_perf.log
    append: yes
  prefilter:
    enabled: yes
    filename: prefilter_perf.log
    append: yes
  rulegroups:
    enabled: yes
    filename: rule_group_perf.log
    append: yes
  packets:
    enabled: yes
    filename: packet_stats.log
    append: yes
    csv:
      enabled: no
      filename: packet_stats.csv
  locks:
    enabled: no
    filename: lock_stats.log
    append: yes
  pcap-log:
    enabled: no
    filename: pcaplog_stats.log
    append: yes
nfq:
  mode: accept
nflog:
  - group: 2
    buffer-size: 18432
  - group: default
    qthreshold: 1
    qtimeout: 100
    max-size: 20000
capture:
netmap:
 - interface: eth2
 - interface: default
pfring:
  - interface: eth0
    threads: auto
    cluster-id: 99
    cluster-type: cluster_flow
  - interface: default
ipfw:
napatech:
    streams: ["0-3"]
    enable-stream-stats: no
    auto-config: yes
    hardware-bypass: yes
    inline: no
    ports: [0-1,2-3]
    hashmode: hash5tuplesorted
default-rule-path: /etc/suricata/rules
rule-files:
   - /etc/suricata/rules/geoip.rules
   - /var/lib/suricata/rules/suricata.rules
classification-file: /etc/suricata/classification.config
reference-config-file: /etc/suricata/reference.config


6.0.4 is very old. Please update to the latest stable release, 6.0.13, which includes a lot of fixes, and see if it still happens.

I have updated to v7.0.0 and all seems to be working fine. Unfortunately I lost suricatasc, so no reload of rules is possible any more?

suricatasc
Unable to connect to socket /var/run/suricata/suricata-command.socket: L177: [Errno 2] No such file or directory

I start Suricata with suricata -c /etc/suricata/suricata.yaml -q 0. The regular rule update is done with

suricata-update --reload-command='suricatasc -c ruleset-reload-nonblocking'

This works.

Upgrading shouldn’t lead to losing suricatasc. Please make sure the socket file exists and that the permissions on its directory are in order. Also check which user starts Suricata and which user runs suricatasc. Please let us know if the problem persists.
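
Something along these lines should show whether the socket and the permissions line up (adjust the socket path to whatever your unix-command section points at):

ls -l /var/run/suricata-command.socket /var/run/suricata/
ps -o user= -C suricata
id

If the user running suricatasc is not the owner of the socket (or in its group), the connection will fail.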

/var/run only contains the following Suricata-related entry:

srw-rw---- 1 root root 0 Aug 7 16:53 suricata-command.socket

At least there is an executable suricatasc in /bin.

Any solution to that?

Hello, does anyone have an update on that issue?

What does your suricata.yaml look like? Based on the error message, the socket is expected at:

/var/run/suricata/suricata-command.socket

but you said it’s in /var/run/ directly.

There is no /var/run/suricata directory, and the YAML specifies

filename: /var/run/suricata-command.socket

I worked around the issue by passing the socket path to the suricatasc command:

suricatasc /var/run/suricata-command.socket -c ruleset-reload-nonblocking

This works.
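
For reference, an alternative that should also work (not tested here) is to point the unix-command section of suricata.yaml at the path the new suricatasc expects by default:

unix-command:
  enabled: yes
  filename: /var/run/suricata/suricata-command.socket

and make sure /var/run/suricata/ exists and is writable by the user Suricata runs as; then suricatasc can be invoked without the explicit socket path.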