Discussion of DPDK API support in Suricata

Now I remember you: on 30 April 2020 you stated that you had used my code and modified it to use a lot of queues. You also said you did not wish to share the changes with the community, as they are trade secrets.

So for a code base that is a trade secret and cannot be shared, there is not much I can help with. I have also figured out the ‘ring enqueue’ issue, but will only share the logic or fix with open-source or free-to-use community efforts.


Thank you for your reply.
The person who keeps their code secret (in the picture on the right) is not me :joy:.

  1. I am using [DPDK_SURICATA-4_1_1] with dpdk-stable-19.11.4, with DPDK NIC stats code added. When the send packet rate exceeds 1 Gpps, the NIC starts dropping packets.

  2. rss config: queues=6

  3. perf (common_ring_mp_enqueue is what DPDK calls by default):


    Statistics print every 10s:
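An editorial aside on why common_ring_mp_enqueue shows up in the profile: DPDK mempools default to a ring-backed handler, so with several worker cores freeing mbufs into one shared pool, every free that misses the per-lcore cache takes the multi-producer enqueue path, which serializes under contention. A rough sketch of the rates involved, using the 14.88 Mpps line-rate figure quoted elsewhere in this thread (hypothetical arithmetic, not measured data):

```python
# Rough op-rate arithmetic for a shared mbuf pool fed by 6 RSS queues.
# Assumptions (not from the thread): traffic at 14.88 Mpps, and one
# allocation (RX) plus one free per packet.
LINE_RATE_PPS = 14.88e6
QUEUES = 6

per_queue_pps = LINE_RATE_PPS / QUEUES      # ~2.48 Mpps per queue

# Each packet costs the pool roughly one dequeue (alloc) and one
# enqueue (free); per-lcore mempool caches absorb a burst-dependent
# fraction of these before the shared ring is actually touched.
pool_ops_per_sec = 2 * LINE_RATE_PPS        # ~29.8M ops/s upper bound

print(f"{per_queue_pps/1e6:.2f} Mpps per queue")
print(f"<= {pool_ops_per_sec/1e6:.1f}M mempool ops/s on the shared pool")
```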

Your yaml configuration is not correct.

My DPDK config:

# DPDK configuration, for use on a DPDK instance
dpdk:

  pre-acl: yes
  post-acl: yes
  tx-fragment: no
  rx-reassemble: no
  # BYPASS, IDS, IPS
  mode: IDS
  # port index
  input-output-map: [ 0 ]
  # EAL args
  eal-args: ["--log-level=eal,8", "-l 0", "--file-prefix=suricata-dpdk", "-m 4096"]
  # mempool
  mempool-port-common: [name=suricata-port,n=267456,elt_size=2000,private_data_size=0]
  mempool-reas-common: [name=suricatareassembly,n=8000,elt_size=10000,private_data_size=0]
  # port config
  port-config-0: [mempool=portpool,queues=6,rss-tuple=3,ebpf=NULL,jumbo=no,mtu=1500,tx-mode=0]
  # DPDK pre-acl
  ipv4-preacl: 1024
  ipv6-preacl: 1024
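A quick sanity check of the mempool sizing above (editorial sketch: it counts only n × elt_size and ignores per-object headers, padding, and ring storage) against the EAL "-m 4096" allocation:

```python
# Sanity-check the mempool sizes in the config above against the
# EAL "-m 4096" (4096 MB) reservation. Rough lower-bound estimate.
PORT_POOL_N = 267456        # mempool-port-common n=
PORT_ELT_SIZE = 2000        # mempool-port-common elt_size=
REAS_POOL_N = 8000          # mempool-reas-common n=
REAS_ELT_SIZE = 10000       # mempool-reas-common elt_size=
EAL_MEM_MB = 4096           # from eal-args "-m 4096"

port_pool_mb = PORT_POOL_N * PORT_ELT_SIZE / 2**20    # ~510 MiB
reas_pool_mb = REAS_POOL_N * REAS_ELT_SIZE / 2**20    # ~76 MiB

print(f"port mempool       : {port_pool_mb:.0f} MiB")
print(f"reassembly mempool : {reas_pool_mb:.0f} MiB")
print(f"EAL reservation    : {EAL_MEM_MB} MiB")
```

So the configured pools fit comfortably inside the EAL reservation; the drops reported here are unlikely to be a simple out-of-memory effect.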

perf report DpdkReleasePacket function uses 40% of CPU time:

As mentioned, your yaml is incorrect. For iperf you are trying IDS with 1 port; this is not correct.

Thanks for your reply!
Here is my yaml config. I have tested pfring with this config, and it works well at a packet rate of 14.88 Mpps. I want to know whether there is some special configuration I need to add to the yaml config.
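As a sanity check (editorial aside) before the config: 14.88 Mpps is exactly 10 GbE line rate for minimum-size Ethernet frames, so the pfring test was running at full line rate:

```python
# Where 14.88 Mpps comes from: 10 GbE line rate with minimum-size
# (64-byte) Ethernet frames. Each frame occupies 64 bytes on the wire
# plus 20 bytes of overhead (7B preamble + 1B SFD + 12B inter-frame gap).
LINE_RATE_BPS = 10e9
FRAME_WIRE_BYTES = 64 + 20

line_rate_pps = LINE_RATE_BPS / (FRAME_WIRE_BYTES * 8)
print(f"{line_rate_pps/1e6:.2f} Mpps")   # 14.88 Mpps
```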
%YAML 1.1

vars:
  address-groups:
    HOME_NET: "any"
    
    EXTERNAL_NET: "any"

    HTTP_SERVERS: "$HOME_NET"
    SMTP_SERVERS: "$HOME_NET"
    SQL_SERVERS: "$HOME_NET"
    DNS_SERVERS: "$HOME_NET"
    TELNET_SERVERS: "$HOME_NET"
    AIM_SERVERS: "$EXTERNAL_NET"
    DC_SERVERS: "$HOME_NET"
    DNP3_SERVER: "$HOME_NET"
    DNP3_CLIENT: "$HOME_NET"
    MODBUS_CLIENT: "$HOME_NET"
    MODBUS_SERVER: "$HOME_NET"
    ENIP_CLIENT: "$HOME_NET"
    ENIP_SERVER: "$HOME_NET"

  port-groups:
    HTTP_PORTS: "80"
    SHELLCODE_PORTS: "!80"
    ORACLE_PORTS: 1521
    SSH_PORTS: 22
    DNP3_PORTS: 20000
    MODBUS_PORTS: 502
    FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]"
    FTP_PORTS: 21
    VXLAN_PORTS: 4789
    TEREDO_PORTS: 3544


default-log-dir: /home/log/suricata/

stats:
  enabled: yes
  interval: 8
  #decoder-events-prefix: "decoder.event"

outputs:
  - fast:
      enabled: no
      filename: fast.log
      append: yes             
      totals: yes      ##########################################
      threads: no        ##########################################

  - eve-log:
      enabled: yes
      filetype: regular 
      filename: eve.json
 
      pcap-file: false

      community-id: false
      # Seed value for the ID output. Valid values are 0-65535.
      community-id-seed: 0


      xff:
        enabled: no
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For

      types:
        - alert:
            payload: yes             # enable dumping payload in Base64
            payload-buffer-size: 32kb # max size of payload buffer to output in eve-log
            packet: yes              # enable dumping of packet (without stream segments)
            http-body: yes           # Requires metadata; enable dumping of http body in Base64

            tagged-packets: yes
        - anomaly:
            
            enabled: yes
            types:

        - http:
            extended: yes     

        - dns:
            # Enable/disable this logger. Default: enabled.
            enabled: yes

        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: no   # force logging magic on all logged files
        - smtp:
            extended: yes # enable this for extended logging information
            
        - ftp
        - rdp
        - nfs
        - smb
        - tftp
        - ikev2
        - krb5
        - snmp
        #- sip
        - dhcp:
            enabled: yes
            extended: no
        - ssh
        - stats:
            totals: yes       # stats for all threads merged together
            threads: no       # per thread stats
            deltas: no        # include delta values
            
  - unified2-alert:
      enabled: no

  - http-log:
      enabled: no
      filename: http.log
      append: yes

  - tls-log:
      enabled: no  # Log TLS connections.
      filename: tls.log # File to store TLS logs.
      append: yes

  - tls-store:
      enabled: no

  - pcap-log:
      enabled: no
      filename: log.pcap

      limit: 1000mb

      #max-files: 2000    
      max-files: 20000      ##########################################

      compression: none

      mode: normal # normal, multi or sguil.

      use-stream-depth: no #If set to "yes" packets seen after reaching stream inspection depth are ignored. "no" logs all packets
      honor-pass-rules: no # If set to "yes", flows in which a pass rule matched will stop being logged.

  - alert-debug:
      enabled: no
      filename: alert-debug.log
      append: yes

  - alert-prelude:
      enabled: no
      profile: suricata
      log-packet-content: no
      log-packet-header: yes

  - stats:
      enabled: yes
      filename: stats.log
      append: yes       # append to file (yes) or overwrite it (no)
      totals: yes       # stats for all threads merged together
      threads: no       # per thread stats

  - syslog:
      enabled: no

      facility: local5

  - drop:
      enabled: no

  - file-store:
      version: 2
      enabled: no

      xff:
        enabled: no
        # Two operation modes are available, "extra-data" and "overwrite".
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For

  # deprecated - file-store v1
  - file-store:
      enabled: no

  - tcp-data:
      enabled: no
      type: file
      filename: tcp-data.log

  - http-body-data:
      enabled: no
      type: file
      filename: http-data.log

  - lua:
      enabled: no
      #scripts-dir: /etc/suricata/lua-output/
      scripts:

logging:

  default-log-level: notice 
  default-output-filter:

  # Define your logging outputs.  If none are defined, or they are all
  # disabled you will get the default - console output.
  outputs:
  - console:
      enabled: yes
      # type: json
  - file:
      enabled: yes
      level: info
      filename: suricata.log
      # type: json
  - syslog:
      enabled: no
      facility: local5
      format: "[%i] <%d> -- "
      # type: json


af-packet:
  - interface: eno1
    # Number of receive threads. "auto" uses the number of cores
    threads: 16
    # Default clusterid. AF_PACKET will load balance packets based on flow.
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    # To use the ring feature of AF_PACKET, set 'use-mmap' to yes
    use-mmap: yes
    ring-size: 500000             ##########################################
    #buffer-size: 3276800
    buffer-size: 5368709120       ##########################################
    copy-mode: ips
    copy-iface: eno2
  - interface: eno2
    threads: 16
    cluster-id: 98
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    ring-size: 500000             ##########################################
    #buffer-size: 3276800
    buffer-size: 5368709120       ##########################################
    copy-mode: ips
    copy-iface: eno1

  # Put default values here. These will be used for an interface that is not
  # in the list above.
  - interface: default

pcap:                  ##########################################
  - interface: eth0
  - interface: default

pcap-file:
  checksum-checks: auto


app-layer:
  protocols:
    krb5:
      enabled: yes
    snmp:
      enabled: yes
    ikev2:
      enabled: yes
    tls:
      enabled: yes
      detection-ports:
        dp: 443

    dcerpc:
      enabled: yes
    ftp:
      enabled: yes
      memcap: 512mb     #########################################
    rdp:
      enabled: yes
    ssh:
      enabled: yes
    smtp:
      enabled: yes
      raw-extraction: no
      mime:
        decode-mime: yes
        decode-base64: yes
        decode-quoted-printable: yes
        header-value-depth: 2000
        extract-urls: yes
        body-md5: no
      # Configure inspected-tracker for file_data keyword
      inspected-tracker:
        content-limit: 100000
        content-inspect-min-size: 32768
        content-inspect-window: 4096
    imap:
      enabled: detection-only
    smb:
      enabled: yes
      detection-ports:
        dp: 139, 445
    nfs:
      enabled: yes
    tftp:
      enabled: yes
    dns:
      # memcaps. Globally and per flow/state.
      #global-memcap: 16mb
      #state-memcap: 512kb

      tcp:
        enabled: yes
        detection-ports:
          dp: 53
      udp:
        enabled: yes
        detection-ports:
          dp: 53
    http:
      enabled: yes
      memcap: 4gb          ##########################################
      libhtp:
         default-config:
           personality: IDS
           request-body-limit: 200kb    ##########################################
           response-body-limit: 200kb   ##########################################
           request-body-minimal-inspect-size: 32kb
           request-body-inspect-window: 4kb
           response-body-minimal-inspect-size: 40kb
           response-body-inspect-window: 16kb
           response-body-decompress-layer-limit: 2
           http-body-inline: auto
           swf-decompression:
             enabled: yes
             type: both
             compress-depth: 0
             decompress-depth: 0
           double-decode-path: no
           double-decode-query: no

         server-config:
    modbus:
      enabled: no
      detection-ports:
        dp: 502
      stream-depth: 0

    # DNP3
    dnp3:
      enabled: no
      detection-ports:
        dp: 20000

    # SCADA EtherNet/IP and CIP protocol support
    enip:                                 ##########################################
      enabled: no
      detection-ports:
        dp: 44818
        sp: 44818

    ntp:
      enabled: yes

    dhcp:
      enabled: yes

    # SIP, disabled by default.
    sip:
      #enabled: no

# Limit for the maximum number of asn1 frames to decode (default 256)
asn1-max-frames: 256

pid-file: /var/run/suricata.pid

coredump:
  max-dump: unlimited
host-mode: auto
max-pending-packets: 65534
runmode: workers
unix-command:
  enabled: auto

legacy:
  uricontent: enabled

action-order:    ##########################################
  - pass
  - drop
  - reject
  - alert

engine-analysis:
  rules-fast-pattern: yes
  rules: yes

#recursion and match limits for PCRE where supported
pcre:
  match-limit: 3500
  match-limit-recursion: 1500

host-os-policy:
  # Make the default policy windows.
  windows: []
  bsd: []
  bsd-right: []
  old-linux: []
  linux: [0.0.0.0/0]       ##########################################
  old-solaris: []
  solaris: []
  hpux10: []
  hpux11: []
  irix: []
  macos: []
  vista: []
  windows2k3: []

# Defrag settings:

defrag:
  memcap: 8gb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  #max-frags: 65535 # number of fragments to keep (higher than trackers)
  max-frags: 1000000  ##########################################
  prealloc: yes
  timeout: 30        ##########################################

flow:
  memcap: 16gb        
  hash-size: 655360
  prealloc: 100000
  emergency-recovery: 30
  #managers: 1 # default to one flow manager
  #recyclers: 1 # default to one flow recycler thread
  prune-flows: 5
  managers: 2          ##########################################
  recyclers: 2       ##########################################

vlan:
  use-for-tracking: true

flow-timeouts:
                      ##########################################
  default:
    new: 5  #30
    established: 20 #300
    closed: 0
    bypassed: 10 #100
    emergency-new: 2 #10
    emergency-established: 10 #100
    emergency-closed: 0
    emergency-bypassed: 5 #50
  tcp:
    new: 5  #60
    established: 20 #600
    closed: 5 #60
    bypassed: 10 #100
    emergency-new: 2 #5
    emergency-established: 10 #100
    emergency-closed: 0 #10
    emergency-bypassed: 5 #50
  udp:
    new: 5 #30
    established: 20 #300
    bypassed: 5 #100
    emergency-new: 2 #10
    emergency-established: 10 #100
    emergency-bypassed: 5 #50
  icmp:
    new: 5 #30
    established: 5 #300
    bypassed: 5 #100
    emergency-new: 2 #10
    emergency-established: 10 #100
    emergency-bypassed: 5 #50

stream:
  memcap: 16gb                  ########################################
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly: 
    memcap: 16gb            ##########################################
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes

host:
  hash-size: 8192      #4096
  prealloc: 2000       #1000
  memcap: 64mb         #32mb

decoder:

  teredo:
    enabled: true
    ports: $TEREDO_PORTS # syntax: '[3544, 1234]' or '3533' or 'any'.
  vxlan:
    enabled: true
    ports: $VXLAN_PORTS # syntax: '8472, 4789'
  # ERSPAN Type I decode support
  erspan:
    typeI:
      enabled: false

detect:
  #profile: medium        
  #custom-values:
    #toclient-groups: 3
    #toserver-groups: 25
  #sgh-mpm-context: auto
  #inspection-recursion-limit: 3000
  
  profile: custom   #################################################
  custom-values:   #################################################
    toclient-groups: 300     #################################################
    toserver-groups: 300     #################################################
    toclient-sp-groups: 300   #################################################
    toclient-dp-groups: 300   #################################################
    toserver-src-groups: 300   #################################################
    toserver-dst-groups: 5400   #################################################
    toserver-sp-groups: 300    #################################################
    toserver-dp-groups: 350    #################################################
  sgh-mpm-context: full      #################################################
  inspection-recursion-limit: 3000



  prefilter:
    default: mpm

  grouping:

  profiling:

    grouping:
      dump-to-disk: false
      include-rules: false      # very verbose
      include-mpm-stats: false

mpm-algo: auto


spm-algo: auto

threading:                      ##################################################
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "0" ]  # include only these CPUs in affinity settings
        mode: "balanced"
        prio:
          default: "high"        
        
    - worker-cpu-set:
        cpu: [ "4-9","20-29" ]
        #cpu: [ "10-19" ]
        mode: "exclusive"
        prio:
          default: "high"        
                
    - verdict-cpu-set:
        cpu: [ "1-3" ]
        prio:
          default: "high"
         
  detect-thread-ratio: 1.0

luajit:
  states: 128

profiling:
  rules:
    enabled: yes
    filename: rule_perf.log
    append: yes
    limit: 10
    json: yes
  keywords:
    enabled: yes
    filename: keyword_perf.log
    append: yes

  prefilter:
    enabled: yes
    filename: prefilter_perf.log
    append: yes
    
  rulegroups:
    enabled: yes
    filename: rule_group_perf.log
    append: yes

  packets:
    enabled: yes
    filename: packet_stats.log
    append: yes

    csv:
      enabled: no
      filename: packet_stats.csv

  locks:
    enabled: no
    filename: lock_stats.log
    append: yes

  pcap-log:
    enabled: no
    filename: pcaplog_stats.log
    append: yes

nfq:

nflog:

  - group: 2
    # netlink buffer size
    buffer-size: 40000 #18432            ##############################################
    # put default value here
  - group: default
    # set number of packet to queue inside kernel
    qthreshold: 1
    # set the delay before flushing packet in the queue inside kernel
    qtimeout: 5  #100                ################################################
    # netlink max buffer size
    max-size:  500000  #20000           ##############################################

capture:

netmap:

 - interface: eth2         #########
 - interface: default

pfring:
  - interface: zc:eno3@0
    threads: 1
  - interface: zc:eno3@1
    threads: 1
  - interface: zc:eno3@2
    threads: 1
  - interface: zc:eno3@3
    threads: 1
  - interface: zc:eno3@4
    threads: 1
  - interface: zc:eno3@5
    threads: 1
  - interface: zc:eno3@6
    threads: 1
  - interface: zc:eno3@7
    threads: 1
    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 99

    # Default PF_RING cluster type. PF_RING can load balance per flow.
    # Possible values are cluster_flow or cluster_round_robin.
    cluster-type: cluster_flow

ipfw:


napatech:

    streams: ["0-3"]

    auto-config: yes

    ports: [all]

    hashmode: hash5tuplesorted
dpdk:
    pre-acl: yes
    post-acl: yes
    tx-fragment: no
    rx-reassemble: no
    mode: IDS
    input-output-map: [ 0 ]
    eal-args: ["--log-level=eal,8","-l 0","--file-prefix=suricata-dpdk","-m 4096"]
    mempool-port-common: [name=suricata-port,n=267456,elt_size=2000,private_data_size=0]
    mempool-reas-common: [name=suricatareassembly,n=16000,elt_size=10000,private_data_size=0]
    port-config-0: [mempool=portpool,queues=6,rss-tuple=3,ebpf=NULL,jumbo=no,mtu=1500]
    #port-config-1: [mempool=portpool,queues=8,rss-tuple=3,ebpf=NULL,jumbo=no,mtu=1500]
    ipv4-preacl: 1024
    ipv6-preacl: 1024

default-rule-path: /etc/suricata/rules

rule-files:
 - activex.rules
 - adware_pup.rules
 - attack_response.rules
 - chat.rules
 - coinminer.rules
 - current_events.rules


classification-file: /etc/suricata/classification.config
reference-config-file: /etc/suricata/reference.config
# threshold-file: /etc/suricata/threshold.config

I do not face the issue you mention. My setup is:

  1. DPDK: 19.11.6 LTS
  2. Platform: Intel® Xeon® CPU E5-2699 v4 @ 2.20GHz
  3. NIC: 4 * 10G X710

Note:

  1. I have not shared any pull request; the one you have referred to here is from personal GitHub experimentation.
  2. This thread is for discussing the DPDK data acquisition layer and offload, so please use it for that discussion.
  3. Please feel free to raise issues or ask for clarification on the pull request.

An update on the code merge is shared below:

  1. Shared an early draft of a working pull request in Jan 2021 with @vjulien and team.
  2. Discussions on a vendor-neutral contribution agreement started in Feb 2021.
  3. The new contribution agreement was finalized as of the 3rd week of August 2021.
  4. Since delays for the current PR were ongoing, I helped review v2, v3, v4 and v5 (ongoing) of the new pull request https://github.com/OISF/suricata/pull/6317

Please reach out to @lukashino, as the new merge request enabling DPDK in Suricata is https://github.com/OISF/suricata/pull/6317
