Need help setting up DPDK OVS with Suricata

Please include the following information with your help request:

  • Suricata version : 7.0.10
  • Operating system and/or Linux distribution : Ubuntu 22.04.5 6.1.0-37-amd64
  • How you installed Suricata (from source, packages, something else) : Built from release source

Hi there,
I am trying to set up a DPDK-based path that flows traffic from eth0 to OVS and then to Suricata via vdev interfaces. I have DPDK set up (device binding, hugepages, etc.), built Suricata with DPDK support enabled, and set up OVS with DPDK, including a bridge. However, I am unable to start Suricata; it fails with the error:
Error: dpdk: DPDK configuration could not be parsed
my config file for suricata:

%YAML 1.1
---
dpdk:
  eal-params:
    proc-type: primary
#    iova-mode: pa
    vdev: ['net_vhost0,iface=/var/run/openvswitch/vhost-user0.sock', 'net_vhost1,iface=/var/run/openvswitch/vhost-user1.sock']
#    a: "0000:00:12.0"
#    no-huge: true
#    m: 1000
#    n: 1
#    v: true
#    main-lcore: 0
  interfaces:
#    - interface: "0000:00:12.0"
    - interface: "net_vhost0"
    - interface: "net_vhost1"
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ "all" ]

runmode: workers

Runtime messages:

suricata -c ./dpdk-config.yml --dpdk -vv
Notice: suricata: This is Suricata version 7.0.10 RELEASE running in SYSTEM mode
Info: cpu: CPUs/cores online: 4
Info: suricata: Setting engine mode to IDS mode by default
Info: suricata: No 'host-mode': suricata is in IDS mode, using default setting 'sniffer-only'
Warning: counters: stats are enabled but no loggers are active
Info: detect: No signatures supplied.
Warning: dpdk: "all" specified in worker CPU cores affinity, excluding management threads
Error: dpdk: DPDK configuration could not be parsed

OVS Bridge :

ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs="0000:00:12.0"

ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuserclient options:vhost-server-path=/var/run/openvswitch/vhost-user0.sock
ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuserclient options:vhost-server-path=/var/run/openvswitch/vhost-user1.sock

dpdk-devbind.py -s :

dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
0000:00:12.0 'Virtio network device 1000' drv=igb_uio unused=virtio_pci

What I want is a single-machine setup that keeps internet and SSH access while Suricata does DPDK-based filtering. What is this parsing error? I have tried a lot, to no avail. Are there any docs on how to define this correctly?

Apart from Suricata, I don’t see any other errors anywhere.
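For reference, here is roughly how I initialized OVS with DPDK before adding the ports shown below (a sketch of the usual steps; the hugepage count here is an example, not my exact value):

```shell
# Reserve 2 MB hugepages for DPDK and tell OVS to initialize its DPDK datapath
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# DPDK ports only work on a bridge using the userspace (netdev) datapath
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
```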

Hi Aayush,

Thanks for reaching out.
Your config seems sound to me and works on Suricata 8 (master branch), but not on Suricata 7. I will check it out and send a fix soon.

Edit:
I just realized this is something I fixed last week (coming in 7.0.11): leaving copy-iface/copy-mode undefined leads to a parsing error. Once I defined those in your config, I was able to move forward.

%YAML 1.1
---
dpdk:
  eal-params:
    proc-type: primary
#    iova-mode: pa
    vdev: ['net_vhost0,iface=/var/run/openvswitch/vhost-user0.sock', 'net_vhost1,iface=/var/run/openvswitch/vhost-user1.sock']
#    a: "0000:00:12.0"
#    no-huge: true
#    m: 1000
#    n: 1
#    v: true
#    main-lcore: 0
  interfaces:
#    - interface: "0000:00:12.0"
    - interface: "net_vhost0"
      copy-iface: none
      copy-mode: ids
    - interface: "net_vhost1"
      copy-iface: none
      copy-mode: none
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ "all" ]

runmode: workers

Hello, thanks @lukashino! I can confirm the latest dev release starts. My config file:

dpdk:
  eal-params:
    proc-type: primary
    vdev:
      - "net_vhost0,iface=/var/run/openvswitch/vhost-user1.sock,queues=1"
      - "net_vhost1,iface=/var/run/openvswitch/vhost-user2.sock,queues=1"
  interfaces:
    - interface: "net_vhost0"
    - interface: "net_vhost1"
    - interface: default
      threads: 1
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ "all" ]

runmode: workers

default-rule-path: /root
rule-files:
  - test.rules

outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      level: info
      filename: suricata.log
  - eve-log:
      enabled: yes
      filetype: regular
      filename: eve.json
  - fast:
      enabled: yes
      filename: fast.log
      append: yes

I am still facing an error, specifically on the OVS/Suricata side: they do not seem to be able to connect. I have checked various other things like hugepages, and everything is running as the root user. I am able to ping 1.1.1.1, but it seems traffic only flows between dpdk0 and LOCAL.

ovs-ofctl dump-ports br0
OFPST_PORT reply (xid=0x2): 4 ports
  port  "vhost-user1": rx pkts=?, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
           tx pkts=?, bytes=?, drop=?, errs=?, coll=?
  port  "vhost-user2": rx pkts=?, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
           tx pkts=?, bytes=?, drop=?, errs=?, coll=?
  port  dpdk0: rx pkts=9, bytes=1008, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=9, bytes=686, drop=0, errs=0, coll=?
  port LOCAL: rx pkts=389, bytes=28602, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=181, bytes=20334, drop=0, errs=0, coll=0

Suricata gives out following error:

Notice: suricata: This is Suricata version 8.0.0-dev (055d270b9 2025-06-27) running in SYSTEM mode [LogVersion:suricata.c:1203]
Info: cpu: CPUs/cores online: 4 [UtilCpuPrintSummary:util-cpu.c:149]
Info: suricata: Setting engine mode to IDS mode by default [PostConfLoadedSetup:suricata.c:2790]
Warning: suricata: Invalid conf entry found for "host-mode".  Using default value of "auto". [PostConfLoadedSetupHostMode:suricata.c:2667]
Warning: runmodes: No output module named console [RunModeInitializeOutputs:runmodes.c:868]
Info: logopenfile: eve-log output device (regular) initialized: eve.json [SCConfLogOpenGeneric:util-logopenfile.c:648]
Info: logopenfile: fast output device (regular) initialized: fast.log [SCConfLogOpenGeneric:util-logopenfile.c:648]
Warning: runmodes: No output module named file [RunModeInitializeOutputs:runmodes.c:868]
Warning: counters: stats are enabled but no loggers are active [StatsInitCtxPostOutput:counters.c:313]
Info: detect: 1 rule files processed. 1 rules successfully loaded, 0 rules failed, 0 rules skipped [SigLoadSignatures:detect-engine-loader.c:473]
Info: threshold-config: Threshold config parsed: 0 rule(s) found [SCThresholdConfParseFile:util-threshold-config.c:1015]
Info: detect: 1 signatures processed. 1 are IP-only rules, 0 are inspecting packet payload, 0 inspect application layer, 0 are decoder event only [SigPrepareStaage1:detect-engine-build.c:1829]
Warning: dpdk: "all" specified in worker CPU cores affinity, excluding management threads [ConfigSetThreads:runmode-dpdk.c:407]
Warning: dpdk: net_vhost0: changing MTU is not supported, current MTU: 1500 [DeviceConfigure:runmode-dpdk.c:1771]
Info: dpdk: net_vhost0: creating 1 packet mempools of size 65535, cache size 257, mbuf size 2176 [DeviceConfigureQueues:runmode-dpdk.c:1459]
Info: runmodes: net_vhost0: creating 1 thread [RunModeSetLiveCaptureWorkersForDevice:util-runmodes.c:258]
Warning: dpdk: net_vhost1: changing MTU is not supported, current MTU: 1500 [DeviceConfigure:runmode-dpdk.c:1771]
Info: dpdk: net_vhost1: creating 1 packet mempools of size 65535, cache size 257, mbuf size 2176 [DeviceConfigureQueues:runmode-dpdk.c:1459]
Info: runmodes: net_vhost1: creating 1 thread [RunModeSetLiveCaptureWorkersForDevice:util-runmodes.c:258]
VHOST_CONFIG: (device) (-1) device not found.
VHOST_CONFIG: (device) (-1) device not found.
Notice: threads: Threads created -> W: 2 FM: 1 FR: 1   Engine started. [TmThreadWaitOnThreadRunning:tm-threads.c:1982]

rule file:

drop icmp any any -> 1.1.1.1 any (msg:"Block ICMP to 1.1.1.1"; sid:1000001; rev:1;)

The rest of the config remains the same as in my previous message. I also tried the same thing from the /tmp folder to rule out permission issues.

I don’t currently have OvS running, but looking at e.g. https://superuser.com/questions/1793929/how-to-send-traffic-with-testpmd-to-openvswitch-when-ovs-is-using-dpdkvhostuserc
they are using the net_virtio_user virtual driver instead of the net_vhost driver that you use on the Suricata side. Can you maybe retry with that?
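For example, your eal-params vdev entries would become something like this (a sketch, untested on my side; since your OVS ports are of type dpdkvhostuserclient, OVS acts as the client, so the virtio_user side has to create the socket, hence server=1):

```yaml
dpdk:
  eal-params:
    proc-type: primary
    vdev:
      - "net_virtio_user0,path=/var/run/openvswitch/vhost-user1.sock,server=1,queues=1"
      - "net_virtio_user1,path=/var/run/openvswitch/vhost-user2.sock,server=1,queues=1"
  interfaces:
    - interface: "net_virtio_user0"
    - interface: "net_virtio_user1"
```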

I would suggest first making the setup work with testpmd, as it matches your use case exactly (forwarding between two ports), and only then moving the DPDK parameters into the Suricata config. You will also get more search results on Google that way.
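A testpmd run against the same sockets could look roughly like this (a sketch; --no-pci and the file-prefix are there to avoid clashing with the physical port OVS already owns, and the core list is an assumption for your 4-core machine):

```shell
dpdk-testpmd -l 0-2 --no-pci --file-prefix=testpmd \
  --vdev 'net_virtio_user0,path=/var/run/openvswitch/vhost-user1.sock,server=1,queues=1' \
  --vdev 'net_virtio_user1,path=/var/run/openvswitch/vhost-user2.sock,server=1,queues=1' \
  -- -i --forward-mode=io
```

In the interactive prompt you can then run `start` and check `show port stats all` to confirm packets actually move between the two vhost-user ports.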

Btw, if you intend to connect the two interfaces (to pass traffic from net_vhost0 to net_vhost1 and vice versa), you need to define copy-iface and copy-mode anyway; otherwise each port is receive-only and Suricata won’t transmit packets (the IDS vs. IPS difference).

So your config should be:

dpdk:
  eal-params:
    proc-type: primary
    vdev:
      - "net_vhost0,iface=/var/run/openvswitch/vhost-user1.sock,queues=1"
      - "net_vhost1,iface=/var/run/openvswitch/vhost-user2.sock,queues=1"
  interfaces:
    - interface: "net_vhost0"
      copy-iface: net_vhost1
      copy-mode: ips
    - interface: "net_vhost1"
      copy-iface: net_vhost0
      copy-mode: ips
    - interface: default
      threads: 1
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ "all" ]

runmode: workers

default-rule-path: /root
rule-files:
  - test.rules

outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      level: info
      filename: suricata.log
  - eve-log:
      enabled: yes
      filetype: regular
      filename: eve.json
  - fast:
      enabled: yes
      filename: fast.log
      append: yes
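One more thing worth checking on the OVS side is whether the vhost-user sockets ever connect (a sketch; vhost-user1/vhost-user2 are the port names from your ovs-vsctl commands):

```shell
# The status column shows vhost-user connection details once a peer attaches
ovs-vsctl get Interface vhost-user1 status
ovs-vsctl get Interface vhost-user2 status

# The sockets should exist and be readable by whichever side acts as client
ls -l /var/run/openvswitch/
```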