VPP-Suricata Integration: Library Mode Packet Injection

We’re integrating VPP (Vector Packet Processing) with Suricata 8.0.0 in library mode for real-time IDPS functionality. The goal is to inject packets from the VPP plugin thread into Suricata for detection with Proofpoint rules, then receive native EVE JSON alerts.

Three Approaches Attempted

Approach 1: Direct Library Integration

API Flow:


// VPP Plugin Thread:
LibSuricataInjectRaw(pkt_data, pkt_len, linktype, ts_ns);
    ↓ (inside the libvppinject.c wrapper around the Suricata library APIs)
PacketGetFromQueueOrAlloc() → PacketCopyData() → DecodeEthernet()
    ↓
Manual signature matching → AlertQueueAppend() → Custom EVE JSON generation
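For concreteness, this is roughly what the Approach 1 wrapper in libvppinject.c does (condensed and illustrative only; thread-vars setup is omitted, and the internal decoder/packet-pool signatures are taken from the 8.0.0 tree we built against, so treat them as assumptions):

#include "suricata-common.h"   /* core types */
#include "decode.h"            /* Packet, PacketGetFromQueueOrAlloc, DecodeEthernet, ... */

static ThreadVars *g_tv;        /* created during init (not shown) */
static DecodeThreadVars *g_dtv; /* created during init (not shown) */

int LibSuricataInjectRaw(const uint8_t *pkt_data, uint32_t pkt_len,
                         int linktype, uint64_t ts_ns)
{
    Packet *p = PacketGetFromQueueOrAlloc();
    if (p == NULL)
        return -1;

    p->datalink = linktype;
    /* set p->ts from ts_ns here (SCTime_t in recent releases) */
    (void)ts_ns;

    if (PacketCopyData(p, pkt_data, pkt_len) != 0) {
        PacketFreeOrRelease(p);
        return -1;
    }

    /* This is where the crashes happened: DecodeEthernet() updates stats
     * counters that were never registered for this external VPP thread. */
    DecodeEthernet(g_tv, g_dtv, p, GET_PKT_DATA(p), GET_PKT_LEN(p));

    /* manual signature matching + custom EVE JSON generation followed here */
    return 0;
}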

Issues:

  • Counter assertion failures in library mode (StatsRegisterCounter)

  • Thread context mismatch (VPP thread vs Suricata thread)

  • DecodeEthernet crashes due to stats counter registration

Approach 2: Crash-Safe Library Mode

API Flow:


// VPP Plugin Thread:
LibSuricataInjectRaw(pkt_data, pkt_len, linktype, ts_ns);
    ↓ (crash-safe version)
PacketGetFromQueueOrAlloc() → PacketCopyData() → Manual protocol detection
    ↓
Bypass all decoders → Direct signature matching → Custom EVE JSON

Issues:

  • Disabled decoders limit detection capabilities

  • Bypasses Suricata’s native pipeline

  • Manual EVE generation instead of native output

Approach 3: Queue-Based Input Module

API Flow:


// VPP Plugin Thread:
PacketGetFromQueueOrAlloc() → PacketCopyData() → VppPacketQueuePush()
    ↓ (thread-safe queue handoff)
// Suricata Input Module Thread:
VppPacketQueuePop() → TmThreadsCaptureInjectPacket(tv, p);
    ↓
Native Suricata pipeline → Full decoding → Detection → Native EVE output
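To make the handoff concrete, here is a trimmed sketch of the queue used in Approach 3. VppPacketQueuePush/Pop are our own helpers (not Suricata APIs); the pop side runs in the Suricata-side worker thread and hands packets to TmThreadsCaptureInjectPacket as shown above:

#include <pthread.h>
#include "suricata-common.h"
#include "decode.h"       /* Packet */

typedef struct VppPktNode_ {
    Packet *p;
    struct VppPktNode_ *next;
} VppPktNode;

static VppPktNode *q_head, *q_tail;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;

/* Producer: VPP plugin thread. The Packet was already filled with
 * PacketGetFromQueueOrAlloc() + PacketCopyData(). */
void VppPacketQueuePush(Packet *p)
{
    VppPktNode *n = SCCalloc(1, sizeof(*n));
    if (n == NULL)
        return; /* drop on allocation failure */
    n->p = p;
    pthread_mutex_lock(&q_lock);
    if (q_tail != NULL)
        q_tail->next = n;
    else
        q_head = n;
    q_tail = n;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* Consumer: the Suricata-side worker thread that owns ThreadVars *tv. */
Packet *VppPacketQueuePop(void)
{
    pthread_mutex_lock(&q_lock);
    while (q_head == NULL)
        pthread_cond_wait(&q_cond, &q_lock);
    VppPktNode *n = q_head;
    q_head = n->next;
    if (q_head == NULL)
        q_tail = NULL;
    pthread_mutex_unlock(&q_lock);
    Packet *p = n->p;
    SCFree(n);
    return p;
}

/* Worker loop on the Suricata side:
 *     Packet *p = VppPacketQueuePop();
 *     TmThreadsCaptureInjectPacket(tv, p);   // native pipeline from here on
 */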

Configuration:


suricata:
  library-mode: true
  runmode: library

Questions for Community

  1. Library Mode Thread Context: Is TmThreadsCaptureInjectPacket() the correct API for library mode packet injection? Should packets always be injected from Suricata’s thread context rather than external threads?

  2. Input Module Registration: For custom input modules in library mode, should we register via TmModuleReceive*Register() functions or is there a library-specific registration mechanism?

  3. Stats Counter Issue: Why do StatsRegisterCounter() calls fail in library mode? Is there a proper way to initialize counters, or should library mode avoid them entirely?

  4. Best Practice Architecture: For high-performance packet processing (VPP integration), is the queue-based approach (Approach 3) the recommended pattern, or is there a more efficient direct injection method?

  5. EVE Output: In library mode with custom input modules, does native EVE JSON output work automatically, or do we need additional configuration for outputs.eve-log?

The queue-based approach (Approach 3) works but adds latency. Is there a direct library API that provides the same native processing without the queue overhead?

Environment: Suricata 8.0.0, VPP 23.06, Ubuntu 20.04, library mode integration

Queuing is the only supported way right now, as shown in our example: suricata/examples/lib/custom/main.c at main · OISF/suricata · GitHub

If you have use cases that don’t work with this setup, I recommend opening a ticket discussing the use case.

Thanks.

Thanks for the response.
Query 1: Could you please give us an example for runmode live instead of offline? That would be really helpful. (Referring to this runmode setting: SCConfSetFromString("runmode=live", 1).)
Query 2: When the Proofpoint rules match a received ICMP packet, we expect the Suricata library to raise an alert / log the packet in the eve.json file. We have set the eve.json path in suricata.yaml and loaded it, but we still don't see any logs in eve.json when ICMP packets are received. (The Proofpoint rule file has: alert icmp any any -> any any (msg:"Any ICMP Traffic"; sid:1000005; rev:1;))

I’ve updated the ticket, which includes a link to a live example: Support #8082: What APIs should be used for Suricata as a library in runmode=live - Suricata - Open Information Security Foundation

I did give the new example a test with a rule file, test.rules:

alert icmp any any -> any any (msg:"Any ICMP Traffic"; sid:1; rev:1;)

And have run it with the live example like:

./live -i eth0 -- -S ./test.rules

And I am seeing ICMP alerts in the eve.json in the current directory.

Thanks a lot for the example. It really helps. We will try this out and get back.

Here is a brief overview of the VPP IDPS packet inspection goal we are working on:

  1. The VPP binary has a plugin for each module; likewise, we have a plugin for IDPS (Intrusion Detection and Prevention System).
  2. Packets received on any interface are copied to the CPU in the VPP module, which hands them to the IDPS plugin via a callback function (a rough sketch of this callback is shown below).
  3. Each packet (pkt buffer) received by the IDPS plugin needs to be injected into Suricata (running as a library) for inspection. Updated Proofpoint rules are downloaded from a cloud server daily or weekly, depending on the configuration/design.
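As a rough illustration of steps 2 and 3 (names like idps_pkt_callback and vpp_suricata_inject are placeholders, not our actual code):

#include <vlib/vlib.h>
#include <vnet/vnet.h>

/* Hypothetical injection helper on the Suricata side, e.g. the queue-based
 * handoff (Approach 3) discussed earlier in this thread. */
extern void vpp_suricata_inject(const uint8_t *data, uint32_t len,
                                int linktype, uint64_t ts_ns);

/* Hypothetical IDPS callback: VPP hands us a vlib_buffer_t and we pass the
 * payload toward Suricata. Chained (multi-segment) buffers are not handled
 * in this sketch. */
static void idps_pkt_callback(vlib_main_t *vm, vlib_buffer_t *b)
{
    const uint8_t *data = vlib_buffer_get_current(b);
    uint32_t len = b->current_length;

    (void)vm;
    vpp_suricata_inject(data, len, 1 /* LINKTYPE_ETHERNET */, 0 /* ts_ns */);
}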

My next question is: how do we handle ruleset updates?
What Suricata library APIs should be used for that?

Thanks for the response and the live mode example.

Could you please extend the runmode=live example to demonstrate how to re-read or reload the ruleset in non-blocking mode, suitable for use in a library context?
Specifically, we are interested in how to trigger rule reloads (e.g., after updating proofpoint.rules) without blocking packet processing or causing downtime.

An example or recommended API usage for non-blocking rule reload in runmode=live would be very helpful.

Hi Indira,

I’m also very interested in the elegant integration of libsuricata within VPP. Have you successfully implemented it yet?

As far as I know, Suricata uses a Unix socket to receive rule update commands and provides the suricatasc tool to interface with it. The implementation of these commands can be found in unix-manager.c; you can search for the keyword “reload-rules” to see the logic.

Regarding enabling this feature in library mode, I think you should look into UnixManagerThreadSpawnNonRunmode, which serves as the entry point for spawning the Unix manager in non-runmode contexts.

This approach allows you to maintain the management socket even when Suricata is embedded within VPP, ensuring that suricatasc can still trigger hot-reloads without interrupting the VPP worker threads.
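A rough sketch of what this could look like on the embedding side; note that the exact prototype of UnixManagerThreadSpawnNonRunmode() should be checked against unix-manager.h in your tree (the boolean argument below is an assumption):

#include "suricata-common.h"
#include "unix-manager.h"

/* Keep the unix command socket alive while Suricata runs embedded in VPP,
 * so rule reloads can be triggered externally without stopping the workers.
 * NOTE: the boolean argument is an assumption; check unix-manager.h. */
void idps_enable_unix_socket(void)
{
    UnixManagerThreadSpawnNonRunmode(true);

    /* After updating proofpoint.rules on disk, trigger the reload from
     * outside the process, e.g.:
     *     suricatasc -c reload-rules
     * ("reload-rules" is handled in unix-manager.c, as mentioned above.) */
}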

Thanks @zhenjun. We are in the process of integrating Suricata with VPP. We will try the logic you mentioned as well.

@zhenjun or @ish,
We have a VPP plugin library which receives packets as packet buffers.
This library is linked against the Suricata library.
During VPP plugin init, we perform Suricata init, set the runmode, and load the YAML.
We also have a new file, libvppinject.c, which is compiled as part of the Suricata library and included in our locally built libsuricata.so.
Inside libvppinject.c, we create a new thread, in which we call:

g_vpp_tv = SCRunModeLibCreateThreadVars(worker_id);
if (SCRunModeLibSpawnWorker(g_vpp_tv) != 0) {
    SCLogError("VppWorkerThreadMain: SCRunModeLibSpawnWorker failed");
    return NULL;
}

But we always notice that SCRunModeLibSpawnWorker waits/blocks indefinitely. Is this a blocking API?
Here is the backtrace for reference:

  8    Thread 0x7f31e483a640 (LWP 2589869) "W#01"          0x00007f32398d4035 in __GI___clock_nanosleep (

    clock_id=clock_id@entry=0, flags=flags@entry=0, req=req@entry=0x7f31e4839bd0, rem=rem@entry=0x0)

    at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:48

(gdb) thread 8

[Switching to thread 8 (Thread 0x7f31e483a640 (LWP 2589869))]

#0  0x00007f32398d4035 in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,

    req=req@entry=0x7f31e4839bd0, rem=rem@entry=0x0) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:48

48      in ../sysdeps/unix/sysv/linux/clock_nanosleep.c

(gdb) bt

#0  0x00007f32398d4035 in __GI___clock_nanosleep (clock_id=clock_id@entry=0, flags=flags@entry=0,

    req=req@entry=0x7f31e4839bd0, rem=rem@entry=0x0) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:48

#1  0x00007f32398d8847 in __GI___nanosleep (req=req@entry=0x7f31e4839bd0, rem=rem@entry=0x0)

    at ../sysdeps/unix/sysv/linux/nanosleep.c:25

#2  0x00007f3239902ad9 in usleep (useconds=useconds@entry=100) at ../sysdeps/posix/usleep.c:31

#3  0x00007f31f165556a in TmThreadsWaitForUnpause (tv=0x7f1db80017b0) at tm-threads.c:369

#4  TmThreadsWaitForUnpause (tv=0x7f1db80017b0) at tm-threads.c:363

#5  0x00007f31f1655fb8 in TmThreadsLib (td=0x7f1db80017b0) at tm-threads.c:399

#6  0x00007f31f1657c63 in TmThreadLibSpawn (tv=tv@entry=0x7f1db80017b0) at tm-threads.c:1784

#7  0x00007f31f160e17d in SCRunModeLibSpawnWorker (td=0x7f1db80017b0) at runmode-lib.c:99

#8  0x00007f31f14d258d in VppWorkerThreadMain (arg=<optimized out>) at libvppinject.c:736

#9  0x00007f323988a816 in start_thread (arg=<optimized out>) at pthread_create.c:442

#10 0x00007f323990c990 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

(gdb) c

Continuing.

 

SCRunModeLibSpawnWorker waits/blocks indefinitely. Is this a blocking API?

Sort of. It waits for the rest of Suricata to be ready, then returns. It does involve some thread synchronization.

If you look at the example: suricata/examples/lib/custom/main.c at main · OISF/suricata · GitHub

You’ll see that SimpleWorker is spawned in a thread, which then calls SCRunModeLibSpawnWorker; meanwhile, SuricataPostInit is called. This is where some thread synchronization is required. It's not ideal, but unlikely to change until Suricata 9.0.
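Roughly, the ordering looks like this (condensed, using the names from this thread and the linked example; idps_start is just a placeholder for your own init path, and exact signatures may differ between releases):

#include <pthread.h>
#include "suricata-common.h"
#include "runmode-lib.h"   /* SCRunModeLibCreateThreadVars / SCRunModeLibSpawnWorker */
#include "suricata.h"      /* SuricataPostInit */

static void *VppWorkerThreadMain(void *arg)
{
    int worker_id = *(int *)arg;

    ThreadVars *tv = SCRunModeLibCreateThreadVars(worker_id);
    if (tv == NULL)
        return NULL;

    /* Blocks until the engine is fully up; released once the main thread
     * has run SuricataPostInit(). */
    if (SCRunModeLibSpawnWorker(tv) != 0)
        return NULL;

    /* ... pop packets from the VPP queue and inject them here ... */
    return NULL;
}

int idps_start(void)   /* placeholder for the VPP plugin's init path */
{
    static int worker_id = 1;
    pthread_t t;

    /* 1. Spawn the worker thread first... */
    pthread_create(&t, NULL, VppWorkerThreadMain, &worker_id);

    /* 2. ...then finish bringing Suricata up from the main thread. This is
     * what lets SCRunModeLibSpawnWorker() in the worker return. */
    SuricataPostInit();
    return 0;
}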

May I know when Suricata 9.0 will be available?

Not anytime soon. We mostly stick to our roadmap dates, so you can get an idea here: https://redmine.openinfosecfoundation.org/versions/204