Suricata performance tuning

Hi everyone,

I have built a sensor on some Dell hardware with decent system capacity: dual Xeon Gold CPUs, 377 GB of RAM, and a Silicom 40 Gbps FPGA capture card using a PF_RING FPGA license from ntop, with Suricata 5.0.2 compiled from source.

Suricata is connected to a packet broker and is listening to an aggregate feed of traffic, approximately 5 Gbps. I have the below config in suricata.yaml, and it seems to be multithreaded and working nicely.

# PF_RING configuration, for use with native PF_RING support.
# For more info see http://www.ntop.org/products/pf_ring/
pfring:
  - interface: fbcard:0:a:0
    # Number of receive threads. If set to 'auto' Suricata will first try
    # to use CPU (core) count and otherwise RSS queue count.
    #threads: 24

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    #cluster-id: 99

    # Default PF_RING cluster type. PF_RING can load balance per flow.
    # Possible values are cluster_flow or cluster_round_robin.
    #cluster-type: cluster_flow

    # bpf filter for this interface
    #bpf-filter: tcp

    # If bypass is set then the PF_RING hw bypass is activated, when supported
    # by the interface in use. Suricata will instruct the interface to bypass
    # all future packets for a flow that need to be bypassed.
    #bypass: yes

    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: Suricata uses a statistical approach to detect when
    #    checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    checksum-checks: auto
  # Second interface
  #- interface: eth1
  #  threads: 3
  #  cluster-id: 93
  #  cluster-type: cluster_flow
  # Put default values here
  #- interface: fbcard:0:a:0
  #  threads: 2

# Suricata is multi-threaded. Here the threading can be influenced.
threading:
  set-cpu-affinity: yes
  # Tune cpu affinity of threads. Each family of threads can be bound
  # to specific CPUs.
  #
  # These 2 apply to all runmodes:
  # management-cpu-set is used for flow timeout handling, counters
  # worker-cpu-set is used for 'worker' threads
  #
  # Additionally, for autofp these apply:
  # receive-cpu-set is used for capture threads
  # verdict-cpu-set is used for IPS verdict threads
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ 40 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:

The problem I have: the ring buffer on the card is set to 128 GB of capacity, but about 5 minutes after starting the capture process I can see a large number of discards accumulating.

Speaking with the vendor of the card, they say I can increase the size of the buffer, but ultimately it's the application not keeping up with the ingestion rate, so I need to optimise the application.

The config above shows how I have configured Suricata for the FPGA card using PF_RING (cluster-id commented out etc. due to it crashing) and the CPU-affinity settings. Does this look right, and should I be making tweaks elsewhere so that Suricata can keep up?

I am using the Emerging Threats Pro ruleset.

Happy to provide further config or ./configure parameters. Where can I look in the logs or stats file to identify potential problems with Suricata processing this traffic load? I can set up filters on the broker to slice off some traffic if needed to reduce load. Based on the card's hardware spec and testing I have done at the card layer, I would expect to be able to capture around 30 Gbps.
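For context, the only checking I have done so far is eyeballing the periodic counters, roughly like this (paths assume the default logging setup, and the socket query assumes unix-command is enabled in the yaml):

# Watch capture/drop counters as stats.log is written
tail -f /var/log/suricata/stats.log | grep -iE 'capture|drop'

# Or query the running engine over the unix socket
suricatasc -c dump-counters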


Edit: I thought I'd add the system output from checking the status of the daemon.
The ioctl warnings are expected, as Linux cannot correctly identify the interface I'm using, it being a PF_RING FPGA interface ("fbcard:0:a:0"). The rest of the output relates to the commented-out sections; when they are uncommented, Suricata crashes.

● suricata.service - Suricata Intrusion Detection Service
   Loaded: loaded (/etc/systemd/system/suricata.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-03-29 22:28:04 UTC; 32min ago
  Process: 29196 ExecStartPre=/bin/rm -f /var/run/suricata.pid (code=exited, status=0/SUCCESS)
 Main PID: 29203 (Suricata-Main)
    Tasks: 55
   Memory: 1.8G
      CPU: 1h 58min 48.622s
   CGroup: /system.slice/suricata.service
           └─29203 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --pfring

Mar 29 22:28:04 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:04 - (suricata.c:1084) <Notice> (LogVersion) -- This is Suricata version 5.0.2 RELEASE running in SYSTEM mode
Mar 29 22:28:44 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:44 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Mar 29 22:28:44 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:44 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Mar 29 22:28:44 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:44 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Mar 29 22:28:44 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:44 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Mar 29 22:28:44 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:44 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Mar 29 22:28:44 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:44 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Mar 29 22:28:44 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:44 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Mar 29 22:28:44 sensor01 suricata[29203]: [29238] 29/3/2020 -- 22:28:44 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Mar 29 22:28:44 sensor01 suricata[29203]: [29203] 29/3/2020 -- 22:28:44 - (tm-threads.c:2170) <Notice> (TmThreadWaitOnThreadInit) -- all 49 packet processing threads, 4 management threads initialized, engine started.

Cheers,
Nathan

I have two suggestions to begin with, though I am not an expert on FPGAs and Silicom cards.

Looking at the buffer settings (128 GB), they seem rather big for 5 Gbps traffic inspection.

It looks to me like your CPU affinity is not set appropriately. (For example, you have only one CPU - CPU 0 - dedicated to management; that seems very low.) You can have a look here for some guidance on how to set it up, especially in terms of NUMA allocation.

You can also start Suricata without rules (-S /dev/null) and see if the buffers fill up again.
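For example, reusing your service's command line from above:

suricata -c /etc/suricata/suricata.yaml --pfring -S /dev/null -vvv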

What are the specific PF_RING kernel module settings you use for the NIC?

Thanks so much for your reply. I have been troubleshooting other ring buffer nuances for the past little while, so apologies for my delayed response.

So, taking a better look at the CPU affinity setup, and to give some context: I have a number of sensors performing similar functions, but Suricata seems to be the only one consistently "discarding packets". I have discovered that discarded packets can be represented in a number of ways: if received frames are above a certain frame length (e.g. >1518) they can match a filter and be discarded, but that is not occurring in this case; all frames received into the buffer are good and no errors on ingestion are detected. So the discard counter is due to the buffer filling up, likely because the threads used by Suricata aren't optimally configured.

When looking at htop I can see Suricata is multi-threading, but there is one thread that is maxed out pretty consistently, so this is likely the CPU affinity config you referred to before.

So would someone mind telling me whether my setup looks better based on the output below?


 lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                48
On-line CPU(s) list:   0-47
Thread(s) per core:    2
Core(s) per socket:    12
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz
Stepping:              4
CPU MHz:               3600.009
BogoMIPS:              6001.48
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              25344K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47
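(To double-check which NUMA node the card itself sits on, the PCI sysfs entry can be read; the address below is a placeholder for the card's actual PCI address:)

# NUMA node a PCI device is attached to; -1 means no NUMA affinity reported
cat /sys/bus/pci/devices/0000:af:00.0/numa_node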

As you can see, I have dual CPUs with one FPGA card on NUMA node 1. Since the NUMA node CPU lists are comma-separated, can I identify which threads to run on like below?

 cpu-affinity:
        prio:
          low: [ 0 ]
          medium: [ "0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46" ]
          high: [ "1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47" ]
          default: "medium"

Whereas the example on the website displays it as a range:


threading:
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "1-10" ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ "0-10" ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "18-35", "54-71" ]
        mode: "exclusive"
        prio:
          low: [ 0 ]
          medium: [ "1" ]
          high: [ "18-35","54-71" ]
          default: "high"

If I represent the threads available on NUMA node 1 as 1-25,27-47 for example, wouldn't that cross over to NUMA node 0?

Thanks for any assistance in advance.

Nathan

I like this machine …48 CPUs @ 3GHz :slight_smile:

I think you should try the following. Let's start with 20 threads and see how it goes.
So make sure you have 20 RSS queues set on the NIC (the docs show how, for Intel NICs anyway). Then in your CPU affinity section, make sure you list 20 CPUs for the workers.
NOTE: adjust the CPU numbers below so they are all from the same NUMA node as the NIC,
for both the management and the worker CPUs (reference example below).

  cpu-affinity:
    - management-cpu-set:
        cpu: [ "91","93","95","97","99","101","103","105","107","109","111" ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ "0-10" ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "2","4","6","8","10","12","14","16","18","20","22","24","26","28","30","32","34","36","38","40","42","44","46","48","50","52","54","56" ]
        mode: "exclusive"
        # Use explicitly 3 threads and don't compute the number by using the
        # detect-thread-ratio variable:
        # threads: 3
        prio:
          low: [ 0 ]
          medium: [ "1" ]
          high: [ "2-40" ]
          default: "high" 

Then in the pfring section in the yaml, provide for 20 worker threads; a sketch follows below.
I think that way it should be much better, hopefully.
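As a concrete sketch (eth0 is an Intel-NIC example, not your Fiberblaze interface; whether cluster-id applies there is exactly what your crashes suggest needs checking):

# Set 20 RSS queues on an Intel NIC
ethtool -L eth0 combined 20

# Matching pfring section in suricata.yaml
pfring:
  - interface: eth0
    threads: 20
    cluster-id: 99
    cluster-type: cluster_flow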

Separate comment:
Good to know that discards can mean lots of things, not just drops due to the application. Do you mind listing the model and driver version of your NIC, and the cases that can count as discards?

This is what I have put in place:

threading:
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "1-10" ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ "0-10" ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "11-25", "26-47" ]
        mode: "exclusive"
        prio:
          low: [ 0 ]
          medium: [ "1" ]
          high: [ "11-25", "26-47" ]
          default: "high"

I also changed the detect-thread-ratio to 2.0, and it seems to be running well. Should I change this to something higher or lower?
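For reference, that knob sits in the threading section of suricata.yaml, so the change was roughly:

threading:
  set-cpu-affinity: yes
  detect-thread-ratio: 2.0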

I guess the one thing I'm not sure about is the PF_RING component within the Suricata config:

pfring:
  - interface: fbcard:0:a:0
    #threads: 1

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    #cluster-id: 99

    # Default PF_RING cluster type. PF_RING can load balance per flow.
    # Possible values are cluster_flow or cluster_round_robin.
    #cluster-type: cluster_flow

    # bpf filter for this interface
    #bpf-filter: tcp

    # If bypass is set then the PF_RING hw bypass is activated, when supported
    # by the interface in use. Suricata will instruct the interface to bypass
    # all future packets for a flow that need to be bypassed.
    #bypass: yes

    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: Suricata uses a statistical approach to detect when
    #    checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto
  # Second interface
  #- interface: eth1
  #  threads: 3
  #  cluster-id: 93
  #  cluster-type: cluster_flow
  # Put default values here
  #- interface: default
  #  threads: 2

Any time I uncomment one of the options such as threads or cluster-id, it crashes.

I have PF_RING with an FPGA license from ntop, as I'm using the FPGA card. Does af-packet communicate with FPGA cards, or primarily with libpcap interfaces? I can access my card via libpcap, but I'm assuming that's slower than my current configuration.

Thanks,

Ah we replied at the same time. Thanks!

OK, so my setup didn't work. It took a little longer to reach capacity, but I did eventually begin discarding packets due to a full ring buffer.

I am putting your configuration in place now to test it out. Will report back.

Yep, no problem, I'll get the list for the card and the states it can be in.

So here is some output from the Silicom fbc2CGF3-2x40G FPGA card, using its drivers to display accumulated live statistics. The driver version is 3.6.8.1; there is also a .bit file, but I cannot remember that version.

    ------------ fbcard0 Port 1 Accumulated -------------

Start: 2020-04-07 06:01:10   Period: 48s
                                                     Rx                  Tx
OK Frames                     OKF              55630222                   0
OK Bytes                      OKB           18733521279                   0
Error Frames                  ERF                     0                   0
Error Bytes                   ERB                     0                   0
Size [   1,   63]             UN                      0                   0
Size = 64                     64                      0                   0
Size [  65,  127]             65               41153232                   0
Size [ 128,  255]             128               3090043                   0
Size [ 256,  511]             256               1213243                   0
Size [ 512, 1023]             512               1172282                   0
Size [1024, 1518]             1K                5306963                   0
Size >= 1519                  1K5A              3694459                   0
Size [1519, 2047]             1K5               3694459                   0
Size [2048, 4095]             2K                      0                   0
Size [4096, 8191]             4K                      0                   0
Size [8192, 9022]             8K                      0                   0
Size >= 9023                  9K                      0                   0
Broadcast                     BRC                  1464                   0
Multicast                     MUL                  7154                   0
Runts                         RUN                     0                   -
Jabbers                       JAB                     0                   -
Oversize                      OVS               3694459                   -
Truncated Frames              TRU                     0                   -
Discarded Undersized          UND                     0                   -
Discarded Frames              DIS                     0                   -
Overflow Frames               OVF                     0                   0
Fragments                     FRA                     0                   -
Collisions                    COL                     0                   -
Drop Events                   DRE                     0                   -
OK bps incl. FCS and IFG      BPS            3123224656                   0
OK Mbps incl. FCS and IFG     MBPS                 3123                   0

As you can see, there are a number of counters the card tracks for received traffic; the configuration I have applied to the card accepts everything. The overflow counter can accumulate since it is measured off anything greater than the default frame size of 1518, but as the field below it shows ("Size >= 1519"), we do capture frames larger than this, so those values are tracked via the overflow counter.

If I were to put a filter in place for any of these values, or say deduplication across the two ports, or deduplication of duplicate frames received on the same port, it would count toward the discard counter. There is also the full ring buffer, which in my case is 128 GB in size; I have tried increasing this, given my host has 377 GB of RAM, but I decided I'd focus more on the application side to ensure I'm doing everything I can before playing with huge page configuration again :smile:
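For reference, when I do revisit huge pages I'll start by checking the current state:

grep Huge /proc/meminfo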

The card config is like this at the moment:

prbGroup "a"
{
    noPrbs 1
    hash HashSessionIP
    nonBlocking true
}

noPrbs - the number of buffers (PRBs) in this group
nonBlocking - boolean; means no PRB in the group can block other PRBs on the board
HashSessionIP - ensures that all traffic belonging to an IPv4/IPv6 TCP or UDP session is distributed into the same PRB

So, given that some of the functionality Suricata can provide I can also achieve within the FPGA card, I guess PF_RING is just invoked to use the "fbcard:0:a:0" interface and the logic on the card that PF_RING accesses is enough? To be honest, I'm not really sure about this part of the config; obviously it's not perfect, as I'm still discarding packets at 5 Gbps :laughing:

Cheers,
Nathan

Yep, still discarding packets due to the buffer being full.

I think you can use either one - pfring or af-packet. You just need to choose one and stick with it for the first round of tests, I think. Also use the respective section in the yaml (af-packet or pfring).

With respect to the yaml config, you cannot put
cpu: [ "11-25", "26-47" ]
due to the NUMA config shown in lscpu:

NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47

The setup in config needs to be similar as in my previous reply (reference).

worker-cpu-set:
    cpu: [ "2","4","6","8","10","12","14","16",…
(20 threads/CPUs from the same NUMA node where the card is - change those numbers accordingly)

I now have the below settings in place:

cpu-affinity:
    - management-cpu-set:
        cpu: [ "0","2","4","6","8","10","12","14","16","18","20","22","24","26","28","30","32","34","36","38","40","42","44","46" ]  # only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ "0","2","4","6","8","10","12","14","16","18","20","22","24","26","28","30","32","34","36","38","40","42","44","46" ]  # inc CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "1","3","5","7","9","11","13","15","17","19","21","23","25","27","29","31","33","35","37","39","41","43","45","47" ]
        mode: "exclusive"
        # Use explicitly 3 threads and don't compute the number by using the
        # detect-thread-ratio variable:
        # threads: 3
        prio:
          low: [ 0 ]
          medium: [ "1" ]
          high: [ "2-47" ]
          default: "high"

It doesn't seem to be a receive problem; it's either the workers or management not keeping up, as it still discards packets from the buffer within 5 minutes, and this is at 3 Gbps today. My stats.log isn't writing on this sensor at the moment. What sort of Suricata runtime profiling could I enable that may give insight into the areas where it might be operating slowly? I've compiled this from source, so I'm happy to replicate on my dev server first if recompilation is needed.
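From my reading of the docs, the route would be roughly this (a sketch only; I'd verify the exact yaml keys against the suricata.yaml shipped with 5.0.2 first):

# Rebuild with profiling support (on the dev box first)
./configure --enable-profiling && make && sudo make install

Then enable per-rule profiling in suricata.yaml:

profiling:
  rules:
    enabled: yes
    filename: rule_perf.log
    limit: 10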

Receive set is not relevant in workers mode.
I think you have put management on one NUMA node and the workers on another. Both should be on the same NUMA node for this test - the same NUMA node as the NIC.

I would suggest giving management 6 threads to begin with and putting the worker sequence after those 6. What is important with pfring is that the number of threads fits within the worker set defined in the cpu-affinity.

OK, taking your most recent response on board, I have put the below config in place:

  cpu-affinity:
    - management-cpu-set:
        cpu: [ "1","3","5","7","9","11" ]
    - receive-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "13","15","17","19","21","23","25","27","29","31","33","35","37","39","41","43","45","47" ]
        mode: "exclusive"
        # Use explicitly 3 threads and don't compute the number by using the
        # detect-thread-ratio variable:
        # threads: 3
        prio:
          low: [ 0 ]
          medium: [ "1" ]
          high: [ "13-47" ]
          default: "high"

As of right now, pfring is only configured with the interface defined as fbcard:0:a:0; no threads are set, because when I define a number of threads and a cluster-id it crashes with "pfring_set_cluster returned -7 for cluster-id: 99".

With pfring configured the way I have it in suricata.yaml, PF_RING does seem to be used, but surely not to its maximum potential.

Cheers,
Nathan

Can you post the full output when you start Suricata with the verbose switch (suricata -vvv)?

Can you share your suricata.yaml pfring section as well please?

Yes, I will shortly. I have added extra ring buffers on the card instead of just one large one, and added the same number of pfring interfaces in suricata.yaml with 1 thread per interface; I'll post all of this soon. I did this while discussing it with Alfredo from ntop.

Cheers,
Nathan

I have obtained the systemd output from Suricata starting. The majority of the log is the ioctl output, which is of no concern to me, as Fiberblaze isn't managed via standard Linux utils. The line about "SC_ERR_STATS_LOG_GENERIC(278)] - eve.stats" isn't accurate either: stats are enabled in the yaml file, but for some reason I haven't yet gotten to the bottom of, they are not being enabled correctly. The yaml configuration is identical to my other Suricata sensor's, and that other sensor is logging correctly.
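For reference, the global section that error refers to looks like this in a stock suricata.yaml, and mine appears to match:

stats:
  enabled: yes
  # The interval field (in seconds) controls how often stats are updated
  interval: 8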

But anyway, the behaviour at the moment is that fbcard:0:a:0 is the first ring buffer, and it's still discarding loads of packets.

I'm looking into the hashing properties of the card as well.

Apr 14 23:21:35 sensor01 systemd[1]: Starting Suricata Intrusion Detection Service...
Apr 14 23:21:35 sensor01 systemd[1]: Started Suricata Intrusion Detection Service.
Apr 14 23:21:35 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:21:35 - (suricata.c:1084) <Notice> (LogVersion) -- This is Suricata version 5.0.2 RELEASE running in SYSTEM mode
Apr 14 23:21:35 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:21:35 - (output-json-stats.c:467) <Error> (OutputStatsLogInitSub) -- [ERRCODE: SC_ERR_STATS_LOG_GENERIC(278)] - eve.stats: stats are disabled globally: set stats.enabled to true. See https://suricata.
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:0': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3065] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3065] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:1': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:1': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:1': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:1': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:1': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3079] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3079] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:2': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:2': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:2': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:2': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:2': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3092] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3092] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:3': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:3': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:3': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:3': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:3': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3105] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3105] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:4': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:4': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:4': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:4': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:4': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3118] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3118] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:5': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:5': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:5': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:5': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:5': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3131] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3131] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:6': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:6': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:6': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:6': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:6': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3144] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3144] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:7': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:7': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:7': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:7': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:7': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3157] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3157] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:8': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:8': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:8': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:8': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:8': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3170] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3170] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:284) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - Could not get cluster-id from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (runmode-pfring.c:332) <Error> (ParsePfringConfig) -- [ERRCODE: SC_ERR_GET_CLUSTER_TYPE_FAILED(35)] - Could not get cluster-type from config
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:9': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:9': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:9': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:9': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:15 - (util-ioctl.c:296) <Warning> (GetEthtoolValue) -- [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'fbcard:0:a:9': No such device (19)
Apr 14 23:22:15 sensor01 suricata[3016]: [3183] 14/4/2020 -- 23:22:15 - (tm-threads.c:421) <Notice> (TmThreadsSlotPktAcqLoopAFL) -- AFL mode starting
Apr 14 23:22:15 sensor01 suricata[3016]: [3183] 14/4/2020 -- 23:22:15 - (source-pfring.c:586) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1
Apr 14 23:22:16 sensor01 suricata[3016]: [3016] 14/4/2020 -- 23:22:16 - (tm-threads.c:2170) <Notice> (TmThreadWaitOnThreadInit) -- all 106 packet processing threads, 0 management threads initialized, engine started.


# PF_RING configuration. for use with native PF_RING support
# for more info see http://www.ntop.org/products/pf_ring/
pfring:
  - interface: fbcard:0:a:0
    threads: 1
  - interface: fbcard:0:a:1
    threads: 1
  - interface: fbcard:0:a:2
    threads: 1
  - interface: fbcard:0:a:3
    threads: 1
  - interface: fbcard:0:a:4
    threads: 1
  - interface: fbcard:0:a:5
    threads: 1
  - interface: fbcard:0:a:6
    threads: 1
  - interface: fbcard:0:a:7
    threads: 1
  - interface: fbcard:0:a:8
    threads: 1
  - interface: fbcard:0:a:9
    threads: 1

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    #cluster-id: 99

    # Default PF_RING cluster type. PF_RING can load balance per flow.
    # Possible values are cluster_flow or cluster_round_robin.
    #cluster-type: cluster_flow

    # bpf filter for this interface
    #bpf-filter: tcp

    # If bypass is set then the PF_RING hw bypass is activated, when supported
    # by the interface in use. Suricata will instruct the interface to bypass

It seems the pfring setup is not correct. Maybe the ntop folks would be able to help you out and validate the config on the yaml side? Please feel free to CC me if needed to help out. I have not set up a ZC license in a while.

We could try af-packet in the meanwhile if you want?
What is the output of

ethtool -l fbcard
ethtool -n fbcard
ethtool -x fbcard

?
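For reference, a minimal af-packet section would look something like this (eth0 is a placeholder; whether the Fiberblaze card exposes a kernel netdev at all is exactly the open question):

af-packet:
  - interface: eth0          # placeholder name; af-packet needs a kernel network device
    threads: 20
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes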

Hello,

Did you figure out your problem? The one thing I see is that you only have a single PRB configured in fbcard.cfg:

prbGroup "a"
{
noPrbs 1
hash HashSessionIP
nonBlocking true
}

If you want 20 rings, I believe you should have noPrbs 20 - this is how it works with pf_ring on Zeek; see the sketch below.
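i.e. the same group as yours, just with the buffer count raised (syntax taken from your existing fbcard.cfg; the exact keys should be checked against the Fiberblaze docs):

prbGroup "a"
{
    noPrbs 20
    hash HashSessionIP
    nonBlocking true
}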

I am personally implementing Suricata with pf_ring and a Fiberblaze FPGA tomorrow; I will know for sure when I do, and it seems that should be the case.

Also, ncardstat is your friend - use it to see whether every ring is being used or only a single ring. Here is an example with pf_ring and Zeek:

On-board buffer filling: 0%

ID Name Bypass Size MemPool Numa Use Fill. Peak-f Discards (filling) Rx (packets) Rx (bytes)
0 a00 - 512 MB 0 1 1 0% 99% 0 5902140721 4842323166366
1 a01 - 512 MB 0 1 1 0% 99% 0 4561103721 3986201827433
2 a02 - 512 MB 0 1 1 0% 99% 0 4464720667 3795236307093
3 a03 - 512 MB 0 1 1 0% 99% 0 4787380615 4170233915885
4 a04 - 512 MB 0 1 1 0% 99% 0 4483590558 3884301464322
5 a05 - 512 MB 0 1 1 0% 99% 0 4780864761 4271485256961
6 a06 - 512 MB 0 1 1 0% 99% 0 4439150678 3774994326768
7 a07 - 512 MB 0 1 1 0% 99% 0 4938570272 4065083634560
8 a16 - 512 MB 0 1 1 0% 99% 0 4457754086 3811866754573
9 a17 - 512 MB 0 1 1 0% 99% 0 4389385084 3836929552448
10 a18 - 512 MB 0 1 1 0% 99% 0 4397967190 3803609859239
11 a19 - 512 MB 0 1 1 0% 99% 0 4756913104 4117070809745
12 a20 - 512 MB 0 1 1 0% 99% 0 4422294844 3813974563384
13 a21 - 512 MB 0 1 1 0% 99% 0 4906860211 3843668483952
14 a22 - 512 MB 0 1 1 0% 99% 0 4581310213 3812182095733
15 a23 - 512 MB 0 1 1 0% 99% 0 4677719041 3984549044140
16 b00 - 14336 MB 0 1 1 0% 41% 0 133394665368 111597381863488
32 a08 - 512 MB 0 1 1 0% 99% 0 5286082099 4693146728827
33 a09 - 512 MB 0 1 1 0% 99% 0 6216926384 5023718253662
34 a10 - 512 MB 0 1 1 0% 99% 0 5797965030 4392855673575
35 a11 - 512 MB 0 1 1 0% 99% 0 5694982434 4734618750624
36 a12 - 512 MB 0 1 1 0% 99% 0 4850512618 3981306804222
37 a13 - 512 MB 0 1 1 0% 99% 0 4839672991 4122627335704
38 a14 - 512 MB 0 1 1 0% 99% 0 4849656833 4158175651482
39 a15 - 512 MB 0 1 1 0% 99% 0 4703344413 4089310753030
40 a24 - 512 MB 0 1 1 0% 99% 0 4473185086 3861346853703
41 a25 - 512 MB 0 1 1 0% 99% 0 4760129330 3934230633044
42 a26 - 512 MB 0 1 1 0% 99% 0 5796262698 4565275435015
43 a27 - 512 MB 0 1 1 0% 99% 0 4399941382 3873296760320

Greg

Good news - I have solved this issue! A clue to your problem was indeed the PRB configuration, among other things. Too much to go over here, since nobody seems interested anyway.

Anyone interested in the solution can contact me by pinging this thread, I guess; I suppose you can contact me by logging in to your Suricata account too.

The bad news is that once this is solved, you will not believe the avalanche of alerts generated: in excess of 7449 alerts per second, with 52422 rules loaded.

Great job Suricata developers, what an excellent IDS!

@greg - what was the solution?
