Using Suricata in IPS Mode with Napatech Card

Hi,
I am using Suricata with a Napatech card by following the guidelines in
https://suricata.readthedocs.io/en/suricata-6.0.0/capture-hardware/napatech.html

Now I want to run it in IPS mode with the Napatech card.
How do I configure that? I have not been able to find any guide or help via Google.
I can run Suricata in IPS mode without the Napatech card just fine, but I want to do it with the Napatech card.

Can anyone help me with this? What commands or installation procedure should I follow?

Hi,

The Napatech cards are great, but the Napatech packet capture logic in Suricata only supports IDS or NSM mode.

Hi,
Is there any other way, such as putting some third-party library between Suricata and the Napatech card?

I’m not aware of anything that would provide IPS capability with Napatech nics.

IPS mode is supported for AF_PACKET, Netmap, NFQ, and, on Windows, WinDivert.
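
For reference, a minimal sketch of what an AF_PACKET inline (IPS) pair looks like in suricata.yaml; the interface names eth0/eth1 and the cluster IDs are placeholders:

af-packet:
  - interface: eth0          # first side of the inline pair (placeholder name)
    copy-mode: ips           # forward packets that are not dropped by rules
    copy-iface: eth1         # peer interface the traffic is copied to
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
  - interface: eth1          # second side of the inline pair
    copy-mode: ips
    copy-iface: eth0
    cluster-id: 98
    cluster-type: cluster_flow
    defrag: yes

Suricata is then started with the --af-packet option so that drop rules take effect on the traffic bridged between the two interfaces. None of this applies to the Napatech capture method, which is the limitation discussed above.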

Hi, got it.
But in the end we need to integrate Suricata with a hardware device to increase performance. So in that case Suricata can only work in IDS mode?

Suricata only supports IDS mode with the Napatech card.

Suricata has IPS support, but it relies on the packet capture method to provide the IPS functionality.

I suggest that you create a feature ticket for Suricata expressing the need for the Napatech card to support IPS mode. Napatech’s system engineers contributed the Napatech packet capture code to Suricata and they might consider adding support for IPS. Our ticketing system is at https://redmine.openinfosecfoundation.org/

Hi Jeff, thanks for your constant support.
I was testing in IDS mode and didn’t face any issues with 1-2 Gbps of traffic, but when I switched to 10 Gbps I am seeing a 90% packet drop rate. I have made many changes but still cannot see any significant improvement. I am using 40 CPUs.
Can you take a look at my configuration?
af-packet.cluster-type: cluster_flow

threading:
  set-cpu-affinity: yes
  # Tune cpu affinity of threads. Each family of threads can be bound
  # to specific CPUs.
  #
  # These 2 apply to all runmodes:
  # management-cpu-set is used for flow timeout handling, counters
  # worker-cpu-set is used for 'worker' threads
  #
  # Additionally, for autofp these apply:
  # receive-cpu-set is used for capture threads
  # verdict-cpu-set is used for IPS verdict threads
  #
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ 30,31,32,33,34,35,36,37,38,39,70,71,72,73,74,75,76,77,78,79,10,11,12,13,14,15,16,17,18,19,50,51,52,53,54,55,56,57,58,59 ]
        mode: "exclusive" 
        # Use explicitly 3 threads and don't compute number by using
        # detect-thread-ratio variable:
        # threads: 3
        prio:
          low: [ 0 ]
          medium: [ "1-2" ]
          high: [ 30,31,32,33,34,35,36,37,38,39,70,71,72,73,74,75,76,77,78,79,10,11,12,13,14,15,16,17,18,19,50,51,52,53,54,55,56,57,58,59 ]
          default: "high" 
    #- verdict-cpu-set:
    #    cpu: [ 0 ]
    #    prio:
    #      default: "high" 
  #
  # By default Suricata creates one "detect" thread per available CPU/CPU core.
  # This setting allows controlling this behaviour. A ratio setting of 2 will
  # create 2 detect threads for each CPU/CPU core. So for a dual core CPU this
  # will result in 4 detect threads. If values below 1 are used, less threads
  # are created. So on a dual core CPU a setting of 0.5 results in 1 detect
  # thread being created. Regardless of the setting at a minimum 1 detect
  # thread will always be created.
  #
  detect-thread-ratio: 1.0

napatech:
    # When use_all_streams is set to "yes" the initialization code will query
    # the Napatech service for all configured streams and listen on all of them.
    # When set to "no" the streams config array will be used.
    #
    # This option necessitates running the appropriate NTPL commands to create
    # the desired streams prior to running Suricata.
    #use-all-streams: no

    # The streams to listen on when auto-config is disabled or when threading
    # cpu-affinity is disabled.  This can be either:
    #   an individual stream (e.g. streams: [0])
    # or
    #   a range of streams (e.g. streams: ["0-3"])
    #
    streams: ["0-15"]

    # Stream stats can be enabled to provide fine grain packet and byte counters
    # for each thread/stream that is configured.
    #
    enable-stream-stats: no

    # When auto-config is enabled the streams will be created and assigned
    # automatically to the NUMA node where the thread resides.  If cpu-affinity
    # is enabled in the threading section, the streams will be created
    # according to the number of worker threads specified in the worker-cpu-set.
    # Otherwise, the streams array is used to define the streams.
    #
    # This option is intended primarily to support legacy configurations.
    #
    # This option cannot be used simultaneously with either "use-all-streams" 
    # or "hardware-bypass".
    #
    auto-config: yes

    # Enable hardware level flow bypass.
    #
    hardware-bypass: no

    # Enable inline operation.  When enabled traffic arriving on a given port is
    # automatically forwarded out its peer port after analysis by Suricata.
    #
    inline: no

    # Ports indicates which Napatech ports are to be used in auto-config mode.
    # These are the port IDs of the ports that will be merged prior to the
    # traffic being distributed to the streams.
    #
    # When hardware-bypass is enabled the ports must be configured as a segment.
    # Specify the port(s) on which upstream and downstream traffic will arrive.
    # This information is necessary for the hardware to properly process flows.
    #
    # When using a tap configuration one of the ports will receive inbound traffic
    # for the network and the other will receive outbound traffic. The two ports on a
    # given segment must reside on the same network adapter.
    #
    # When using a SPAN-port configuration the upstream and downstream traffic
    # arrives on a single port. This is configured by setting the two sides of the
    # segment to reference the same port.  (e.g. 0-0 to configure a SPAN port on
    # port 0).
    #
    # Port segments are specified in the form:
    #    ports: [0-1,2-3,4-5,6-6,7-7]
    #
    # For legacy systems when hardware-bypass is disabled this can be specified in any
    # of the following ways:
    #
    #   a list of individual ports (e.g. ports: [0,1,2,3])
    #
    #   a range of ports (e.g. ports: [0-3])
    #
    #   "all" to indicate that all ports are to be merged together
    #   (e.g. ports: [all])
    #
    # This parameter has no effect if auto-config is disabled.
    #
    ports: [all]

    # When auto-config is enabled the hashmode specifies the algorithm for
    # determining to which stream a given packet is to be delivered.
    # This can be any valid Napatech NTPL hashmode command.
    #
    # The most common hashmode commands are:  hash2tuple, hash2tuplesorted,
    # hash5tuple, hash5tuplesorted and roundrobin.
    #
    # See the Napatech NTPL documentation for other hashmodes and details on their use.
    #
    # This parameter has no effect if auto-config is disabled.
    #
    hashmode: hash5tuplesorted

##

Thanks

A quick glance shows that you’re setting up 16 streams (0-15) but have more than 16 worker threads processing packets; see the sketch after the checklist below.

There may be some other issues, but a few things you can check:

  • Threads assigned to workers are on cores not shared with other processes (hint: see isolcpus)
  • Ensure that the buffers used for the napatech streams are large enough (these are usually in the napatech configuration – outside of Suricata)
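
As a rough sketch of keeping streams and workers aligned (the CPU numbers are placeholders; the worker-cpu-set should use cores isolated from other processes, e.g. via the isolcpus kernel parameter, and ideally local to the NUMA node the card is attached to):

napatech:
    # With auto-config enabled and cpu-affinity set, one stream is created
    # per worker thread, so the explicit streams list is only consulted
    # when auto-config is disabled.
    auto-config: yes
    hashmode: hash5tuplesorted
    ports: [all]
    # If auto-config were disabled, the stream range should match the
    # worker count, e.g. 40 workers -> streams: ["0-39"].
    streams: ["0-39"]

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - worker-cpu-set:
        # Placeholder list: 40 isolated cores, ideally on the NUMA node
        # where the Napatech card resides.
        cpu: [ "10-19", "30-39", "50-59", "70-79" ]
        mode: "exclusive"

The buffer sizes for those streams are typically set in the Napatech driver configuration (e.g. ntservice.ini), outside of suricata.yaml.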