Disabling http analysis

Hi,

The environment where I run Suricata has a lot of HTTP traffic,
so I suspect HTTP analysis is causing a performance problem.
I would like to know what side effects there will be if I turn off HTTP analysis.
(app-layer > protocols > http > enabled: no)

For example, would I no longer be able to use HTTP-related keywords in rules?
(e.g. http_method, http_uri, ...)
I would also like to know what other effects there would be.

Regards

It will mean that all the rules that use alert http ... or any http_* rule keyword will be ineffective. It will also disable file extraction for http. I would recommend looking at improving performance in other ways, like more tuning, using better hardware, etc.
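For example, a rule shaped like this one (purely illustrative, not taken from any ruleset) would silently stop matching once the HTTP parser is disabled, because the http_method and http_uri buffers are never populated:

    alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"example - POST to /login"; flow:established,to_server; content:"POST"; http_method; content:"/login"; http_uri; sid:1000001; rev:1;)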

Hi, Victor

In my case, Suricata operates in an environment with about 20 Gbps of traffic. Kernel drops occur when the HTTP analysis option is enabled, so it seems to affect performance a lot. Are there any options among the settings below that might create extra load?

Regards

http:
  enabled: yes
  libhtp:
    default-config:
      personality: IDS

      request-body-limit: 100kb
      response-body-limit: 100kb

      request-body-minimal-inspect-size: 32kb
      request-body-inspect-window: 4kb
      response-body-minimal-inspect-size: 40kb
      response-body-inspect-window: 16kb

      response-body-decompress-layer-limit: 2
      http-body-inline: auto

      swf-decompression:
        enabled: yes
        type: both
        compress-depth: 0
        decompress-depth: 0

      # decoding
      double-decode-path: no
      double-decode-query: no

If that has an impact on performance and the drops are too high, can you tell us which version you're running, with what configuration, and on which hardware?

Hi, Andreas

I am running Suricata 5.0.1 in IDS (sniffing) mode, so only RX traffic is used, and I am using the Emerging Threats 4.0 ruleset.
The hardware is a Napatech 4 x 10G card, two 20-core 2.5 GHz CPUs (80 logical cores with hyperthreading), and 512 GB of memory.
Is there any additional information I should provide?

Regards,

First recommendations: upgrade to 5.0.3 and switch to using the 5.0 variant of the ET ruleset.

Since this is a multi-NUMA system it might also be interesting to try out 6.0.0-rc1, as this has some improvements for such setups.

How are the Napatech host buffers configured? Size and how many?

Since this is an IDS deployment, you should not have any TX buffers configured.

How is the default.ntpl file set up?
How many Napatech streams?
How is affinity set up?

Hi, Jeff

I haven’t tried using Suricata 6.0 yet.

My HostBuffersRx and HostBuffersTx settings are as follows.

HostBuffersRx = [32,1024,0],[32,1024,1]
HostBuffersTx = [0,16,-1]

I am using 64 Napatech streams.

Assign[priority=0; streamid=(8..71)] = all

Only 64 streams are available for Suricata because we also run a program (similar to tcpdump) that stores packets intermittently, not continuously, for evidence collection; that program also uses the Napatech API, and the maximum number of host buffers is 128.

The cpu-affinity configuration also uses the same number of cores as streams.

   cpu-affinity:
     - management-cpu-set:
         cpu: ["6-7", "72-73"]
         mode: "balanced"
     - worker-cpu-set:
         cpu: ["8-71"]
         mode: "exclusive"
         prio:
           medium: ["6-7", "72-73"]
           high: ["8-71"]
           default: "high"

Regards,

This should be a set of stream ids – it looks like you’re using core numbers instead?

It is just for convenience; the stream ids are simply chosen to match the core numbers.

For example:

Delete=All
#Setup[numaNode=0] = streamid==0
#Setup[numaNode=0] = streamid==1
#Setup[numaNode=0] = streamid==2
#Setup[numaNode=0] = streamid==3
#Setup[numaNode=0] = streamid==4
#Setup[numaNode=0] = streamid==5
#Setup[numaNode=0] = streamid==6
#Setup[numaNode=0] = streamid==7
Setup[numaNode=0] = streamid==8
Setup[numaNode=0] = streamid==9
Setup[numaNode=0] = streamid==10
Setup[numaNode=0] = streamid==11

I assume you have something like the following (the actual values depend on your system's NUMA layout, shown with lscpu).

Setup[NUMANode=0] = StreamId==(8..40)
Setup[NUMANode=1] = StreamId==(41..71)
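For reference, lscpu prints that layout directly; the ranges below are only an example of what a dual-socket, 2 x 20-core hyperthreaded box might report, so check your own output before writing the Setup lines:

    $ lscpu | grep NUMA
    NUMA node(s):        2
    NUMA node0 CPU(s):   0-19,40-59
    NUMA node1 CPU(s):   20-39,60-79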

This is ancillary to the main issue you reported, however.

You can use the Napatech profiling utility to display the streams; for each stream, the bytes and packets received and the drop count are displayed.
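Assuming the driver package is installed under the usual /opt/napatech3 prefix and the tool keeps its default name (adjust if your install differs), it can be started with:

    /opt/napatech3/bin/profiling

and you can watch the per-stream receive and drop counters while traffic is running.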

It’s possible there are some high-speed elephant flows on a few streams.

Not all CPUs are at 100%, and not all streams drop packets. As you said, about half of the streams drop packets continuously. Are there any settings that might be causing this?

The Napatech card distributes traffic with a 5-tuple hash, so is there a reason why only specific streams keep dropping?
(There is no memcap for HTTP analysis.)

You can use the Napatech capture tool to see what traffic is on the streams where the drops are occurring. You might find high speed elephant flows (as one example) on the stream(s) where the drops occur.

Suricata 6.0 (to be released this week) contains support for bypassing flows when using the Napatech card: https://suricata.readthedocs.io/en/suricata-6.0.0-rc1/capture-hardware/napatech.html#bypassing-flows

The bypass functionality could be helpful if the flow(s) causing the drops are known to be free of malicious traffic (e.g., an internal tool is generating the traffic, etc).
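If you end up trying it, the relevant settings live in the napatech section of suricata.yaml; the sketch below is illustrative only, so check the linked page for the exact options and requirements of your card and firmware:

    napatech:
      streams: ["8-71"]       # NT streams Suricata reads from
      auto-config: yes        # let Suricata program the stream/hash setup
      hardware-bypass: yes    # offload bypassed flows to the card
      inline: no              # IDS (sniffing) deployment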

@Jeff_Lucovsky
I am now using Suricata 6.0.1 with a Napatech network accelerator card. However, drops (stats.napa_total.overflow_drop) are constantly occurring.

I create 64 Napatech streams and map them to CPUs with Suricata's cpu-affinity setting. Looking at the CPU status, even when drops occur, not all CPUs are loaded to 100%; only one or two specific CPUs continuously sit at 100%.

Likewise, when I look at the Napatech profiling output, drops occur only on the streams mapped to those one or two CPUs. So I don't think the drops are caused by high traffic overall.

What are some possible causes?
I tried to check the top IPs by dumping packets from the streams, but it is not easy. hardware-bypass is not enabled. Could invalid packets be getting dropped?

Also, I would like to know where the drop statistics that Suricata shows come from. If the information is provided by the Napatech API, does it mean the dropped packets were simply discarded from the stream's host buffer because Suricata was processing too slowly? Is it not possible to see packets that were dropped because Suricata considered them invalid?

To recap

  • 2 CPUs, each with 20 physical cores (80 logical cores with hyperthreading)
  • Napatech 4x10G card
  • 64 Napatech streams
  • 1GB hostbuffer for each stream
  • Cores assigned to Suricata: 8-71
  • Napatech streams used: 8-71

Does the Napatech profiling utility show streams 0-7?
Can you post your Napatech default.ntpl file?
What firmware is loaded on the card?
Are you using the Napatech “flow matcher” (bypass or shunting) feature?

  1. It doesn't show streams 0-7.

  2. I run ntpl with the configuration file below.

    Delete=All
    Setup[numaNode=0] = streamid==(8..19)
    Setup[numaNode=1] = streamid==(20..39)
    Setup[numaNode=0] = streamid==(40..59)
    Setup[numaNode=1] = streamid==(60..71)
    HashMode[priority=4]=Hash5TupleSorted
    Assign[priority=0; streamid=(8..71)] = all

  3. The firmware version (from productinfo) is v. 3.18.0.34-faf32.

  4. I don't use the "flow matcher" feature.

If only a few of the 64 streams are busy, there may be some high-speed elephant flows. You can use capture (a Napatech utility) to sample packets from the affected streams and determine the traffic/flow mix.