IP packet handling issues in virtio-net on certain OS/kernel versions on KVM VM

Hi, I have been running Suricata on KVM for a long time.
I installed a new OS to upgrade the VM's OS version, but Suricata now encounters an unknown issue.

Problem:
When I configure a pair of virtio-net interfaces as an af-packet pair and an IP packet is sent through that interface, no reply packet is returned to the client.
With the e1000 NIC, everything works normally.
So virtio-net has a problem with IP (TCP/UDP) packets, although ICMP works normally.

Environment
Host: Fedora 35 Kernel: 5.14.10-300.fc35.x86_64, qemu-6.1.0-10.fc35
Host: CentOS Stream 8 Kernel: 5.14.0-1.el8.elrepo.x86_64, qemu-6.0.0-29.el8s
VM OS: Fedora 35, 36, Rocky9.0
Suricata: 6.0.6

Fedora 34 and CentOS 8 worked normally as the Suricata VM OS. However, even when I apply exactly the same settings (all configs, including suricata.yaml) on the OSes where the problem occurs, IP packets are not handled properly. Even after many reinstalls, it still doesn't work.
suricata.yaml (68.4 KB)
sysctl.rules (2.1 KB)
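
For context, an af-packet IPS pair in suricata.yaml typically looks like the sketch below (the interface names eth1/eth2 are placeholders, not taken from the attached config):

```yaml
af-packet:
  - interface: eth1
    copy-mode: ips      # forward packets to the paired interface
    copy-iface: eth2
    defrag: yes
    use-mmap: yes
  - interface: eth2
    copy-mode: ips
    copy-iface: eth1
    defrag: yes
    use-mmap: yes
```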

I don’t have an answer for you as I’ve never done this setup within KVM, but do have a few questions for more information that may help:

  • Which OS is your IPS VM? Fedora or Rocky, and if Fedora, which version? You list several; it's easier to focus on one.
  • Which OS is your VM host machine? Again, you list several; probably easier to focus on one.

Your configuration file looks OK, so I’m thinking this might have more to do with the network setup of the hypervisor itself. Can you comment on how you do that?

I have done this type of setup with VirtualBox for testing. In my IPS VM I use either NAT or bridged networking for the EXTERNAL interface, and a host-only adapter for the second interface, which I consider INTERNAL. My protected VMs then have only a host-only adapter. This gives a network scenario where the INTERNAL interface of the IPS VM and the network interfaces of the protected machines are on the same switch, with no internet access unless the IPS machine bridges INTERNAL to EXTERNAL.

I’m not sure how I would do this on KVM. I have tried on Fedora as the VM host, but without a pre-configured interface like VirtualBox’s host-only, I haven’t had any success. Some custom network configuration is likely required on the VM host.

I document this a bit here: VirtualBoxIpsTestSetup · jasonish/suricata Wiki · GitHub. You may also be interested in trying a more traditional Linux Ethernet bridge between the two interfaces. That should tell you whether your setup is correct without bringing Suricata into the picture. This is also documented at the link above.
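
A plain Linux bridge between the two interfaces can be set up roughly like this (a sketch with iproute2; eth1/eth2 are placeholder interface names, run as root):

```shell
# Create a Linux bridge and enslave the two test interfaces to it
ip link add name br0 type bridge
ip link set eth1 master br0
ip link set eth2 master br0
ip link set eth1 up
ip link set eth2 up
ip link set br0 up
```

If traffic forwards correctly through this bridge but fails with Suricata's af-packet pair, the network plumbing itself is fine.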

  1. IPS VM is Fedora 36. Host Machine is Fedora 35.

  2. Hypervisor configuration was performed using libvirtd, a VM management tool of KVM. The current configuration is the same as in Figure 1.

  3. The two interfaces used as af-packet pairs in Suricata VMs handle communication between the internal VMs, not the external internet, so only the internal interfaces are used.

  4. Result of packet response (SYN-ACK) according to vNIC from Suricata VM (client makes a request to HTTP server with curl and receives 200 OK response)

    linux bridge (Suricata Process Off)
    - virtio: Success
    - e1000: Success

    Suricata af-packet (linux bridge Off)
    - virtio: Fail
    - e1000: Success

The af-packet setup in a Suricata VM running Fedora 36 on Hyper-V worked fine.
In addition, in a very few cases a packet response does come back in virtio mode, but with many retransmissions, and it takes a very long time to get a response. See Figure 3.


Fig. 1 Suricata VM Network Config


Fig. 2 VM/Network Diagram


Fig. 3 tcpdump capture from server

So it sounds like an issue with KVM and virtio? Probably worth further research to see if it's something that can be supported.

I didn’t have much luck creating this myself. How did you define your test network? Specifically the one shared by the client and the Suricata VM?

The example attached above uses Open vSwitch, but the same problem occurs with the default switch (Linux bridge) that KVM provides out of the box.
I am attaching a simpler configuration example.

test1 vSwitch

<network>
  <name>test1</name>
  <uuid>e06c0d8e-0fba-43e5-87bc-c5029aa8847e</uuid>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:04:41:2d'/>
</network>

test2 vSwitch

<network>
  <name>test2</name>
  <uuid>e06c0d8e-0fba-43e5-87bc-c5029aa8847a</uuid>
  <bridge name='virbr2' stp='on' delay='0'/>
  <mac address='52:54:00:04:41:2e'/>
</network>
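
For anyone reproducing this: assuming the definitions above are saved as test1.xml and test2.xml, they can be loaded with virsh like so (sketch):

```shell
virsh net-define test1.xml
virsh net-start test1
virsh net-autostart test1   # optional: start the network on host boot
```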

Suricata VM Network Config (basic support switch)

Sometimes af-packet works in the virtio-net environment, but most connections fail. ICMP packets are still returned, but TCP and UDP packets are not. I don't know why the server isn't responding to packets sent via virtio-net/af-packet. I checked the unanswered packets with Wireshark, but their structure looks fine. I can't pinpoint where the problem lies between Suricata's af-packet and virtio-net.
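
One thing worth checking in a situation like this (a diagnostic suggestion, not from the original posts; eth1 is a placeholder interface name) is the offload state of the virtio-net interfaces inside the Suricata VM, since offloaded checksums can make captured packets look valid in Wireshark while peers reject them:

```shell
# Inside the Suricata VM: list checksum/segmentation offload features
ethtool -k eth1 | grep -E 'checksum|segmentation'
```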

Unfortunately, I’m facing the same problem.
Suricata running in a VM on proxmox in af-packet mode.
I can choose between "Intel E1000", "VirtIO (paravirtualized)", "Realtek RTL8139" and "VMware vmxnet3" network cards. None of them activates bridge mode in a way that allows handshakes to complete.
Is there any further idea how to cope with this?

Kind regards,

I think I got it working.
BOTH the Suricata VM AND the host VM have to use E1000 as their network cards.
Plus: in suricata.yaml, exception-policy has to be set to "ignore".
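
For reference, that setting is a single key in suricata.yaml (a minimal sketch based on the description above; check your Suricata version's documentation for where exception-policy applies):

```yaml
# suricata.yaml
exception-policy: ignore
```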

Update: just noticed this post is very old… but well, maybe helpful for other people having trouble with VirtIO and KVM together with Suricata ^^

I had similar issues with VirtIO and KVM and found a fix in this forum: one user was using a specific configuration in the libvirt XML of the virtual machine. I experimented with it and found that checksums (in my case) seem to be the problem.

Can you give it a try @Jungho ? I am using OpenVswitch in my Host, you may need to change the bridge part. From my blog:

The following does not work:

    <interface type='bridge'>
      <mac address='..:..:..:..:..:..'/>
      <source bridge='ovs-guests'/>
      <virtualport type='openvswitch'>
      </virtualport>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

If, however, I define the NIC like this:

    <interface type='bridge'>
      <mac address='..:..:..:..:..:..'/>
      <source bridge='ovs-guests'/>
      <virtualport type='openvswitch'>
      </virtualport>
      <model type='virtio'/>
      <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='8' rx_queue_size='1024' tx_queue_size='1024'>
        <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
        <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>

then it does work. Notice the difference: the <driver> element. I found this in the Suricata forums, in a thread about packet loss using the XDP driver on RHEL 8.3. By repeated trial and error I noticed it starts working with this part:

        <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>

Because if I commented out everything within <driver>…</driver> it still did not work. When I commented out the <host …/> line and uncommented the <guest …/> line, it started to work.

It seems to come down to the csum parameter in <guest …/>: as soon as I set csum='on' it stops working. I can set all the other parameters above to on, just not csum.
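
As a possible guest-side workaround (an assumption on my part, not something from the libvirt config above; eth1 is a placeholder), checksum offload can also be turned off from inside the VM with ethtool, which may help when editing the domain XML is not an option:

```shell
# Inside the guest: disable rx/tx checksum offload on the virtio interface
ethtool -K eth1 rx off tx off
```

Note this is not persistent across reboots, unlike the <driver> settings in the domain XML.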


Hello.
After applying the settings you mentioned, all traffic between the VMs is now processed.

While troubleshooting, I looked into a number of things, such as the virtio-net driver code and other checksum settings inside the VM, but I never thought of configuring this on the host side.

This problem went unsolved for a long time, so I'm glad it's finally resolved. Thank you.
It will help many people who run Suricata on KVM.