No traffic in Hyper-v environment

Hello Suricata community,
I am learning Suricata and built a test bench to inspect traffic going to a test computer. All test hosts run on the Hyper-V hypervisor. I configured Suricata 6.0.1 (Ubuntu 20.04) in AF_PACKET IPS mode to inspect traffic from the host to the main network. Hardware acceleration on the Hyper-V virtual Ethernet adapter is off, and offloading on Suricata's interfaces is disabled. The problem: traffic does not pass through, and the host does not receive an address via DHCP (a static IP does not work either), although Suricata's log shows the traffic.


Config Suricata AF_PACKET

# Linux high speed capture support
af-packet:
  - interface: eth0
    threads: 1
    defrag: no
    cluster-type: cluster_flow
    cluster-id: 98
    copy-mode: ips
    copy-iface: eth1
    buffer-size: 64535
    use-mmap: yes
  - interface: eth1
    threads: 1
    cluster-id: 97
    defrag: no
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: eth0
    buffer-size: 64535
    use-mmap: yes
stats.log


Counter | TM Name | Value

capture.kernel_packets | Total | 39810
decoder.pkts | Total | 39810
decoder.bytes | Total | 5678293
decoder.ipv4 | Total | 22306
decoder.ipv6 | Total | 105
decoder.ethernet | Total | 39810
decoder.udp | Total | 22360
decoder.icmpv6 | Total | 38
decoder.avg_pkt_size | Total | 142
decoder.max_pkt_size | Total | 499
flow.udp | Total | 7927
flow.icmpv6 | Total | 24
flow.wrk.spare_sync_avg | Total | 100
flow.wrk.spare_sync | Total | 65
decoder.event.ipv4.opt_pad_required | Total | 13
decoder.event.ipv6.zero_len_padn | Total | 13
flow.wrk.flows_evicted | Total | 1554
app_layer.flow.dhcp | Total | 103
app_layer.tx.dhcp | Total | 429
app_layer.flow.failed_udp | Total | 7824
flow.mgr.full_hash_pass | Total | 79
flow.spare | Total | 9844
flow.mgr.rows_maxlen | Total | 2
flow.mgr.flows_checked | Total | 7497
flow.mgr.flows_notimeout | Total | 1153
flow.mgr.flows_timeout | Total | 6344
flow.mgr.flows_evicted | Total | 6344
tcp.memuse | Total | 1146880
tcp.reassembly_memuse | Total | 196608
flow.memuse | Total | 7394304
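One thing these counters already show: capture.kernel_packets equals decoder.pkts, so packets are reaching Suricata without kernel-level drops, and app_layer.flow.dhcp is non-zero, so the DHCP exchanges are being seen. That kind of sanity check can be scripted over a stats.log snippet (a hypothetical awk helper, values copied from the output above):

```shell
# Check that every packet captured by the kernel was also decoded (no kernel drops).
# Sample lines copied from the stats.log output above.
cat > /tmp/stats_sample.txt <<'EOF'
capture.kernel_packets | Total | 39810
decoder.pkts | Total | 39810
app_layer.flow.dhcp | Total | 103
EOF

# Strip spaces, index counters by name, then compare the two capture counters.
awk -F'|' '{ gsub(/ /, ""); c[$1] = $3 }
  END { print ((c["capture.kernel_packets"] == c["decoder.pkts"]) ? "no kernel drops" : "kernel drops present") }' \
  /tmp/stats_sample.txt
```

Since the counters match, the packets are being lost on the copy/forward path rather than on the capture side.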

suricata.log

16/2/2021 – 15:21:52 - - This is Suricata version 6.0.1 RELEASE running in SYSTEM mode
16/2/2021 – 15:21:52 - - CPUs/cores online: 4
16/2/2021 – 15:21:52 - - Adding interface eth0 from config file
16/2/2021 – 15:21:52 - - Adding interface eth1 from config file
16/2/2021 – 15:21:52 - - luajit states preallocated: 128
16/2/2021 – 15:21:52 - - SSSE3 support not detected, disabling Hyperscan for MPM
16/2/2021 – 15:21:52 - - SSSE3 support not detected, disabling Hyperscan for SPM
16/2/2021 – 15:21:52 - - ‘default’ server has ‘request-body-minimal-inspect-size’ set to 31167 and ‘request-body-inspect-window’ set to 3961 after randomization.
16/2/2021 – 15:21:52 - - ‘default’ server has ‘response-body-minimal-inspect-size’ set to 40377 and ‘response-body-inspect-window’ set to 17202 after randomization.
16/2/2021 – 15:21:52 - - SMB stream depth: 0
16/2/2021 – 15:21:52 - - Protocol detection and parser disabled for modbus protocol.
16/2/2021 – 15:21:52 - - Protocol detection and parser disabled for enip protocol.
16/2/2021 – 15:21:52 - - Protocol detection and parser disabled for DNP3.
16/2/2021 – 15:21:52 - - Found an MTU of 1500 for ‘eth0’
16/2/2021 – 15:21:52 - - Found an MTU of 1500 for ‘eth0’
16/2/2021 – 15:21:52 - - Found an MTU of 1500 for ‘eth1’
16/2/2021 – 15:21:52 - - Found an MTU of 1500 for ‘eth1’
16/2/2021 – 15:21:52 - - allocated 262144 bytes of memory for the host hash… 4096 buckets of size 64
16/2/2021 – 15:21:52 - - preallocated 1000 hosts of size 136
16/2/2021 – 15:21:52 - - host memory usage: 398144 bytes, maximum: 33554432
16/2/2021 – 15:21:52 - - Core dump size set to unlimited.
16/2/2021 – 15:21:52 - - AF_PACKET: Setting IPS mode
16/2/2021 – 15:21:52 - - allocated 3670016 bytes of memory for the defrag hash… 65536 buckets of size 56
16/2/2021 – 15:21:52 - - preallocated 65535 defrag trackers of size 160
16/2/2021 – 15:21:52 - - defrag memory usage: 14155616 bytes, maximum: 33554432
16/2/2021 – 15:21:52 - - flow size 320, memcap allows for 419430 flows. Per hash row in perfect conditions 6
16/2/2021 – 15:21:52 - - stream “prealloc-sessions”: 2048 (per thread)
16/2/2021 – 15:21:52 - - stream “memcap”: 67108864
16/2/2021 – 15:21:52 - - stream “midstream” session pickups: disabled
16/2/2021 – 15:21:52 - - stream “async-oneside”: disabled
16/2/2021 – 15:21:52 - - stream “checksum-validation”: enabled
16/2/2021 – 15:21:52 - - stream.“inline”: enabled
16/2/2021 – 15:21:52 - - stream “bypass”: disabled
16/2/2021 – 15:21:52 - - stream “max-synack-queued”: 5
16/2/2021 – 15:21:52 - - stream.reassembly “memcap”: 268435456
16/2/2021 – 15:21:52 - - stream.reassembly “depth”: 1048576
16/2/2021 – 15:21:52 - - stream.reassembly “toserver-chunk-size”: 2579
16/2/2021 – 15:21:52 - - stream.reassembly “toclient-chunk-size”: 2657
16/2/2021 – 15:21:52 - - stream.reassembly.raw: enabled
16/2/2021 – 15:21:52 - - stream.reassembly “segment-prealloc”: 2048
16/2/2021 – 15:21:52 - - fast output device (regular) initialized: fast.log
16/2/2021 – 15:21:52 - - eve-log output device (regular) initialized: eve.json
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘alert’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘anomaly’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘http’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘dns’
16/2/2021 – 15:21:52 - - eve-log dns version not set, defaulting to version 2
16/2/2021 – 15:21:52 - - eve-log dns version not set, defaulting to version 2
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘tls’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘files’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘smtp’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘ftp’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘rdp’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘nfs’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘smb’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘tftp’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘ikev2’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘dcerpc’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘krb5’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘snmp’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘rfb’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘sip’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘dhcp’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘ssh’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘mqtt’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘stats’
16/2/2021 – 15:21:52 - - enabling ‘eve-log’ module ‘flow’
16/2/2021 – 15:21:52 - - stats output device (regular) initialized: stats.log
16/2/2021 – 15:21:52 - - Delayed detect disabled
16/2/2021 – 15:21:52 - - Running in live mode, activating unix socket
16/2/2021 – 15:21:52 - - SSSE3 support not detected, disabling Hyperscan for SPM
16/2/2021 – 15:21:52 - - pattern matchers: MPM: ac, SPM: bm
16/2/2021 – 15:21:52 - - grouping: tcp-whitelist (default) 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
16/2/2021 – 15:21:52 - - grouping: udp-whitelist (default) 53, 135, 5060
16/2/2021 – 15:21:52 - - prefilter engines: MPM
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for http_uri
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for http_raw_uri
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for http_request_line

16/2/2021 – 15:21:52 - - using shared mpm ctx’ for tls.sni
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for tls.cert_issuer
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for tls.cert_subject
16/2/2021 – 15:21:52 - - IP reputation disabled
16/2/2021 – 15:21:52 - - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/suricata.rules
16/2/2021 – 15:21:52 - - No rules loaded from suricata.rules.
16/2/2021 – 15:21:52 - - [ERRCODE: SC_ERR_NO_RULES_LOADED(43)] - 1 rule files specified, but no rules were loaded!
16/2/2021 – 15:21:52 - - Threshold config parsed: 0 rule(s) found
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for tcp-packet
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for tcp-stream
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for udp-packet
16/2/2021 – 15:21:52 - - using shared mpm ctx’ for other-ip
16/2/2021 – 15:21:52 - - 0 signatures processed. 0 are IP-only rules, 0 are inspecting packet payload, 0 inspect application layer, 0 are decoder event only
16/2/2021 – 15:21:52 - - building signature grouping structure, stage 1: preprocessing rules… complete
16/2/2021 – 15:21:52 - - TCP toserver: 0 port groups, 0 unique SGH’s, 0 copies
16/2/2021 – 15:21:52 - - TCP toclient: 0 port groups, 0 unique SGH’s, 0 copies
16/2/2021 – 15:21:52 - - UDP toserver: 0 port groups, 0 unique SGH’s, 0 copies
16/2/2021 – 15:21:52 - - UDP toclient: 0 port groups, 0 unique SGH’s, 0 copies
16/2/2021 – 15:21:52 - - OTHER toserver: 0 proto groups, 0 unique SGH’s, 0 copies
16/2/2021 – 15:21:52 - - OTHER toclient: 0 proto groups, 0 unique SGH’s, 0 copies
16/2/2021 – 15:21:52 - - Unique rule groups: 0
16/2/2021 – 15:21:52 - - Builtin MPM “toserver TCP packet”: 0
16/2/2021 – 15:21:52 - - Builtin MPM “toclient TCP packet”: 0
16/2/2021 – 15:21:52 - - Builtin MPM “toserver TCP stream”: 0
16/2/2021 – 15:21:52 - - Builtin MPM “toclient TCP stream”: 0
16/2/2021 – 15:21:52 - - Builtin MPM “toserver UDP packet”: 0
16/2/2021 – 15:21:52 - - Builtin MPM “toclient UDP packet”: 0
16/2/2021 – 15:21:52 - - Builtin MPM “other IP packet”: 0
16/2/2021 – 15:21:52 - - AF_PACKET IPS mode activated eth0->eth1
16/2/2021 – 15:21:52 - - Using flow cluster mode for AF_PACKET (iface eth0)
16/2/2021 – 15:21:52 - - eth0: disabling gro offloading
16/2/2021 – 15:21:52 - - eth0: disabling tso offloading
16/2/2021 – 15:21:52 - - eth0: disabling gso offloading
16/2/2021 – 15:21:52 - - eth0: disabling sg offloading
16/2/2021 – 15:21:52 - - eth0: enabling zero copy mode by using data release call
16/2/2021 – 15:21:52 - - Going to use 1 thread(s)
16/2/2021 – 15:21:52 - - AF_PACKET IPS mode activated eth1->eth0
16/2/2021 – 15:21:52 - - Using flow cluster mode for AF_PACKET (iface eth1)
16/2/2021 – 15:21:52 - - eth1: disabling gro offloading
16/2/2021 – 15:21:52 - - eth1: disabling tso offloading
16/2/2021 – 15:21:52 - - eth1: disabling gso offloading
16/2/2021 – 15:21:52 - - eth1: disabling sg offloading
16/2/2021 – 15:21:52 - - eth1: enabling zero copy mode by using data release call
16/2/2021 – 15:21:52 - - Going to use 1 thread(s)
16/2/2021 – 15:21:52 - - Found an MTU of 1500 for ‘eth1’
16/2/2021 – 15:21:52 - - Found an MTU of 1500 for ‘eth0’
16/2/2021 – 15:21:52 - - using 1 flow manager threads
16/2/2021 – 15:21:52 - - using 1 flow recycler threads
16/2/2021 – 15:21:52 - - Running in live mode, activating unix socket
16/2/2021 – 15:21:52 - - Using unix socket file ‘/var/run/suricata/suricata-command.socket’
16/2/2021 – 15:21:52 - - all 2 packet processing threads, 4 management threads initialized, engine started.
16/2/2021 – 15:21:52 - - Setting AF_PACKET socket buffer to 64535
16/2/2021 – 15:21:52 - - AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1600 frame_nr=2060
16/2/2021 – 15:21:52 - - Setting AF_PACKET socket buffer to 64535
16/2/2021 – 15:21:52 - - AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1600 frame_nr=2060
16/2/2021 – 15:21:52 - - All AFP capture threads are running.

eve.json (32.8 KB)

I would suggest removing buffer-size: 64535, using ring-size instead, and explicitly disabling AF_PACKET v3 with tpacket-v3: no.

I made the corrections in the config, but the connection still does not work; the host still does not connect to the network.
The latest version of the config is as follows:

config

af-packet:
  - interface: eth0
    threads: 1
    defrag: no
    cluster-type: cluster_flow
    cluster-id: 98
    copy-mode: ips
    copy-iface: eth1
    ring-size: 2048
    use-mmap: yes
    tpacket-v3: no
    disable-promisc: yes
  - interface: eth1
    threads: 1
    cluster-id: 97
    defrag: no
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: eth0
    ring-size: 2048
    use-mmap: yes
    tpacket-v3: no
    disable-promisc: yes

Perhaps there are some recommendations for running Suricata in a virtualization environment?

Is “MAC address spoofing” enabled on at least the two v-ethernet interfaces used by Suricata inline?

Setting for $VM_NAME > Hardware > Network Adapter > Advanced Features > “Enable MAC address spoofing”
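For reference, the same checkbox can be set from PowerShell on the Hyper-V host (a sketch: $VM_NAME is a placeholder for the VM name, and the cmdlet applies the setting to all of that VM's adapters unless one is selected):

```powershell
# Enable MAC address spoofing for every network adapter of the VM.
# Run in an elevated PowerShell session on the Hyper-V host.
Set-VMNetworkAdapter -VMName $VM_NAME -MacAddressSpoofing On
```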

No, “MAC address spoofing” is not enabled. All network adapters (Suricata, TestPC) are configured the same way (all advanced features of the Hyper-V network adapter are disabled):


(screenshots of the adapter settings attached)

But I tested various options, and in the end none of them worked. The end host always shows the same picture (screenshot attached).

In Hyper-V, all interfaces used for traffic forwarding (such as bridges) must have “MAC address spoofing” enabled. If you still cannot connect even with “MAC address spoofing” enabled, test the most basic Linux bridge first to see whether there are other configuration problems.
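The suggested baseline test can be done with a minimal Linux bridge inside the Suricata VM, with Suricata stopped; if DHCP still fails across this plain bridge, the problem lies in the Hyper-V/guest networking rather than in Suricata. A sketch assuming the same eth0/eth1 pair (requires root):

```shell
# Transparent Linux bridge between eth0 and eth1, no Suricata involved.
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip link set dev eth0 up
ip link set dev eth1 up
ip link set dev br0 up
# Clean up afterwards: ip link del br0
```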