Suricata-IDS and file server or storage

Yes. While not absolutely required to demonstrate the functionality of the IPS, having a management interface is useful.


Hello,
My Suricata-IDS has three NICs:

# ifconfig
CLIENT: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a00:27ff:fee5:267c  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:e5:26:7c  txqueuelen 1000  (Ethernet)
        RX packets 116  bytes 16859 (16.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24  bytes 4768 (4.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

NAT: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe7b:8f51  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:7b:8f:51  txqueuelen 1000  (Ethernet)
        RX packets 531  bytes 114732 (112.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 481  bytes 120876 (118.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

SERVER: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a00:27ff:febc:c5a7  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:bc:c5:a7  txqueuelen 1000  (Ethernet)
        RX packets 114  bytes 16175 (15.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 5452 (5.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

And my Windows server and client network settings are as follows (this is the client's configuration):

[screenshot of the client's NIC settings]

I started the Suricata-IDS, but I got the following error:

# suricata --af-packet
i: suricata: This is Suricata version 7.0.0 RELEASE running in SYSTEM mode
W: ioctl: Failure when trying to get MTU via ioctl for 'eth0': No such device (19)
E: af-packet: eth0: failed to find interface type: No such device
E: af-packet: eth0: failed to find interface: No such device
E: af-packet: eth0: failed to init socket for interface
E: threads: thread "W#01-eth0" failed to start: flags 0423

This error is related to network card selection, but I don’t know which network card should replace eth0 in the Suricata-IDS configuration file.

It’ll depend on which side of the NAT you want to monitor. It’s more a personal deployment question. It helps to physically trace the Ethernet cables, determine where you want to tap, and run there; or just choose one and see if you are getting the data you want to inspect.

Usually with a NAT box you’d have 2 interfaces. One with the “external” address and one with the “internal”. As I’m usually monitoring the internal machines, I run Suricata on the internal interface.


Hello,
Thanks again.
I want to launch Suricata-IDS in IPS mode and block attacks from clients to the server. Should I replace eth0 in the configuration file with the CLIENT NIC?

Hello,
I replaced eth0 with CLIENT in the configuration file and ran Suricata-IDS:

# suricata --af-packet
i: suricata: This is Suricata version 7.0.0 RELEASE running in SYSTEM mode
i: threads: Threads created -> W: 2 FM: 1 FR: 1   Engine started.

How can I make sure that the traffic goes through the Suricata-IDS server?

Depending on your network, you can ping a machine whose traffic should pass through the box and check whether you see the related event in Suricata. Or add a signature and trigger a specific alert, for example with http://testmynids.org/uid/index.html
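A minimal version of that ping check might look like this; the IP addresses are illustrative (they match the lab addressing used later in this thread), and the log path assumes the default suricata.yaml log directory:

```shell
# On the Client VM: send traffic that must cross the Suricata box
ping -c 3 172.16.1.1

# On the Suricata box: look for the matching ICMP flow events in eve.json
grep '"proto":"ICMP"' /var/log/suricata/eve.json | tail -n 3
```

If the grep returns nothing while the ping succeeds, the traffic is reaching the destination without crossing the monitored interface.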


Hello,
Thank you so much for your reply.
As I said, I replaced eth0 with CLIENT in the configuration file. The log file contains the following contents:

# cat /var/log/suricata/suricata.log 
[847 - Suricata-Main] 2023-09-19 07:45:15 Notice: suricata: This is Suricata version 7.0.0 RELEASE running in SYSTEM mode
[847 - Suricata-Main] 2023-09-19 07:45:15 Info: cpu: CPUs/cores online: 2
[847 - Suricata-Main] 2023-09-19 07:45:15 Info: suricata: Setting engine mode to IDS mode by default
[847 - Suricata-Main] 2023-09-19 07:45:15 Info: exception-policy: master exception-policy set to: auto
[847 - Suricata-Main] 2023-09-19 07:45:16 Info: ioctl: CLIENT: MTU 1500
[847 - Suricata-Main] 2023-09-19 07:45:16 Info: conf: Running in live mode, activating unix socket
[847 - Suricata-Main] 2023-09-19 07:45:16 Info: logopenfile: fast output device (regular) initialized: fast.log
[847 - Suricata-Main] 2023-09-19 07:45:16 Info: logopenfile: eve-log output device (regular) initialized: eve.json
[847 - Suricata-Main] 2023-09-19 07:45:16 Info: logopenfile: stats output device (regular) initialized: stats.log
[847 - Suricata-Main] 2023-09-19 07:45:23 Info: detect: 1 rule files processed. 35501 rules successfully loaded, 0 rules failed
[847 - Suricata-Main] 2023-09-19 07:45:23 Info: threshold-config: Threshold config parsed: 0 rule(s) found
[847 - Suricata-Main] 2023-09-19 07:45:23 Info: detect: 35504 signatures processed. 1410 are IP-only rules, 5276 are inspecting packet payload, 28606 inspect application layer, 108 are decoder event only
[847 - Suricata-Main] 2023-09-19 07:45:27 Info: runmodes: CLIENT: creating 2 threads
[847 - Suricata-Main] 2023-09-19 07:45:27 Info: unix-manager: unix socket '/var/run/suricata/suricata-command.socket'
[847 - Suricata-Main] 2023-09-19 07:45:27 Info: unix-manager: created socket directory /var/run/suricata/
[847 - Suricata-Main] 2023-09-19 07:45:27 Notice: threads: Threads created -> W: 2 FM: 1 FR: 1   Engine started.
[847 - Suricata-Main] 2023-09-19 07:46:38 Notice: suricata: Signal Received.  Stopping engine.
[847 - Suricata-Main] 2023-09-19 07:46:38 Info: suricata: time elapsed 71.410s
[847 - Suricata-Main] 2023-09-19 07:46:39 Info: counters: Alerts: 0
[847 - Suricata-Main] 2023-09-19 07:46:39 Notice: device: CLIENT: packets: 12, drops: 0 (0.00%), invalid chksum: 0

I have some questions:

1- What do you mean by signature and http://testmynids.org/uid/index.html? Can you show me some examples?

2- I guess the implementation of the virtual environment is wrong. A client and a server can see each other without the Suricata-IDS server, so why would the traffic pass through the Suricata-IDS server at all?

  1. You can trigger this rule with the mentioned site:
alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
  2. Check your interface, maybe with tcpdump, to see whether the forwarded traffic is actually being received correctly or not.
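That tcpdump check could look like the following sketch; the interface name CLIENT is taken from the ifconfig output earlier in the thread, and the capture filter is only an example:

```shell
# Capture a handful of packets on the monitored interface
# -n: no name resolution, -i: interface, -c: stop after N packets
tcpdump -ni CLIENT -c 20

# Or narrow the capture, e.g. to ICMP only while pinging from the client
tcpdump -ni CLIENT -c 10 icmp
```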

Hello,
Thanks again.
1- Should I write the above rule in /var/lib/suricata/rules/suricata.rules file?

2- How are the CLIENT and SERVER network cards in the Suricata-IDS server connected to the client and server virtual machines network cards? As I said, the client and a server without a Suricata-IDS server can see each other and this means that the traffic does not pass through the Suricata-IDS server.

  1. Yes, you could; it depends on how you manage your rules.

  2. I don’t know how you configured your network and did the cabling. You need to make sure that the surrounding traffic forwarding is set up correctly and that you see the traffic on the interfaces where Suricata runs. Without more details about your surrounding setup and config, this is hard to tell.
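For point 1, a sketch of appending the rule to the rule file mentioned in the question and then reloading the running engine; suricatasc ships with Suricata and talks to the unix socket shown in the log output above:

```shell
# Append the test signature to the rule file Suricata loads
cat >> /var/lib/suricata/rules/suricata.rules <<'EOF'
alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7;)
EOF

# Ask the running engine to reload its ruleset over the unix socket
suricatasc -c reload-rules
```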


Hello,
Thanks again.
If you see the 18th post, then you will notice the configuration of my virtual environment. I guess it is wrong.

Hello,
In my virtual environment, I used two different internal networks for the Client and Server VMs:

intnet1
intnet2

Then, I used two different ranges of IP addresses for the Client and the Server VMs:

Client: 192.168.1.1
Server: 172.16.1.1

After that, I connected one of the Suricata-IDS server’s NICs to the internal network named intnet1 and the other to the internal network named intnet2. In other words, each Suricata-IDS server NIC is connected to a separate internal network. Then I gave the Suricata-IDS server’s NICs IP addresses within the ranges of those two networks:

# ifconfig
CLIENT: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.2  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::a00:27ff:fee5:267c  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:e5:26:7c  txqueuelen 1000  (Ethernet)
        RX packets 4505  bytes 304773 (297.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4078  bytes 274727 (268.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

NAT: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe7b:8f51  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:7b:8f:51  txqueuelen 1000  (Ethernet)
        RX packets 1207  bytes 93386 (91.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 905  bytes 213344 (208.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

SERVER: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.1.2  netmask 255.255.255.0  broadcast 172.16.1.255
        inet6 fe80::a00:27ff:febc:c5a7  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:bc:c5:a7  txqueuelen 1000  (Ethernet)
        RX packets 3048  bytes 210281 (205.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3309  bytes 220270 (215.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

On the Suricata-IDS server, I enabled IP forwarding, and I added routes on the Client and Server so that these two VMs can reach each other through the Suricata-IDS server. The Suricata-IDS server acts as the default gateway.
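The forwarding and routing described above might be set up like this (Linux syntax shown for all three machines as a sketch; on the Windows VMs the equivalent would be `route add`; the addresses are the ones from this post):

```shell
# On the Suricata-IDS server: forward packets between CLIENT and SERVER
sysctl -w net.ipv4.ip_forward=1

# On the Client VM (192.168.1.1): reach the server subnet via the box
ip route add 172.16.1.0/24 via 192.168.1.2

# On the Server VM (172.16.1.1): reach the client subnet via the box
ip route add 192.168.1.0/24 via 172.16.1.2
```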
I ran Suricata-IDS:

# suricata --af-packet
i: suricata: This is Suricata version 7.0.0 RELEASE running in SYSTEM mode
i: threads: Threads created -> W: 2 FM: 1 FR: 1   Engine started.

Then, I scanned the Server VM from the Client VM with the Nmap program. Suricata-IDS shows me the following reports:

# cat /var/log/suricata/suricata.log
[788 - Suricata-Main] 2023-09-27 01:42:11 Info: cpu: CPUs/cores online: 2
[788 - Suricata-Main] 2023-09-27 01:42:11 Info: suricata: Setting engine mode to IDS mode by default
[788 - Suricata-Main] 2023-09-27 01:42:11 Info: exception-policy: master exception-policy set to: auto
[788 - Suricata-Main] 2023-09-27 01:42:11 Info: ioctl: CLIENT: MTU 1500
[788 - Suricata-Main] 2023-09-27 01:42:11 Info: conf: Running in live mode, activating unix socket
[788 - Suricata-Main] 2023-09-27 01:42:11 Info: logopenfile: fast output device (regular) initialized: fast.log
[788 - Suricata-Main] 2023-09-27 01:42:11 Info: logopenfile: eve-log output device (regular) initialized: eve.json
[788 - Suricata-Main] 2023-09-27 01:42:11 Info: logopenfile: stats output device (regular) initialized: stats.log
[788 - Suricata-Main] 2023-09-27 01:42:12 Info: detect: 1 rule files processed. 35501 rules successfully loaded, 0 rules failed
[788 - Suricata-Main] 2023-09-27 01:42:12 Info: threshold-config: Threshold config parsed: 0 rule(s) found
[788 - Suricata-Main] 2023-09-27 01:42:12 Info: detect: 35504 signatures processed. 1410 are IP-only rules, 5276 are inspecting packet payload, 28606 inspect application layer, 108 are decoder event only
[788 - Suricata-Main] 2023-09-27 01:42:16 Info: runmodes: CLIENT: creating 2 threads
[788 - Suricata-Main] 2023-09-27 01:42:16 Info: unix-manager: unix socket '/var/run/suricata/suricata-command.socket'
[788 - Suricata-Main] 2023-09-27 01:42:16 Notice: threads: Threads created -> W: 2 FM: 1 FR: 1   Engine started.
[788 - Suricata-Main] 2023-09-27 01:57:49 Notice: suricata: Signal Received.  Stopping engine.
[788 - Suricata-Main] 2023-09-27 01:57:49 Info: suricata: time elapsed 932.893s
[788 - Suricata-Main] 2023-09-27 01:57:50 Info: counters: Alerts: 29
[788 - Suricata-Main] 2023-09-27 01:57:50 Notice: device: CLIENT: packets: 3159, drops: 0 (0.00%), invalid chksum: 0

And:

# cat /var/log/suricata/fast.log 
09/27/2023-01:38:28.006570  [**] [1:2200025:2] SURICATA ICMPv4 unknown code [**] [Classification: Generic Protocol Command Decode] [Priority: 3] {ICMP} 192.168.1.1:8 -> 172.16.1.2:9
09/27/2023-01:38:28.006622  [**] [1:2200025:2] SURICATA ICMPv4 unknown code [**] [Classification: Generic Protocol Command Decode] [Priority: 3] {ICMP} 172.16.1.2:0 -> 192.168.1.1:9
09/27/2023-01:39:04.037795  [**] [1:2260000:1] SURICATA Applayer Mismatch protocol both directions [**] [Classification: Generic Protocol Command Decode] [Priority: 3] {TCP} 192.168.1.1:1052 -> 172.16.1.1:135

What is your opinion? Does Suricata-IDS work properly?

Thank you.

Unlikely. You need to test that you have the connectivity you need as well.

One problem with this setup is that your Linux machine appears to be set up for routing, but you are using AF_PACKET IPS, which is a bridging setup. The two are not compatible.

What I recommend is to start by removing Suricata, get your network and connectivity working as expected. Then:

  • If a routing setup, add the iptables/netfilter rules to use NFQ IPS mode
  • If a bridging setup, stop the Linux bridge, and replace with Suricata in AF_PACKET IPS mode.
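For the bridging option, AF_PACKET IPS mode is configured in suricata.yaml by pairing the two interfaces with copy-mode: ips; a sketch using the interface names from this thread:

```yaml
af-packet:
  - interface: CLIENT
    copy-mode: ips       # copy packets to the peer interface (IPS bridge)
    copy-iface: SERVER
    use-mmap: yes
  - interface: SERVER
    copy-mode: ips
    copy-iface: CLIENT
    use-mmap: yes
```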

Hello,
Thank you so much for your reply.
Why should I delete Suricata-IDS? I can reset the settings:
1- Delete the IP addresses that I have manually assigned to the NICs of the Suricata-IDS server and disable IP forwarding.

2- Delete the routing command that I have written in the VMs.

3- Change the IP addresses of the VMs to 192.168.1.1/24 and 192.168.1.2/24.

The problem with the virtual environment was that I had used the same name (intnet1) for both internal networks in the VM (Client and Server) settings, and because of this the VMs could ping each other directly.

After that, if everything works correctly, when I run the suricata --af-packet command, Suricata-IDS should establish the connection between the two VMs and they can ping each other. Am I right?

Just to keep things simple.

Put in place the box that will become your Suricata IPS machine.

Get the network configuration working, and make sure you have the connectivity you need. This lets you be concerned with the network first. Then deploy Suricata as a passive monitor; this requires no network reconfiguration on the host and lets you verify that the traffic you expect to see is actually being seen. When that is all good, deploy Suricata in IPS mode. That way you know whether you are debugging your network or Suricata.

You also then know how to stop Suricata and keep your network working if you suspect there is a problem with Suricata.

Sorry, I can’t help with your virtualized setup. Virtualization adds additional layers of abstraction and complexity and just does not behave as real network cards and cables do.


Hello,
Thanks again.
In my opinion, the network is now correct; as I said, the naming in the network settings was the problem.
Your words raised several questions for me:

1- How to deploy Suricata-IDS as a passive monitor?

2- Suricata-IDS’s reports show that it sees the traffic exchanged between the two computers, don’t they?

3- In bridging mode, when the Suricata-IDS server goes down, the clients are disconnected from the server. So I should choose the right strategy between NFQ IPS mode and AF_PACKET IPS mode. Right?

  1. This is how Suricata works by default. suricata -i enp10s0 or something will run it passively on the specified device.

  2. As you were using bridge mode, yes, it was blindly copying packets from one interface to the other. But probably not in a way that would make your network work: your Linux kernel was probably routing the packets as well.

  3. Yes, in bridging mode there is no fallback when Suricata goes down. You could do some custom scripting to spin up a Linux bridge with ip and brctl. NFQ has a --queue-bypass option to fail open when Suricata goes away.
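The fail-open NFQ variant mentioned in point 3 is set on the iptables side, not in Suricata; the rule placement here is only an example:

```shell
# With --queue-bypass, packets are accepted instead of dropped
# whenever no userspace program is listening on the queue
iptables -I FORWARD -j NFQUEUE --queue-num 0 --queue-bypass
```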


Hello,
Thank you so much.
So, in the current setup, does Suricata-IDS have no control over the packets being exchanged between the VMs because I have IP forwarding enabled?

Suricata thinks it has control. Like I said, it’s copying packets from one interface to another. But this is likely not what your network is expecting. You are bridging the packets (Suricata) and routing them (Linux) at the same time. This is surely not what you want.


Hello,
Thanks again.
With the current configuration (Suricata-IDS server as default gateway) I want to do my test and then switch to bridging mode.
I have three questions:

1- Does IP forwarding have to be enabled on the Suricata-IDS server?

2- Are these the iptables rules that I should run on the Suricata-IDS server?

$ sudo iptables -I FORWARD -i CLIENT -o SERVER -j NFQUEUE
$ sudo iptables -I FORWARD -i SERVER -o CLIENT -j NFQUEUE

3- Finally, should I run Suricata-IDS as follows?

$ sudo suricata -c /etc/suricata/suricata.yaml -q 0