Need help with the design of multiple Suricata instances

I have 10 Suricata machines acting as internal NIDS. We need to reinstall them since the base RHEL is to be upgraded to RHEL 9.
We also plan to move these machines to the external network (passive / packet-sniffing mode) and update them to the latest Suricata.
I was using PulledPork and need to switch to suricata-update; we also have a subscription to the Proofpoint ET ruleset. What URLs do both of these need to reach on the Internet?
How can I centrally manage all 10 instances for rule updates and deployments, and more importantly for log collection and monitoring?

suricata-update knows how to contact ProofPoint for the ET rule sources. You can run suricata-update list-sources to view the URL it’ll use for ProofPoint and other rule sources.
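A quick sketch of those commands (run as root or with write access to /var/lib/suricata; paths may differ on your install):

```
# Refresh the rule-source index, then list every source suricata-update knows
# about, including its vendor and the URL it will fetch from.
sudo suricata-update update-sources
sudo suricata-update list-sources
```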

If you run suricata-update enable-source et/pro, it will prompt you for your ET/Pro authentication code and take care of the URL handling for you. Once you run suricata-update, you’ll also see the URL used.
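Roughly like this (a sketch; the secret code comes from your Proofpoint ET/Pro subscription):

```
# Enable the ET/Pro source; suricata-update will prompt for your secret code.
sudo suricata-update enable-source et/pro

# Fetch and merge the rules. The output logs the exact URL being downloaded,
# which is also what you would whitelist on a restrictive network.
sudo suricata-update
```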

It is possible to run suricata-update on a central server to build the rules. Note that suricata-update takes into consideration the version of Suricata installed, as well as which protocols are enabled, so you may want to have Suricata configured on that machine the same way as your deployments (it doesn’t need to be running). Then maybe Ansible or Salt to push them out? See the sketch below.
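As a minimal sketch of that flow (the sensor hostnames nids01..nids10 and the rsync/ssh push are placeholders; Ansible or Salt would replace the loop in practice):

```
#!/bin/bash
set -euo pipefail

# Build the merged ruleset on the central host; by default suricata-update
# writes it to /var/lib/suricata/rules/suricata.rules.
suricata-update

for host in nids{01..10}; do
    # Push the merged rule file to each sensor.
    rsync -az /var/lib/suricata/rules/suricata.rules \
        "root@${host}:/var/lib/suricata/rules/suricata.rules"

    # Ask the running Suricata to reload its ruleset over the unix socket
    # (requires suricatasc on the sensor); a service restart works too.
    ssh "root@${host}" "suricatasc -c reload-rules"
done
```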

Elasticsearch is very popular for managing alerts, but there are quite a few products/projects in this space.
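Whatever stack you choose, the data these tools consume is Suricata’s EVE JSON log; a shipper such as Filebeat or Logstash normally forwards it to Elasticsearch. As a quick local illustration (default eve.json path assumed):

```
# Suricata writes alerts and protocol events as JSON lines to eve.json.
# This just filters the alert events locally; in a real deployment a log
# shipper would forward the same file to your central store.
sudo tail -f /var/log/suricata/eve.json | jq 'select(.event_type == "alert")'
```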

Our servers do not talk to the Internet by default, so if they have to get updates from ET or regular updates via suricata-update, which URLs should be whitelisted so they can be fetched?

I recommend installing Suricata-Update on a machine with Internet access and running it there. It will display all the exact URLs for you. URLs can also change over time, so running it this way helps you diagnose issues as well.
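If that machine can only reach the Internet through a proxy, suricata-update should honor the standard proxy environment variables since it’s a Python tool, though this is worth verifying on your version. A sketch, with proxy.example.com:3128 as a placeholder:

```
# Route suricata-update's downloads through an outbound proxy.
export https_proxy="http://proxy.example.com:3128"
export http_proxy="http://proxy.example.com:3128"
sudo -E suricata-update update-sources
sudo -E suricata-update
```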

You can also look at the raw index, which has the URL templates: suricata-intel-index/index.yaml at master · OISF/suricata-intel-index · GitHub
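If you just want to eyeball the templates without installing anything, something along these lines should work (the raw URL is inferred from the GitHub link above, and the YAML layout may change over time):

```
# Fetch the raw rule-source index and show the Emerging Threats entries; the
# url: fields are templates filled in with your Suricata version (and, for
# ET/Pro, your secret code).
curl -s https://raw.githubusercontent.com/OISF/suricata-intel-index/master/index.yaml \
    | grep -B2 -A8 'et/open\|et/pro'
```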