SELKS with external Elasticsearch server

Hello,
I am using the SELKS 10 Docker version on Ubuntu 22.04.4 and decided to try an external Elasticsearch server to see if the SELKS host runs faster. ES 8.15.0 was installed on Ubuntu 24.04, using the Elastic repository at artifacts.elastic.co:

echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list

I configured the “Use an external Elasticsearch server” setting on /rules/settings with the external ES IP address and port 9200.

If I click on Check, the external ES server responds correctly with “Connected to ES 8.15.0”.
But the Hunting section of SELKS (both Dashboards and Events) always returns “No Data”.

I think I am missing something that needs to be done on the ES server. Can you point me to any documentation?
Thanks

I think you can check the elasticsearch.log for any info. However, a new template ( SELKS/docker/containers-data/logstash/templates at master · StamusNetworks/SELKS · GitHub ) would be needed for ES 8.
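
If it comes to that, the template can also be pushed to the external ES by hand, something along these lines (the host is a placeholder, the template name “logstash” is my assumption, and the legacy _template endpoint is deprecated on ES 8 but still accepted):

curl -XPUT 'http://EXTERNAL_ES_IP:9200/_template/logstash' -H 'Content-Type: application/json' --data-binary @elasticsearch7-template.json

(run from the templates directory linked above)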

Hello, Peter
elasticsearch.log on the SELKS computer only shows requests with status 200 (no errors), many like these:

2024-08-16 12:27:27,750 GET http://x.y.53.137:9200/logstash-alert-**,logstash-*stamus-*/_search?_source=true&ignore_unavailable=true [status:200 request:0.007s]
2024-08-16 12:27:27,766 GET http://x.y.53.137:9200/logstash-alert-**,logstash-*stamus-*/_search?_source=true&ignore_unavailable=true [status:200 request:0.051s]
2024-08-16 12:27:27,823 GET http://x.y.53.137:9200/logstash-alert-**,logstash-*stamus-*/_search?_source=true&ignore_unavailable=true [status:200 request:0.242s]
2024-08-16 12:28:06,922 GET http://x.y.53.137:9200/_cluster/health [status:200 request:0.003s]
2024-08-16 12:28:06,925 GET http://x.y.53.137:9200/_settings [status:200 request:0.003s]

while on ElasticSrv (the external ES server), the log has lots of warnings and info messages:

[2024-08-16T09:31:02,980][INFO ][o.e.m.j.JvmGcMonitorService] [ElasticSrv] [gc][56567] overhead, spent [272ms] collecting in the last [1s]
[2024-08-16T09:31:37,135][INFO ][o.e.m.j.JvmGcMonitorService] [ElasticSrv] [gc][56601] overhead, spent [260ms] collecting in the last [1s]
[2024-08-16T09:31:52,505][WARN ][o.e.m.j.JvmGcMonitorService] [ElasticSrv] [gc][young][56615][718] duration [1.1s], collections [1]/[1.1s], total [1.1s]/[20.1s], memory [603.1mb]->[386.5mb]/[1.9gb], all_pools {[young] [220mb]->[0b]/[0b]}{[old] [341.2mb]->[370.5mb]/[1.9gb]}{[survivor] [41.9mb]->[15.9mb]/[0b]}
[2024-08-16T09:31:52,551][WARN ][o.e.m.j.JvmGcMonitorService] [ElasticSrv] [gc][56615] overhead, spent [1.1s] collecting in the last [1.1s]
[2024-08-16T09:39:35,461][WARN ][o.e.m.j.JvmGcMonitorService] [ElasticSrv] [gc][young][57075][734] duration [1.9s], collections [1]/[2.6s], total [1.9s]/[23s], memory [1.5gb]->[495mb]/[1.9gb], all_pools {[young] [1gb]->[8mb]/[0b]}{[old] [435mb]->[435mb]/[1.9gb]}{[survivor] [50mb]->[56mb]/[0b]}
[2024-08-16T09:39:35,670][WARN ][o.e.m.j.JvmGcMonitorService] [ElasticSrv] [gc][57075] overhead, spent [1.9s] collecting in the last [2.6s]
[2024-08-16T09:41:46,715][WARN ][o.e.m.j.JvmGcMonitorService] [ElasticSrv] [gc][G1 Concurrent GC][57184][15] duration [22.5s], collections [1]/[22.6s], total [22.5s]/[23.5s], memory [925mb]->[925mb]/[1.9gb], all_pools {[young] [436mb]->[436mb]/[0b]}{[old] [435mb]->[431mb]/[1.9gb]}{[survivor] [58mb]->[58mb]/[0b]}
[2024-08-16T09:41:46,783][WARN ][o.e.m.j.JvmGcMonitorService] [ElasticSrv] [gc][57184] overhead, spent [22.5s] collecting in the last [22.6s]
[2024-08-16T09:41:46,844][WARN ][o.e.t.ThreadPool         ] [ElasticSrv] timer thread slept for [22.6s/22674ms] on absolute clock which is above the warn threshold of [5000ms]
[2024-08-16T09:41:47,021][WARN ][o.e.t.ThreadPool         ] [ElasticSrv] timer thread slept for [22.6s/22673683366ns] on relative clock which is above the warn threshold of [5000ms]
[2024-08-16T09:42:39,308][INFO ][o.e.c.m.MetadataMappingService] [ElasticSrv] [logstash-http-2024.08.15/e5HOzRbnTIeGEGclDkYZ3A] update_mapping [_doc]
[2024-08-16T09:42:50,390][WARN ][o.e.g.PersistedClusterStateService] [ElasticSrv] writing cluster state took [11012ms] which is above the warn threshold of [10s]; [skipped writing] global metadata, wrote [1] new mappings, removed [1] mappings and skipped [20] unchanged mappings, wrote metadata for [0] new indices and [1] existing indices, removed metadata for [0] indices and skipped [20] unchanged indices
[2024-08-16T09:42:50,697][INFO ][o.e.c.c.C.CoordinatorPublication] [ElasticSrv] after [11.3s] publication of cluster state version [232] is still waiting for {ElasticSrv}{0_SYC7z7R0S23z2MS3d_KA}{W6NpVc3HT-6ewXc0vgI_9w}{ElasticSrv}{localhost}{127.0.0.1:9300}{cdfhilmrstw}{8.15.0}{7000099-8512000}{transform.config_version=10.0.0, ml.machine_memory=4104945664, ml.allocated_processors=4, ml.allocated_processors_double=4.0, ml.max_jvm_size=2055208960, ml.config_version=12.0.0, xpack.installed=true} [SENT_APPLY_COMMIT]

Date and time are similar on both computers.

The elasticsearch7-template.json file on GitHub that you mentioned is exactly the same as the one I already have on the SELKS computer. Should it be edited or renamed? Or is this file meant for the external ES server?
Thanks

Aha, ok, so it is not an issue with the template. Do you have any data in the external one?
I see at least the http index being mapped: [ElasticSrv] [logstash-http-2024.08.15/e5HOzRbnTIeGEGclDkYZ3A] update_mapping [_doc]
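
You can check directly on the external box with plain _cat and _count calls, something like:

curl -s 'http://EXTERNAL_ES_IP:9200/_cat/indices/logstash-*?v'
curl -s 'http://EXTERNAL_ES_IP:9200/logstash-alert-*/_count'

That should show whether the logstash-* indices exist there and actually hold documents.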

Hi, Peter

I don’t have any data in it. I thought SELKS would store the new events or network packets in it, but they are still being saved on the SELKS computer (/opt/SELKS/docker/containers-data/suricata/logs).

To confirm that it is not an issue with the template, I installed Elasticsearch 7 on ElasticSrv. SELKS says “No data” there as well.
Thx

Hi,
Question:
Do you see data in Scirius (Hunt), and do you see data in the external ES Kibana (it seems there is data present, judging by the index creation)?
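
As background, it is the Logstash container that actually writes the Suricata EVE data into Elasticsearch, so it is worth checking where its elasticsearch output points. A rough sketch of such an output section, just for illustration (the index pattern and paths are assumptions, not taken from your install):

output {
  elasticsearch {
    # assumption: this should point at the external ES, not the local container
    hosts => ["http://EXTERNAL_ES_IP:9200"]
    # assumption: daily per-event-type indices, matching the logstash-* names seen above
    index => "logstash-%{event_type}-%{+YYYY.MM.dd}"
    # assumption: template file shipped into the container from the templates directory
    template => "/usr/share/logstash/templates/elasticsearch7-template.json"
    template_name => "logstash"
    template_overwrite => true
  }
}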

Hi, Peter

Let me see if I understood correctly:

The Scirius HUNTING sections Dashboards and Events show “No data”. If I run tcpdump on ElasticSrv, I can see traffic between Scirius and port 9200 in both directions.
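
For reference, the capture was nothing more elaborate than roughly:

sudo tcpdump -ni any tcp port 9200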

Scirius OTHER APPS → EveBox shows data. I think this data is not stored in Elasticsearch.
Scirius OTHER APPS → Kibana → Dashboards shows “Error” in all dashboards I tried to open.
Dashboards were OK when ES was local to Scirius…

On ElasticSrv, /var/log/elasticsearch has many gc.log.*, elasticsearch*.log and .json files with recent dates (including today). I can send you the last lines from these logs if you think this can help.

Thanks

Hi Luis,
did you push an Elasticsearch 8-suitable template initially?

Hi, Alex.

Sure, there is a file called elasticsearch7-template.json:
/opt/SELKS/docker/containers-data/logstash/templates/elasticsearch7-template.json
Peter Manev told me to download a newer version from GitHub, suitable for ES 8, but that one is exactly the same as the one I already had, including the file name, with “7”. I even renamed the file to “8”…
Should I edit this file? Should I create a database prior to starting Scirius?
Anyway, I created a new VM as the external Elasticsearch server with version 7 and left the v8 one aside. Clicking the “Check” button on Suricata Management, with “Use an external Elasticsearch server” enabled, confirms it is now connecting to a version 7. But it didn’t make any difference: Hunting Dashboards and Hunting Events still show “No data”.
Any clue?

Hello

During SELKS installation, there is an option:
"By default, elasticsearch database is stored in a docker volume in /var/lib/docker
With SELKS running, database can take up a lot of disk space
You might want to save them on an other disk/partition
Alternatively, You can specify a path where you want the data to be saved, or hit enter for default.
"
Should this be modified in order to use an external Elasticsearch server?
Thanks

Hi,

If you have connected to an external one, you don’t need to do more here.
Did you manage to get the template OK for ES 8?
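
A quick way to verify it landed on the external server would be something like this (again assuming the template name “logstash”):

curl -s 'http://EXTERNAL_ES_IP:9200/_cat/templates/logstash*?v'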

Thank you

Hi, Peter

I downloaded the template from GitHub, but the one I already have is the same.
I simply have it in the original folder; I didn’t do anything with it.
And I am still getting “No data” on Dashboards and Events in the Hunting section of Scirius.
If I check the connection between Scirius and the external Elasticsearch server, it correctly returns the server version, whether v7 or v8 (I tried with two different VMs: one with ES v8 and the other with ES v7).
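
For what it’s worth, hitting the server root directly returns the same version information, e.g.:

curl -s 'http://x.y.53.137:9200/'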