SFOS 19.5.1 MR-1 LoggingDaemon / Garner service DEAD

Hello, I logged into my Sophos XG firewall this morning and noticed that the LoggingDaemon/Garner service is dead. I tried restarting it from the command line and, of course, it would not start. I started digging through related threads on this forum, but I have not been able to find a solution. My next step is to restart the firewall over the weekend, but if there is anything else I can do before opening a ticket, it would be appreciated.

Here are outputs for logs, etc.:

SG230_WP01_SFOS 19.5.1 MR-1-Build278 HA-Primary# df -kh
Filesystem Size Used Available Use% Mounted on
none 1.5G 1.4M 1.4G 0% /
none 3.8G 904.0K 3.8G 0% /dev
none 3.8G 24.7M 3.8G 1% /tmp
none 3.8G 14.7M 3.8G 0% /dev/shm
/dev/boot 126.2M 30.5M 93.0M 25% /boot
/dev/mapper/mountconf
954.9M 110.3M 840.7M 12% /conf
/dev/content 10.5G 649.7M 9.8G 6% /content
/dev/var 80.7G 34.5G 46.2G 43% /var

SG230_WP01_SFOS 19.5.1 MR-1-Build278 HA-Primary# ls -lahr /var/cores
-rw------- 1 root 0 617.4M Aug 10 2020 core.nsg_async_io
-rw------- 1 root 0 67.2M Apr 6 18:14 core.garner
-rw------- 1 nobody nobody 48.5M Apr 5 22:07 core.bgpd
-rw------- 1 root 0 151.0K Apr 6 18:14 20f0f15a-ec26-41fe-d24427bb-a2677758.dmp
drwxr-xr-x 44 root 0 4.0K Apr 7 08:00 ..
drwxrwxrwt 2 root 0 4.0K Apr 6 18:14 .

SG230_WP01_SFOS 19.5.1 MR-1-Build278 HA-Primary# service -S | grep garner
garner DEAD


SG230_WP01_SFOS 19.5.1 MR-1-Build278 HA-Primary# tail -n 50 /log/garner.log
MESSAGE Apr 07 13:37:03Z [4151619328]: no_of_nodes: 36
MESSAGE Apr 07 13:37:03Z [4151619328]: size of tree 720
MESSAGE Apr 07 13:37:03Z [4151619328]: height_of_tree : 2
MESSAGE Apr 07 13:37:03Z [4151619328]: no_of_nodes: 2
MESSAGE Apr 07 13:37:03Z [4151619328]: size of tree 40
MESSAGE Apr 07 13:37:03Z [4151619328]: setting ssl threads locks...
MESSAGE Apr 07 13:37:03Z [4151619328]: ssl thread setup done
MESSAGE Apr 07 13:37:03Z [4151619328]: Daemon initialization complete
svc:init_uid_continue_mode:Error:cannot get the data page /appval20
SFEVENTSFTS: Apr 07 13:37:03Z:sfeventsfts_set_permitted_diskspace: permitted_diskspace 12999417856
SFEVENTSFTS: Apr 07 13:37:03Z:sfeventsfts_set_permitted_diskspace: used diskspace 12989100048
[ghb] ghb_init successful
MESSAGE Apr 07 13:37:03Z [4151619328]: [SCM::scm_init] /cfs/system/logging/cm.conf
[ghb] Connection to heartbeatd established
ERROR Apr 07 13:37:03Z [4151619328]: [SCM::scm_init_config_data] Invalid config object
ERROR Apr 07 13:37:03Z [4151619328]: [SCM::scm_init] scm_init_config_data failed
ERROR Apr 07 13:37:03Z [4151619328]: /lib/garner/outputplugin/libscm.so: init failed
ERROR Apr 07 13:37:03Z [4151619328]: parent_main: output plugin initialization failed
NOTIFICATIONS: Apr 07 13:37:03Z:notifications_output: plugin handle invalid
ERROR [CRFORMATTER] Apr 07 13:37:03Z [4151619328]: crformatter_output: plugin handle invalid
SFEVENTSFTS: Apr 07 13:37:03Z:sfeventsfts_output: output_data_list is NULL
ERROR [CRFORMATTER] Apr 07 13:37:03Z [4151619328]: crformatter_output: plugin handle invalid
ERROR Apr 07 13:37:03Z [4151619328]: centralreporting_output[CentralReporting]Plugin not Initialized or Invalid handle NULL
ERROR [CRFORMATTER] Apr 07 13:37:03Z [4151619328]: crformatter_output: plugin handle invalid
Plugin not Initialized or Invalid handle NULL
sethreshold_output: plugin handle invalid
ERROR Apr 07 13:37:03Z [4151619328]: resolver_output: plugin handle invalid
ERROR Apr 07 13:37:03Z [4151619328]: resolver_output: plugin handle invalid
ERROR Apr 07 13:37:03Z [4151619328]: resolver_output: plugin handle invalid
ERROR Apr 07 13:37:03Z [4151619328]: resolver_output: plugin handle invalid
Apr 07 13:37:03Z: OPPOSTGRES: oppostgres_output: plugin handle invalid
ERROR Apr 07 13:37:03Z [4151619328]: garner_main::calling garner_shutdown
MESSAGE Apr 07 13:37:03Z [4151619328]: garner: Closing servers

Freeing node[PortE2]
Freeing node[PortE3]
Freeing node[PortE0]
Freeing node[GuestAP]
Freeing node[reds1]
Freeing node[Port2]
Freeing node[PortE1]
Freeing node[Port1]
Freeing node[reds2]
Freeing node[ToCoreLAG]
Freeing node[PortE5]
Freeing node[reds3]
Freeing node[PortE4][ghb] Connection to heartbeatd closed
MESSAGE Apr 07 13:37:03Z [4151619328]: cleaning up ssl locks...
MESSAGE Apr 07 13:37:03Z [4151619328]: ssl thread cleanup done
MESSAGE Apr 07 13:37:03Z [4151619328]: garner: Shutdown normally


SG230_WP01_SFOS 19.5.1 MR-1-Build278 HA-Primary# tail -n 50 /log/reportdb.log
5827 2023-04-05 05:18:19.502 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
32278 2023-04-05 09:00:46.797 GMTERROR: canceling statement due to user request
32278 2023-04-05 09:00:46.797 GMTSTATEMENT: INSERT INTO available_webusgdatav9_1679007302 (username,hostipv6,domain,content,category,url,bytes,application,categorytype,usergroup,ruleid,msgid,activityname,conn_id,upload_filename,download_filename,upload_filetype,download_filetype,classification,app_id,is_cloud_application,app_parent,override_token,override_name,override_authorizer) VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25)
5827 2023-04-05 17:13:06.071 GMTLOG: checkpoints are occurring too frequently (6 seconds apart)
5827 2023-04-05 17:13:06.071 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-05 17:18:05.236 GMTLOG: checkpoints are occurring too frequently (3 seconds apart)
5827 2023-04-05 17:18:05.236 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-05 17:18:09.176 GMTLOG: checkpoints are occurring too frequently (4 seconds apart)
5827 2023-04-05 17:18:09.176 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-05 17:18:12.317 GMTLOG: checkpoints are occurring too frequently (3 seconds apart)
5827 2023-04-05 17:18:12.317 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-05 21:18:04.735 GMTLOG: checkpoints are occurring too frequently (3 seconds apart)
5827 2023-04-05 21:18:04.735 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 05:13:09.913 GMTLOG: checkpoints are occurring too frequently (4 seconds apart)
5827 2023-04-06 05:13:09.913 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 05:18:07.135 GMTLOG: checkpoints are occurring too frequently (4 seconds apart)
5827 2023-04-06 05:18:07.135 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 05:18:09.459 GMTLOG: checkpoints are occurring too frequently (2 seconds apart)
5827 2023-04-06 05:18:09.459 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 05:18:13.545 GMTLOG: checkpoints are occurring too frequently (4 seconds apart)
5827 2023-04-06 05:18:13.545 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 05:18:15.651 GMTLOG: checkpoints are occurring too frequently (2 seconds apart)
5827 2023-04-06 05:18:15.651 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 05:18:17.058 GMTLOG: checkpoints are occurring too frequently (2 seconds apart)
5827 2023-04-06 05:18:17.058 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 05:18:19.566 GMTLOG: checkpoints are occurring too frequently (2 seconds apart)
5827 2023-04-06 05:18:19.566 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 05:18:21.115 GMTLOG: checkpoints are occurring too frequently (2 seconds apart)
5827 2023-04-06 05:18:21.115 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 17:13:05.876 GMTLOG: checkpoints are occurring too frequently (13 seconds apart)
5827 2023-04-06 17:13:05.876 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 17:18:04.743 GMTLOG: checkpoints are occurring too frequently (3 seconds apart)
5827 2023-04-06 17:18:04.743 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 17:18:08.465 GMTLOG: checkpoints are occurring too frequently (4 seconds apart)
5827 2023-04-06 17:18:08.465 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 17:18:10.647 GMTLOG: checkpoints are occurring too frequently (2 seconds apart)
5827 2023-04-06 17:18:10.647 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
5827 2023-04-06 21:18:05.603 GMTLOG: checkpoints are occurring too frequently (3 seconds apart)
5827 2023-04-06 21:18:05.603 GMTHINT: Consider increasing the configuration parameter "checkpoint_segments".
23351 2023-04-06 23:14:26.752 GMTLOG: unexpected EOF on client connection with an open transaction
26934 2023-04-06 23:14:26.752 GMTLOG: unexpected EOF on client connection with an open transaction
27441 2023-04-06 23:14:26.753 GMTLOG: unexpected EOF on client connection with an open transaction
27368 2023-04-06 23:14:26.754 GMTLOG: unexpected EOF on client connection with an open transaction
24723 2023-04-06 23:14:26.759 GMTLOG: unexpected EOF on client connection with an open transaction
27365 2023-04-06 23:14:26.760 GMTLOG: unexpected EOF on client connection with an open transaction
27356 2023-04-06 23:14:26.760 GMTLOG: unexpected EOF on client connection with an open transaction
27513 2023-04-06 23:14:26.761 GMTLOG: unexpected EOF on client connection with an open transaction
29244 2023-04-06 23:14:26.761 GMTLOG: could not receive data from client: Connection reset by peer
5827 2023-04-07 13:01:06.521 GMTLOG: checkpoints are occurring too frequently (11 seconds apart)

console> system diagnostics show disk
Partition Utilization(%)
configuration 12%
content 6%
report 43%

This seems like it could be the issue: the used disk space is only about 10 MB below the permitted limit:

SFEVENTSFTS: Apr 07 13:37:03Z:sfeventsfts_set_permitted_diskspace: permitted_diskspace 12999417856
SFEVENTSFTS: Apr 07 13:37:03Z:sfeventsfts_set_permitted_diskspace: used diskspace 12989100048
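A quick sanity check on those two numbers (just the arithmetic; I'm assuming garner refuses to run once used diskspace reaches permitted_diskspace, which the log doesn't state explicitly):

```shell
# Values copied from the sfeventsfts lines above, in bytes
permitted=12999417856
used=12989100048

# Remaining headroom and utilization (integer shell arithmetic)
echo "free: $(( (permitted - used) / 1024 / 1024 )) MiB"   # ~9 MiB left
echo "used: $(( used * 100 / permitted ))%"                # ~99%
```

So the event store is effectively full, even though /var itself is only at 43%, which would fit with garner bailing out right after "Daemon initialization complete".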

Thanks


