10 Jul 2020 06:00 PM - last edited on 16 Dec 2021 03:20 PM by MaciejNeumann
Hi Team,
We have installed the Dynatrace OneAgent on a Solaris NodeManager server. The host goes into an unmonitored state after about 15 minutes and only returns to a monitored state roughly a day later. This happened on 6 of our 10 Solaris hosts; we installed on 3 hosts per day and saw the same issue on both days. We initially thought the agent was unable to communicate with the ActiveGate, but the logs show an established connection. What could be making the host drop into the unmonitored state and then recover automatically after a day?
Please suggest what the root cause of this issue might be.
Thanks & Regards,
Rosy
10 Jul 2020 09:20 PM
Interesting. Can you confirm that a firewall isn't blocking it, and that no security software is interfering?
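To rule out a firewall quickly, you can probe the ActiveGate port from the affected host. A minimal sketch, assuming the default OneAgent-to-ActiveGate port 9999; `activegate.example.com` is a placeholder for your ActiveGate host, and `timeout` may need a substitute on older Solaris releases:

```shell
# Probe TCP reachability using bash's /dev/tcp (no nc dependency needed).
check_port() {
  # Returns 0 if a TCP connection to host $1, port $2 succeeds within 5s.
  timeout 5 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

AG_HOST="activegate.example.com"  # assumption: replace with your ActiveGate host
AG_PORT=9999                      # default ActiveGate communication port

if check_port "$AG_HOST" "$AG_PORT"; then
  echo "$AG_HOST:$AG_PORT is reachable"
else
  echo "$AG_HOST:$AG_PORT is unreachable - check firewall and security software"
fi
```

If the probe succeeds but the host still flips to unmonitored, the firewall is likely not the cause, which matches what the logs in this thread showed.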
13 Jul 2020 01:28 PM
Thank you for providing the log. I would open a support ticket. Are you using a demo version of Dynatrace? If it's a trial, can you check that it is still valid? If you have purchased Dynatrace, do you have enough licensed host units/hours? Have you tried uninstalling and reinstalling the OneAgent? What version of the OneAgent are you using, and what version is your cluster on?
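To answer the version question, the OneAgent ships a command-line tool. A minimal sketch, assuming the default install prefix `/opt/dynatrace/oneagent` (an assumption; adjust `CTL` if you installed to a custom path):

```shell
# Assumption: default OneAgent install prefix; change CTL if your
# installation uses a custom directory.
CTL="/opt/dynatrace/oneagent/agent/tools/oneagentctl"

if [ -x "$CTL" ]; then
  "$CTL" --version    # prints the installed OneAgent version
else
  echo "oneagentctl not found at $CTL - check your install path"
fi
```

Include the reported version, along with your cluster version, in the support ticket.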
13 Jul 2020 02:04 PM
Please create a support ticket for this situation; it does not match the designed behavior.
Thanks,
Wolfgang
08 Dec 2020 12:17 PM
Hi,
Have you been able to resolve this issue?
Br,
Sorin
11 Dec 2020 09:53 AM
Do you have enough host units (HU)? If not, monitoring will be disabled automatically.
11 Dec 2020 09:56 AM
Yes, we have enough HU licenses. The problem was how Dynatrace needs to be instrumented when deployed on a WebLogic instance administered through NodeManager and running on Solaris. The issue has been fixed.
14 Nov 2022 12:39 PM
Hello @sorin_zaharov @ChadTurner. We are facing the same issue on two of our database hosts running RHEL on VMware, where we see the same behaviour as in the attached log.txt.
Can you let me know what the resolution steps were?
14 Nov 2022 12:57 PM
Hi! Another problem we had was due to the ports used by the watchdog. Some components configured to run on those servers were using the same ports. You can either configure the OneAgent to use other ports, or change the port configuration of the conflicting application.
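To spot such a conflict, check which process is listening on the suspect port. A minimal sketch; port 50000 here is only an example, not a confirmed watchdog default, so substitute the port from your own OneAgent logs or settings:

```shell
# PORT is an assumed example value - take the real one from your
# OneAgent watchdog configuration or logs.
PORT=50000

# On Linux (e.g. the RHEL hosts mentioned above), ss shows the owning process:
ss -ltnp 2>/dev/null | grep ":$PORT " || echo "port $PORT is free"

# On Solaris, ss is unavailable; netstat shows listeners instead:
# netstat -an | grep "\.$PORT "
```

If the port is taken by another application, either move that application or reconfigure the OneAgent's watchdog port range (recent OneAgent versions expose this via `oneagentctl`; check the documentation for your version, as the exact option name may differ).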