I was recently contacted by a member of the team I am working with at my account regarding our Dynatrace Managed setup.
Currently we have a single-node cluster hosted on-premise in one of their data centers. It is monitoring multiple tier 2 applications that are also hosted internally across a few of their data centers.
They asked me to put together some information regarding the three areas below, as they are soon going to be reviewing the architecture of the current setup.
1. High availability setup for Dynatrace (3 nodes)
2. Setting up Dynatrace Managed in Azure
3. Configuring accessibility to Dynatrace Managed outside of the internal network (publicly accessible)
For the first area, I know it is pretty easy to set up the two other nodes in Dynatrace, but I don't have any experience with moving the cluster from its current on-premise location to an Azure environment. I did a bit of hunting around in the forums, and am wondering if anyone has some insight on this?
Can we move the Dynatrace Managed server to Azure? Will the tenant ID change? Will this require us to reinstall the OneAgents and security gateways for the currently onboarded applications and data centers, or is there a way to configure them to point to the new cluster instead?
If anyone has any information they can share regarding the points and questions above, or if you have insight on anything I may not have raised in this post, please comment.
Answer by Hayden M. ·
All of what Patrick said above is valid and very good info. As you can see, I helped out and answered him on the question he linked a bit. Ping me about this internally if you need more help - I did a lot of the same work with my current client and have some good links and other info.
Answer by Patrick H. ·
I have migrated a cluster from a different cloud provider to Azure. You can migrate the cluster without generating a new tenant, so there is no need to reinstall the OneAgents. Here is the question I posted when I did the migration.
Basically, you add the new cluster nodes in the Azure environment, let them sync up with the old on-premise node, and then remove the old node. If you do it like this, you lose the transaction storage, so some detail data will be missing. To prevent that, you can keep the old node running for as long as transaction storage retention is configured (the default is 10 days) and then remove it.
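If the cluster's address changes after the move (for example, a new DNS name in front of the Azure nodes), existing OneAgents can be repointed rather than reinstalled. A minimal sketch using the `oneagentctl` tool that ships with the OneAgent - the endpoint URL and install path here are assumptions, so check them against your own hosts:

```shell
# Hypothetical new cluster endpoint - replace with your Azure cluster address.
NEW_SERVER="https://my-azure-cluster.example.com:443"

# oneagentctl lives in the OneAgent install directory; this is the
# default Linux location, adjust if you installed elsewhere.
ONEAGENTCTL=/opt/dynatrace/oneagent/agent/tools/oneagentctl

if [ -x "$ONEAGENTCTL" ]; then
  # Point the agent at the new cluster and restart it, then print
  # the configured endpoint to verify the change took effect.
  "$ONEAGENTCTL" --set-server="$NEW_SERVER" --restart-service
  "$ONEAGENTCTL" --get-server
else
  echo "oneagentctl not found - run this on a monitored host"
fi
```

You would run this (as root) on each monitored host, which is usually easy to script through whatever config management you already use.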
For me, there was some fiddling with config files involved, as the cluster nodes could not reach each other on the internal IP addresses they were assigned. The solution for me was to set up a simple VPN and use those addresses while the cluster migration was in progress.
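As a rough illustration of that workaround (not what I actually ran - a hedged sketch using WireGuard, with all keys, hostnames, and addresses hypothetical), a tunnel between the on-premise node and an Azure node could look like:

```ini
# /etc/wireguard/wg0.conf on the on-premise node (all values hypothetical)
[Interface]
Address = 10.10.0.1/24
PrivateKey = <on-prem-node-private-key>
ListenPort = 51820

[Peer]
# The new Azure cluster node
PublicKey = <azure-node-public-key>
Endpoint = azure-node.example.com:51820
AllowedIPs = 10.10.0.2/32
```

The nodes then talk to each other over the 10.10.0.x tunnel addresses for the duration of the migration, and the VPN can be torn down once the old node is removed.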
If you have any further questions, just comment :)