
Dynatrace Operator: OneAgent image for cloudNativeFullStack

pahofmann
DynaMight Guru

We are currently updating a customer to the latest operator and want to use the new cloudNativeFullStack mode.

 

The github readme describes it as:

cloudNativeFullStack is a combination of applicationMonitoring with CSI driver and hostMonitoring

However, the example for the cloudNativeFullStack configuration does not list an option to specify the image for the OneAgent.

 

Do I have to specify the hostMonitoring part separately when using cloudNativeFullStack, or will it be deployed automatically? If the latter, how can I configure the OneAgent image to be used?
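
For reference, this is roughly what the cloudNativeFullStack section of the DynaKube custom resource looks like, a minimal sketch based on the operator's v1beta1 examples (the apiUrl value is a placeholder):

apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  oneAgent:
    # Note: the upstream example shows no image field here
    cloudNativeFullStack: {}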

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net
20 REPLIES

dannemca
DynaMight Guru

As far as I've noticed, it tries to get the latest version of the OneAgent provided by your tenant.

I am not a specialist on the operator configuration (I leave this to my skilled OpenShift team hehe), so I will keep my eye on this thread to see if there is a way to "force" the operator to load a specific image instead of the latest available.

 

Site Reliability Engineer @ Kyndryl

At least with the "old" operator (dynatrace-oneagent-operator), the image version and OneAgent version are decoupled, though. The latest (or specified) OneAgent version is loaded from the tenant by the image.

 

But the image can be configured, e.g. with the hostMonitoring setup:

# Optional: to use a custom OneAgent Docker image. Defaults to the image from
# the Dynatrace cluster
image: ""
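
For context, that key sits here in the DynaKube spec (structure assumed from the operator's hostMonitoring example):

spec:
  oneAgent:
    hostMonitoring:
      # Optional: custom OneAgent image; defaults to the image
      # provided by the Dynatrace cluster
      image: ""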

 

From that comment, I'd assume the OneAgent image is now loaded from the registry included in the cluster? Can the one from e.g. the Red Hat catalog still be used?

 

The image needs to be synced to the customer's Quay instance.

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

AFAIK this has been asked multiple times on the forum; I don't think this is possible.

I hope @mreider can bring some light here if this is possible or planned.

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner

Haven't really found anything while searching. 

 

As the image parameter is still there for classicFullStack and hostMonitoring, it would be strange if the same weren't true for the hostMonitoring part of cloudNativeFullStack.

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

techean
Dynatrace Champion

Have you tried customizing the YAML resource for cloudNativeFullStack and specifying the image? It won't execute and will throw an error if it's not supported. If it's not specified in the CR, it will take the reference from the default kubernetes.yaml file.
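
For example (a hypothetical sketch; the registry value is a placeholder), you could add the key to the CR and see whether the operator accepts it:

spec:
  oneAgent:
    cloudNativeFullStack:
      # Hypothetical: if this key is unsupported, the operator
      # should reject it on apply
      image: "registry.example.com/dynatrace/oneagent:latest"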

KG

As Matthew mentioned above, setting the image is currently not supported.

So the question is: what is the default image used for host monitoring when using cloudNativeFullStack with operator 0.6.0?

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

techean
Dynatrace Champion

Yes. The default resource file, rather than the custom resource, should have a reference to the public registry, which would pull the latest image available. I don't have an answer as to which specific version it would be; the registry resources would need to be verified.

KG

mreider
Dynatrace Advisor

Hi @pahofmann and others. First allow me to explain why this is a problem, and then give you the good news about changes coming in the short-term.

 

The reason we "removed" the ability to specify an image for Cloud-Native Full-Stack is that there is no image to offer (yet). The Classic Full-Stack image includes code module binaries, as well as the ability to auto-detect whether a container is using musl or glibc. This is not possible in Cloud-Native Full-Stack, since code modules are injected using a cloud-native webhook rather than classic's fancy Linux tricks in the OneAgent (host agent). Yes, we have code modules available for musl and glibc, but there is no combined image. Offering these would mean forcing your customer to annotate pods with musl or glibc, which is an automation anti-pattern in our opinion.

 

Ok, so that's the reason why we didn't release the feature at the same time as Cloud-Native Full-Stack. Now for the good news.

 

In the next month or so (fingers crossed🤞) we are going to offer the "image" key/value pair for Cloud-Native Full-Stack. Instead of waiting for a combined musl / glibc image, though, we are going to document and support a tested version of a Dockerfile that shows how to pull the right assets, again from the cluster, into an image that you can store in your own registry. Ultimately, this is a preferable solution for most customers anyhow, as storing an image in your own registry allows you to use whatever tools you've built around that registry (CI / CD / scanning / etc.).

 

Longer term, we have a plan to offer this, and other assets, in a new-and-improved registry at the Dynatrace cluster. The current registry is limited and has some performance issues, as it builds images dynamically, and it supports pre-configured images (API / tokens), which is a 👍 feature, but unimportant for automating deploys across clusters. The new-and-improved registry will be available later this year (more fingers crossed 🤞) and it will offer host agents, code modules, ActiveGates, and any other components as we continue to offer Cloud-Native containerized goodness into the future. We also plan to sign these images using something like cosign - but one thing at a time.

 

Hope that helps clarify. I am trying to use the community forum more these days, so I'll announce this stuff here as we get closer.

 

Best,

 

M

 

Kubernetes beatings will continue until morale improves

Hey @mreider,

that's a good outlook to the future!

 

We need to migrate the operator to a newer version in the next two weeks, however, so waiting is unfortunately not an option. We'd prefer using cloudNativeFullStack, as this would mitigate issues with OneAgent pods not being up before applications start on cluster restarts.

 

It's still unclear to me how cloudNativeFullStack would work right now then. Will it work properly with Operator Version 0.6.0? 

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

mreider
Dynatrace Advisor

Operator 0.6.0 with Cloud-Native Full-Stack will pull the OneAgent and code module binaries from the Dynatrace cluster and mount them into pods using the CSI pod. Operator 0.7.0 will offer the new image key / value pair as an alternative.

Kubernetes beatings will continue until morale improves

Okay, that is as I expected, but it still leaves my initial question from this post open, as that covers only the applicationMonitoring part of cloudNativeFullStack.

 

The description is: 

cloudNativeFullStack is a combination of applicationMonitoring with CSI driver and hostMonitoring

 

Do I have to specify the hostMonitoring part separately when using cloudNativeFullStack, or will it be deployed automatically? If the latter, how can I configure the OneAgent image to be used for the hostMonitoring part?

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

mreider
Dynatrace Advisor

cloudNativeFullStack is a combination. If you use this option you will get both a host agent and a code module automatically.

 

The image key / value is unavailable for cloudNativeFullStack - both for the host agent and the code module. I guess we could have offered one and not the other, but this is an awkward half-solution, both for our engineering team (having to write lots of conditional logic that would be thrown away later) and for you, the user. We will offer the full solution - with the image key / value for both the host agent and code module - in 0.7.0 in less than a month (🤞).
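
Purely as a sketch of that plan (these field names are assumptions, not a released API), the 0.7.0 keys could look something like this if they mirror the hostMonitoring layout:

spec:
  oneAgent:
    cloudNativeFullStack:
      # Assumption: custom image for the host agent DaemonSet
      image: "registry.example.com/dynatrace/oneagent:1.2xx"
      # Assumption: custom image for the injected code modules
      codeModulesImage: "registry.example.com/dynatrace/codemodules:1.2xx"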

Kubernetes beatings will continue until morale improves

Okay, then one last question: if the image is not configurable for the host agent, which one will be used? Will it try to pull from Docker Hub then?

 

 

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

That is the same for the oneagent-operator, right? The OneAgent pod pulls the latest/specified OneAgent binary from the cluster.

 

The documentation for deployment options also lists a OneAgent DaemonSet for cloudNativeFullStack to collect the host metrics. 

So there still needs to be an image for that pod which then pulls the OneAgent binary from the cluster, right?

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

mreider
Dynatrace Advisor

OneAgent Operator pulls an image from Docker Hub / RHCC.
Dynatrace Operator pulls a host image from the cluster registry.
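
To illustrate (an assumption based on the tenant API host; the environment ID is a placeholder), the default host image reference then points at the tenant itself rather than a public registry:

# Assumed default when no image is specified
image: ENVIRONMENTID.live.dynatrace.com/linux/oneagent:latest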

 

Note: my previous response was illogical, so I deleted it to tighten the thread. You might want to delete your response as well, since now there will be no context 😎.

Kubernetes beatings will continue until morale improves

 

Okay. That makes sense then and should, with a firewall adjustment, hopefully work out. 

I assume the port for the registry is the default (5000)?

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

If it helps, for the operator: you can modify the kubernetes.yaml and kubernetes-csi.yaml used in the installation to point to an internal repo where you mirror the operator image.

 

There are 2-3 lines that point to: docker.io/dynatrace/dynatrace-operator:v0.x.x

Just do a simple replace to your location and follow the standard install approach.
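
For instance, the container image line in kubernetes.yaml would change along these lines (the internal registry name is a placeholder):

containers:
  - name: dynatrace-operator
    # before: docker.io/dynatrace/dynatrace-operator:v0.x.x
    image: registry.internal.example.com/dynatrace/dynatrace-operator:v0.x.x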

 

You can push the image using the following approach:

Store Dynatrace images in private registries in Kubernetes/OpenShift | Dynatrace Docs

 

You will just need to make sure that the k8s cluster where you are deploying has access to your Dynatrace ActiveGate (or cluster if you're not using an AG) so that it can get the OneAgent image.

 

If you can't or don't deploy an AG on the k8s cluster, I find the best way to get around this issue with the OneAgent image is to revert to the legacy VM approach, where you have a VM-based ActiveGate that you can connect to for the initial pull-down (a single connection point across multiple clusters); all you need is access to port 9999.

For the operator, the image can be specified, so there is no problem.

 

The issue was with the OneAgent image, but Matthew clarified that above.

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net


@pahofmann wrote:

[...] We'd prefer using cloudNativeFullStack, as this would mitigate issues with oneagent pods not being up before applications start on cluster restarts. 

@pahofmann you mentioned here something that we encountered with a customer, and it got a further deployment stuck for quite some time (it causes pods to restart); it appears to be quite crucial information.
I did not find anything related on the forum, did you?

Kind regards, Frans Stekelenburg                 Certified Dynatrace Associate | measure.works, Dynatrace Partner

What was your issue? The application pods had to wait a long time for the OneAgent pods to be ready in cloudNativeFullStack?

My customers are all on classicFullStack currently; the first will switch to cloudNativeFullStack soon.

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net
