I am using dynaTrace 4.0. I found that synchronization time in version 4 is automatically monitored and displayed. I am monitoring a single Java server.
With the sensors I initially defined (the “Collections” sensor pack not placed) I am getting a 200 ms average Transaction Response Time from my server. The “Response Time Hotspots” dashlet shows 5.5% in Synchronization (on average).
Once I place the “Collections” sensor pack and execute the same tests, I get a 2000 ms average response time, and the “Response Time Hotspots” dashlet shows 58.7% in Synchronization. “Method Breakdown by Synchronization Time” points to the <init> methods of java.util.ArrayList, java.util.HashSet and java.util.HashMap as the biggest contributors.
My questions here are:
What causes this huge difference in all figures?
Are the results that I’m getting with the “Collections” sensor pack placed distorted and how?
Is there a synchronization problem present?
Thanks in advance.
Answer by Andreas G.
Can you tell me the average size of your PurePaths with the Collections Sensor Pack placed vs. not placed?
Turning the Collections Sensor Pack on usually traces a lot of activity, as applications tend to be very heavy on Collection/Map/... usage. That's also why it is not placed by default and should only be turned on for deep-dive diagnostic use cases. The good news with dynaTrace 4 is that you have Automatic Sensors, so in case you really do have a problem with your Collections you will actually see it in the Response Time and Method Hotspots dashlets: those methods would be picked up if they are called too frequently, take too long to execute, and impact overall response time.
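To illustrate why collection usage is so "heavy" (a hypothetical, simplified sketch, not dynaTrace code): even a modest request-handling loop allocates collections constantly, and with the Collections Sensor Pack placed every one of those constructor calls triggers the agent's hook. The `newList`/`newMap` helpers below are stand-ins for instrumented JDK constructors, and the counter shows how quickly the hook invocations add up.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AllocationCountDemo {
    static long allocations = 0;

    // Stand-ins for JDK constructors we pretend are instrumented:
    // each call simulates one invocation of the sensor hook.
    static <T> List<T> newList() { allocations++; return new ArrayList<>(); }
    static <K, V> Map<K, V> newMap() { allocations++; return new HashMap<>(); }

    public static void main(String[] args) {
        // Simulate handling 1,000 requests, each building a few collections.
        for (int req = 0; req < 1_000; req++) {
            Map<String, List<Integer>> byKey = newMap();
            for (int i = 0; i < 10; i++) {
                List<Integer> values = newList();
                values.add(i);
                byKey.put("k" + i, values);
            }
        }
        // Every one of these constructor calls would trigger the sensor.
        System.out.println("collection constructors invoked: " + allocations);
        // prints: collection constructors invoked: 11000
    }
}
```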
Answer by Iskren N.
Thanks for the quick response!
The PurePath size is the same: 217. A structural comparison of PurePaths from the two executions shows no differences, so it seems the overhead is not coming from tracing more methods. What are we looking for with this comparison anyway?
So if we have no sensor on a method and it has not received an auto-sensor, how is its time measured? Do we get the distinction between CPU, Suspension, Wait and I/O for it? If not, where is its time accounted – in the CPU or Suspension time of the caller method?
Is there a way to measure the time spent (an average, or % of CPU, sync, etc.) in these classes/methods without placing sensors on them and introducing that much overhead?
Answer by Christian S.
Turning on the Collections Sensor Pack places Memory Sensor Rules on all Collection classes. These are not Method Sensor Rules, which is why you don't see a difference in the PurePaths.
With these Memory Rules placed you can do a Selective Memory Dump, which gives you an overview of all live collection objects and their allocation points. You could also turn on allocations in the PurePaths by changing the Collections Sensor properties and enabling 'allocations on PurePath'; however, this would introduce even more overhead.
The synchronization overhead you're seeing is a result of collecting the data for the selective dump, which is done by instrumenting the constructors of these collection classes. This is also the reason for the much higher response times. I guess you have a lot of threads working with collections concurrently?
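A rough illustration of that mechanism, under the assumption that the agent's bookkeeping for the selective dump serializes on shared state (the `REGISTRY_LOCK` and `recordAllocation` hook below are hypothetical stand-ins, not the actual agent implementation): many threads allocating collections concurrently then contend on that lock, and the contention shows up as synchronization time on the allocation path.

```java
import java.util.ArrayList;

public class InstrumentationOverheadDemo {

    // Hypothetical stand-in for the agent's internal bookkeeping:
    // every "instrumented" constructor must acquire this shared lock.
    private static final Object REGISTRY_LOCK = new Object();
    private static long liveInstances = 0;

    // Simulated sensor hook called from each instrumented constructor.
    static void recordAllocation() {
        synchronized (REGISTRY_LOCK) {
            liveInstances++;
        }
    }

    // Runs `threads` workers, each allocating `allocs` collections;
    // returns elapsed wall-clock time in milliseconds.
    static long runWorkers(boolean instrumented, int threads, int allocs) {
        Thread[] workers = new Thread[threads];
        long start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < allocs; j++) {
                    ArrayList<Integer> list = new ArrayList<>(); // hot allocation path
                    if (instrumented) recordAllocation();        // one lock acquisition per allocation
                    list.add(j);
                }
            });
            workers[i].start();
        }
        try {
            for (Thread t : workers) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("plain:  " + runWorkers(false, 8, 500_000) + " ms");
        System.out.println("hooked: " + runWorkers(true, 8, 500_000)
                + " ms (threads contend on the shared registry lock)");
    }
}
```

The absolute numbers depend on the machine, but the hooked run pays a lock acquisition on every constructor call across all threads, which is the same shape of overhead as the jump from 200 ms to 2000 ms described above.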
So I'd turn off the Collections Sensor Pack, unless you're interested in having a live overview of all existing collection instances.
What is your use case, or what are you trying to find out related to collections?
Answer by Iskren N.
Some details on the use case:
First, the case described above actually happened in reverse: I first got some results with Collections placed (during my first steps with the tool, obviously not right), then found that removing the sensor pack reduces the overhead a lot. What I am trying to find out now is whether I can use the results from my first tests in any way, or should totally discard them. If the results can be used somehow and they are showing a problematic place, emphasized by the huge overhead, then I need a way to measure its real impact. This was the reason for my questions above:
“So if we have no sensor on a method and it has not received an auto-sensor, how is its time measured? Do we get the distinction between CPU, Suspension, Wait and I/O for it? If not, where is its time accounted – in the CPU or Suspension time of the caller method?
Is there a way to measure the time spent (an average, or % of CPU, sync, etc.) in these classes/methods without placing sensors on them and introducing that much overhead?”
I am also interested in this in general – how are such situations handled, traced and reported? Could you please share some details? I was not able to find much on “synchronization” in the documentation.
In general, the Collections were not a point of interest unless they actually impact performance. In the case described above I really want to know: do they, and by how much?
Answer by Christian S.
First of all: you should _only_ turn on the memory sensors if you're really searching for a _memory-related_ problem! Otherwise they will impact your runtime performance (as you figured out yourself).
Concerning your questions: if there's no instrumented sensor on a method and it's also not caught by auto-sensors, then of course you won't see it; in this case the method's times are added to its parent. However, auto-sensors were built to figure out which parts of your application are slow, so if you have a performance problem with specific methods, chances are high that they are caught by auto-sensors.
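A minimal sketch of that attribution rule, with hand-rolled timing as a stand-in for a method sensor (the method names and the millisecond figures are illustrative, not dynaTrace internals): the uninstrumented child has no measurement of its own, so its cost simply folds into the instrumented parent's time.

```java
public class TimeAttributionDemo {

    // This method is NOT instrumented: it has no timing of its own.
    static void child() {
        busyWork(50); // 50 ms of work that is invisible as a separate node
    }

    // This method IS "instrumented" (manual timing stands in for a sensor).
    // Its measured time includes the uninstrumented child's 50 ms.
    static long parentTimedMs() {
        long start = System.nanoTime();
        busyWork(10); // parent's own work
        child();      // uninstrumented call: its cost folds into the parent
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Spin-waits for roughly the given number of milliseconds.
    static void busyWork(long ms) {
        long end = System.nanoTime() + ms * 1_000_000;
        while (System.nanoTime() < end) { /* spin */ }
    }

    public static void main(String[] args) {
        System.out.println("parent measured: ~" + parentTimedMs()
                + " ms (its own ~10 ms plus the child's ~50 ms)");
    }
}
```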
If you want to figure out whether collections are a runtime performance problem in your environment, you can either rely on auto-sensors or instrument them manually with Method Sensor Rules.
In general, synchronization times are handled like other times (CPU, exec, ...). You'll see them in the PurePaths as well as in aggregated views such as the Response Time Hotspots and Method Hotspots dashlets.
If you just want to find performance hotspots in your application, I'd suggest relying on auto-sensors and working with the Response Time Hotspots and Method Hotspots dashlets.