I'm trying to capture UI-driven tests with dynaTrace and compare test runs. I'm seeing a couple of things that don't make any sense to me and hope somebody can point out what I'm doing wrong.
I'm making REST calls to set up the test metadata and start session recording. I'm using user-defined timers to have the tests tell dynaTrace which test is running, and at the end of the test run I make the call to stop recording.
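For context, the start/stop recording calls are along these lines. This is a minimal sketch in Python with requests, assuming the classic /rest/management/profiles/&lt;profile&gt;/startrecording and .../stoprecording endpoints (exact paths and parameters vary by dynaTrace version); the server URL, profile name, and credentials are placeholders:

```python
import requests

SERVER = "https://dynatrace-server:8021"  # placeholder: your dynaTrace server
PROFILE = "MyProfile"                     # placeholder: your system profile
AUTH = ("admin", "password")              # placeholder credentials

def start_recording(session_name):
    # Start recording a stored session for this test run; the session
    # later shows up under Session Storage with this presentable name.
    r = requests.post(
        f"{SERVER}/rest/management/profiles/{PROFILE}/startrecording",
        data={"presentableName": session_name},
        auth=AUTH,
        verify=False,  # self-signed certs are common on test servers
    )
    r.raise_for_status()

def stop_recording():
    # Stop recording at the end of the test run.
    r = requests.post(
        f"{SERVER}/rest/management/profiles/{PROFILE}/stoprecording",
        auth=AUTH,
        verify=False,
    )
    r.raise_for_status()
```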
Over in dynaTrace, under Test Automation, I see the tests under UI-driven tests. When I click on a test, I see the test measures at the bottom of the screen, and I appear to be getting multiple rows for the same test measure. For example, I have 5 rows for the measure 'Number of resources - JavaScript'. When I click on some of the rows, the chart displays a single data point; on other rows, I get multiple data points. When I highlight all of the rows for the same measure, all of the data points are plotted.
Here's my question: is it normal to see these broken out into multiple rows? How is dynaTrace separating them? Is it one row per test run, or is there some other criterion, such as the test measure value, causing this? I've attached a few screenshots to show what I'm talking about.
(Attachment: testmeasures_duplicated.jpg)

My next question has to do with comparing test runs. I'm recording a session per test run and see those sessions under the Session Storage item. When I choose two test measure data points and choose to compare them, the Test Comparison dashboard opens with a bunch of tabs. All of these are blank for me regardless of the test, test measure, or data points I choose. What am I missing?
Thanks in advance for any help or suggestions you can provide.
Chris
Answer by Andreas G.
Hi. Can you look at the Agent column in the table that shows all the metrics? The column might not be visible by default, but you can enable it. Does it show different names? If so, is it possible you're running the same test from multiple test machines?
I'm running the tests from the same test machine each time. I added the Agent column, and what I see is that the agent is different for each run: Web[8], Web[7], Web[5], Web[6], Web[2].
Hmmm. So where is that index coming from, and why is it changing?
That's an automatic index we apply when an agent with the same name is already connected. This default behavior lets us see each individual agent.
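Illustratively (this is not dynaTrace code, just a sketch of the naming behavior described above): if the plain name is taken, the next free indexed name is used.

```python
def indexed_agent_name(base, connected_names):
    # If no agent with this name is connected, use the plain name.
    if base not in connected_names:
        return base
    # Otherwise append the next free index: Web -> Web[2], Web[3], ...
    n = 2
    while f"{base}[{n}]" in connected_names:
        n += 1
    return f"{base}[{n}]"

# Five runs where the previous agent never disconnects pile up as
# Web, Web[2], Web[3], Web[4], Web[5], ...
names = set()
for _ in range(5):
    names.add(indexed_agent_name("Web", names))
print(sorted(names))  # ['Web', 'Web[2]', 'Web[3]', 'Web[4]', 'Web[5]']
```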
In your case that means your browsers probably don't close properly after the test. Could that be the case? Do you still see browser processes lingering around?
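If the tests are driven with Selenium, one way to rule that out is to force the browser shut down in a teardown step. A sketch, assuming Selenium WebDriver:

```python
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("https://example.com")  # placeholder for the real test steps
finally:
    # quit() closes every window and ends the browser process, so the
    # agent disconnects cleanly and the next run can reuse the plain
    # "Web" agent name instead of getting a Web[n] index.
    driver.quit()
```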