Question by Chris C. · Feb 27, 2014 at 02:53 PM · continuous delivery

Test Measures and Comparing Test Runs

I'm trying to capture ui-driven tests with dynaTrace and compare test runs.  I'm seeing a couple of things that don't make any sense to me and hope somebody can help point out what I'm doing wrong.

I'm making the REST calls to set up the test metadata and start recording.  I'm using user-defined timers to have the tests tell dynaTrace which test is running.  And then at the end of the test run, I'm making the call to stop recording.
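In case it's relevant, here's roughly what my recording calls look like (a minimal sketch; the server URL, credentials, and profile name are placeholders, and the exact startrecording/stoprecording parameters should be double-checked against the REST API documentation for your dynaTrace version):

```python
# Sketch of the recording lifecycle around a test run, using the AppMon
# session-recording REST endpoints. All connection details are placeholders.
import requests

SERVER = "http://dynatrace-server:8020"   # assumed dynaTrace server REST port
PROFILE = "MySystemProfile"               # hypothetical system profile name
AUTH = ("admin", "password")              # placeholder credentials

def start_recording(test_run_name):
    # Start session recording for the profile; the response body contains
    # the name of the session being recorded.
    resp = requests.post(
        f"{SERVER}/rest/management/profiles/{PROFILE}/startrecording",
        data={"presentableName": test_run_name, "recordingOption": "all"},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.text

def stop_recording():
    # Stop recording; the stored session then shows up under Session Storage.
    resp = requests.post(
        f"{SERVER}/rest/management/profiles/{PROFILE}/stoprecording",
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.text

# The separate test-metadata call is omitted here since its endpoint is
# version-specific; the user-defined timers fire from inside the tests.
```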

Over in dynaTrace under Test Automation I see the tests under ui-driven tests.  When I click on a test, I see the test measures at the bottom of the screen.  It looks like I'm seeing multiple rows for what appears to be the same test measure.  For example, I have 5 rows for measure 'Number of resources - JavaScript'.  When I click on some of the rows, the chart displays a single datapoint.  When I click on other rows, I get multiple data points.  When I highlight all of the rows of the same measure, I see all of the data points plotted.

Here's my question: is this normal, to see these broken out into multiple rows?  How is dynaTrace separating them?  Is it one per test run?  Or is there some other criterion, such as the test measure value, that's causing this?  I've attached a few screenshots to show what I'm talking about.

 

(Attached: testmeasures_duplicated.jpg)

My next question has to do with comparing test runs.  I'm recording a session per test run and see those sessions under the Session Storage item.  When I choose two test measure data points and choose to compare them, it opens the Test Comparison dashboard with a bunch of tabs.  All of these are blank for me regardless of the test, test measure, or data points I choose.  What am I missing?

Thanks in advance for any help or suggestions you can provide.

Chris


1 Reply

Answer by Andreas G. · Feb 27, 2014 at 04:43 PM

Hi. Can you look at the Agent column in the table that shows all the metrics? The column might not be visible by default, but you can enable it. Does it show different names? If so, is it possible you're running the same test from multiple test machines?


Chris C. · Feb 27, 2014 at 04:47 PM

I'm running the tests from the same test machine each time.  I added the Agent column, and what I see is that the agent is different for each run:  Web[8], Web[7], Web[5], Web[6], Web[2].

Hmmm.  So where is that index coming from, and why is it changing?

Andreas G. ♦ Chris C. · Feb 27, 2014 at 04:54 PM

This is an automatic index we apply when an agent with the same name is already connected. This default behavior allows us to see each individual agent.

In your case that means your browsers probably don't close properly after the test. Can that be the case? Do you still see processes lingering around?
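If lingering browser processes are indeed the cause, a cleanup step in the test teardown usually avoids the extra indexes, so each run reconnects as the same agent. A minimal sketch (psutil and the process names are assumptions; adapt them to your test rig):

```python
# Teardown sketch: kill browser processes that survived the test run so the
# next run doesn't connect as an additional Web[n] agent.
import psutil

BROWSER_NAMES = {"chrome.exe", "firefox.exe", "iexplore.exe"}  # hypothetical

def kill_lingering_browsers():
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() in BROWSER_NAMES:
            try:
                proc.terminate()          # ask the process to exit first
                proc.wait(timeout=5)
            except psutil.TimeoutExpired:
                proc.kill()               # force-kill if terminate is ignored
            except psutil.NoSuchProcess:
                pass                      # already gone
```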
