I'm experiencing performance issues when using the Dynatrace REST web service.
I did a REST web call on a PurePath dashboard (data from disk), with a time selection of 1 minute.
With the Dynatrace client it takes 30 seconds to get the data; with REST it takes minutes, ending in a timeout.
Result: the Dynatrace server CPU jumps, impacting our Dynatrace client users (connection loss).
Dynatrace has a lot of self-monitoring measures, however I don't see REST web service measures?
Is this correct?
It seems that the REST web service is running on the Dynatrace server. Is it possible
to place it on a separate server?
Is there some best practice documentation for the REST web service?
Answer by Steven L. · Apr 27, 2016 at 09:07 PM
Extra info obtained from Andreas: the 'Dynatrace-Real-Time-Data-Feed-Listener' project below shows how we can send data to an external system: https://github.com/Dynatrace/Dynatrace-Real-Time-Data-Feed-Listener
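The linked repo is a Java implementation; as a rough illustration, a minimal Python sketch of such a listener could look like the following. The port and the payload handling are placeholders of my own, not the feed's actual contract (the linked project handles the real payload formats):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize(body: bytes) -> str:
    """One-line summary of a received payload (stand-in for real processing)."""
    return f"received {len(body)} bytes"

class FeedHandler(BaseHTTPRequestHandler):
    """Accepts the feed's HTTP POSTs and acknowledges them."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(summarize(body))  # forward to Flume / SQL Server here instead
        self.send_response(200)
        self.end_headers()

def run(port: int = 8080) -> None:
    """Start the listener; the port is an arbitrary choice for this sketch."""
    HTTPServer(("0.0.0.0", port), FeedHandler).serve_forever()
```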
Answer by Andreas G. · Dec 28, 2015 at 11:16 AM
Can you tell me which exact REST service you chose to query that data? There are two REST interfaces for querying the data of a dashboard. One creates a report of a dashboard (more historic, as it also allows PDF, XLS, ... generation); the other one should just deliver the raw data. It is called XML Reporting, and it is the preferred REST interface for querying dashboard data.
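For illustration, the two interfaces differ roughly as below. This is a hedged sketch: the host name is hypothetical and the endpoint paths are from memory, so verify them against the AppMon REST documentation before use:

```python
from urllib.parse import quote, urlencode

SERVER = "dtserver.example.com:8021"  # hypothetical Dynatrace server

def report_create_url(dashboard: str) -> str:
    """Report-creation interface: heavier, also used for PDF/XLS generation."""
    qs = urlencode({"dashboardname": dashboard, "type": "XML"})
    return f"https://{SERVER}/rest/management/reports/create?{qs}"

def xml_reporting_url(dashboard: str) -> str:
    """XML Reporting interface: delivers the dashboard's raw data."""
    return f"https://{SERVER}/rest/management/dashboard/{quote(dashboard)}"
```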
Answer by Martin E. · Jan 04, 2016 at 12:17 PM
Steven, the approach which you describe should be feasible. It is, in fact, similar to what @Andreas Grabner and I proposed to some other customers in the past. Still wondering why you want to use Hadoop for it. May I assume that you mean Apache Flume? And that you want to stream data through Flume into a SQL server sink?
Finally, you would use some worker script that polls PurePath data from the dashlet stored on the Dynatrace server and transforms the data according to your needs. However, to my knowledge, the XML Reporting feature together with the PurePath Identifiers for XML Reports currently does not support batch operations. Hence, you would need to fetch each single PurePath individually.
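The per-PurePath polling described above could be sketched as follows. The host is hypothetical and the `pp:` filter syntax is my assumption; check the XML Reporting / PurePath Identifiers documentation for the exact parameter:

```python
from urllib.parse import quote
from urllib.request import urlopen

# Hypothetical server; replace with your Dynatrace server's base URL.
BASE = "https://dtserver.example.com:8021/rest/management/dashboard"

def purepath_url(dashboard: str, pp_id: str) -> str:
    """URL for one PurePath; the 'pp:' filter syntax is an assumption."""
    return f"{BASE}/{quote(dashboard)}?filter=pp:{quote(pp_id)}"

def fetch_individually(dashboard: str, pp_ids: list[str]) -> list[bytes]:
    """No batch operation is available, so issue one request per PurePath."""
    results = []
    for pp_id in pp_ids:
        with urlopen(purepath_url(dashboard, pp_id)) as resp:
            results.append(resp.read())
    return results
```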
Additionally, you may want to increase the number of PurePath nodes being reported on the PurePaths dashlet (via the dashlet's properties).
Answer by Martin E. · Jan 04, 2016 at 01:01 PM
By the way - you may want to give a NoSQL solution a try (unless SQL is a hard requirement). Elasticsearch and ArangoDB would be options here, and you could ingest the JSON data that comes out of Apache Flume.
Answer by Enrico F. · Feb 27 at 01:42 PM
More or less same problems here....
We're experiencing very slow response times (sometimes on the order of minutes) with 6.5 when accessing individual alerts/incidents using the REST service, i.e.
I'm wondering if perhaps there is a missing index in the PWH DB schema? We are able to reproduce significant spikes for the measure "Performance Warehouse - Reading Incidents" each time the above service is queried for a *single* alert ID...
AFAIK this was not the case with 6.3.
FYI: Deleting old incidents did not have any impact.
Answer by Steven L. · Dec 30, 2015 at 04:51 PM
I use this one:
with the parameters:
?purePathDetails=ALL&filter=tf:CustomTimeframe?x:y (one minute)
I'm interested in our sensor data (web request XML), which is inside the massive amount of PurePath nodes.
Get web request XML (sensor data) from Dynatrace production / fieldtest and replay it in a load test environment (monitored with Dynatrace).
ChrisG. told us that it is better to let Dynatrace push data to an external component
(for example Apache Flume).
I would like to set up Hadoop Flume on top of a SQL server. Then Dynatrace can push PurePath IDs to the
SQL server. The collected PurePath IDs can be used to fetch PurePaths via REST (for example 100 PurePaths per web call) instead of fetching all PurePaths of one minute at once. Hopefully REST can handle this; it should be more scalable.
Is this a decent approach? Other ideas?
Is the 'Big Data Business Transaction Bridge' plugin the best way to start?
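The batching idea described above could be sketched as follows. This is a sketch only: `fetch_batch` is a placeholder, since whether the REST interface accepts multiple PurePath IDs per call is exactly the open question in this thread:

```python
from typing import Iterator

def batches(pp_ids: list[str], size: int = 100) -> Iterator[list[str]]:
    """Yield successive groups of at most `size` PurePath IDs."""
    for i in range(0, len(pp_ids), size):
        yield pp_ids[i:i + size]

def fetch_batch(batch: list[str]) -> None:
    """Placeholder: one REST call covering every ID in `batch`."""
    pass  # whether REST accepts multiple IDs per call is still open

def drain(pp_ids: list[str]) -> int:
    """Process all collected IDs; returns the number of REST calls made."""
    calls = 0
    for batch in batches(pp_ids):
        fetch_batch(batch)
        calls += 1
    return calls
```

With 250 collected IDs this issues 3 calls instead of 250 individual ones, which is the scalability gain Steven is after.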
Answer by Steven L. · Jan 04, 2016 at 12:45 PM
Thanks for the information.
You're right, I don't need Hadoop, only Apache Flume on top of the SQL server.
At the moment we are trying to push the raw sensor data. @Chris Geebelen
- we need a BT split on the sensor value (in our case plain XML)
- store results in the Performance Warehouse: inactive
According to Chris it should be possible to push the split value (XML) to our SQL server; I'm checking the documentation at the moment.
Answer by Steven L. · Jan 12, 2016 at 04:07 PM
I have some extra questions,
I would like to use the Apache Flume sink to save application web request XML data.
I have a BT split on a sensor value (XML data)
--> PWH storage inactive + export results via HTTP active.
--> Enable the Dynatrace server setting 'Real Time Business Transactions Feed'.
I assume that for each PurePath some info will be sent to the Apache Flume sink.
Is sensor data a part of that, and will it end up in the sink?
I downloaded 'bigdatabtbridge_table_creation' from the plugin section.
I checked the 'bigdatabtbridge_table_creation' file; I assume this will be the storage structure for SQL Server / Elasticsearch. Can I find the sensor location in this file?
If the sensor is not present, can we still send it by changing the 'protobuf definition' in the Real Time Business Transactions Feed?