Question by Jim P. · Feb 07, 2015 at 07:01 AM

Create Custom Memory Incident Alert

I am trying to create an incident based on the measure Memory Pool Utilization (Java Virtual Machine) where utilization is greater than 75% but less than 90%. This uses the Java Virtual Machine measure-specific attributes Sun/Oracle and Perm Gen. I created two measures, one with upper severe = 75 and one with lower severe = 90, and put them in a custom incident using avg aggregation and the AND logic operator, but I am getting inconsistent results. Attached are the involved measures from one successfully generated incident; I am not sure why four display when I am using two. I appreciate any feedback. Thank you.

 

Doc1.docx
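For reference, the intended alert condition (fire only when utilization sits between the two thresholds) can be sketched as a simple band check. This is a minimal illustration in plain Python, not Dynatrace configuration; the function name and sample values are made up for the example:

```python
def in_alert_band(utilization_pct, lower=75.0, upper=90.0):
    """Return True when utilization is inside the alert band.

    Mirrors the two-measure setup: one measure violating its
    upper-severe threshold (> 75) AND the other violating its
    lower-severe threshold (< 90).
    """
    return lower < utilization_pct < upper

# Hypothetical Perm Gen utilization samples (percent)
samples = [70.0, 78.5, 88.0, 92.0]
print([in_alert_band(s) for s in samples])  # [False, True, True, False]
```

The key point is that both conditions must hold at the same evaluation instant for the AND to fire.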


4 Replies


Answer by Jim P. · Feb 11, 2015 at 12:02 AM

Thank you, Andreas. My understanding is that the averages are based on the chart resolution time. The chart on the left was generated directly from the incident and shows the threshold automatically; it appears to show from 0 to 29%. Please let me know if there is anything additional I should try. I will open a support ticket as you suggest.


Comment by Andreas G. ♦ · Feb 11, 2015 at 12:33 AM

You can see the actual resolution when you move your mouse to the top right of the chart dashlet. A toolbar appears there, and the second option is the resolution. Click it and you will see which aggregation is currently used.


Answer by Andreas G. · Feb 10, 2015 at 08:27 AM

The average you see in the table view of the chart is the average over the dashboard's timeframe, e.g. an average of 30 minutes. I am not saying this might not be a bug on our side, but I wanted to clarify that the value you see in the table at the bottom reflects a different timeframe than what you have configured in your incident.

So, if the memory is fluctuating a lot, it is possible that the 1-minute average is not within 18 and 29 but the 30-minute average is. Looking at the screenshot, though, these numbers seem pretty consistent and not fluctuating that much. If that is the case, you may want to open a support ticket to have an engineer look at this.

Andi
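The effect Andreas describes, where a value set violates a band at one aggregation window but not another, can be illustrated with a small sketch. This is plain Python, not Dynatrace behavior; the band (18–29) echoes the thresholds from the test environment, and the sample values are invented:

```python
def window_averages(samples, window):
    """Average consecutive samples in fixed-size, non-overlapping windows."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

def violations(samples, window, lower=18.0, upper=29.0):
    """Count windows whose average falls inside the alert band."""
    return sum(lower < avg < upper for avg in window_averages(samples, window))

# Hypothetical fluctuating metric, one sample per minute
per_minute = [16, 30, 17, 31, 16, 30]
print(violations(per_minute, window=1))  # 0: no single sample is in (18, 29)
print(violations(per_minute, window=2))  # 3: every 2-minute average is 23-24
```

With a fluctuating signal, no raw sample may fall inside the band even though every longer-window average does, which is why the evaluation timeframe of the incident and the chart resolution must match before comparing them.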

Answer by Jim P. · Feb 10, 2015 at 03:13 AM


Thanks, Andreas.

I have tried a number of evaluation timeframes; the latest, from last week, was 1 minute. Below left is the chart generated from the incident itself using the custom measures, and on the right is the OOB measure with no threshold set. I received 2 incidents but expected 5. In my test environment, I configured anything greater than 18 (the >75% measure) and less than 29 (the <90% measure) to return values as seen in testing (see the measures image). I received the yellow items back in an alert but expected the ones in green as well. Your help is much appreciated.

 

 

Answer by Andreas G. · Feb 08, 2015 at 12:49 AM

What's the evaluation timeframe? Is it 10s? Try increasing it to the next possible timeframe (I think it's 1 minute) and try again.

Also, try charting these metrics to see which values are really coming back.

