K8s Prometheus metrics generating a crazy amount of data points

clc
Guide

We have set up an exporter to scrape metrics into Dynatrace.

One of the metrics, "hubble_drop_total.count", is growing at a crazy rate because some of its dimensions produce a lot of data points.

Prometheus exposes these counters as cumulative totals since the pod started, rather than only the metric data for the last minute or so. Dynatrace then grabs the literal value present in the Prometheus exporter and ingests it.

The result is that we went from 60 data points per hour to 904,000 data points per hour within a week.
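For context, that growth is mostly a cardinality effect: each unique combination of dimension values becomes its own time series, and each series is scraped once per interval. A rough back-of-the-envelope sketch in Python (the one-scrape-per-minute interval is an assumption, adjust to your setup):

```python
# Data points per hour = number of active series x scrapes per hour.
# Assumes a 60-second scrape interval, i.e. 60 scrapes per hour.
SCRAPES_PER_HOUR = 60

def datapoints_per_hour(active_series: int) -> int:
    return active_series * SCRAPES_PER_HOUR

# 60 data points/hour corresponds to a single series...
print(datapoints_per_hour(1))                  # 60
# ...while ~904,000/hour implies roughly 15,000 live dimension combinations.
print(round(904_000 / SCRAPES_PER_HOUR))       # 15067
```

So the cost driver is not the scrape frequency but the number of distinct label combinations the metric emits.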

We really need this metric, but we can't see what else to do besides cutting down the dimensions so much that the metric loses its value.
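One middle ground is to aggregate away only the highest-cardinality dimension(s) before ingest while keeping the rest, since counter values can be summed over a dropped label. A minimal sketch of the idea in plain Python (the label names here are hypothetical, not the actual Hubble labels):

```python
from collections import defaultdict

# Hypothetical raw series: (drop reason, source pod) -> counter value.
# "source pod" stands in for the assumed high-cardinality dimension.
raw = {
    ("policy_denied", "pod-a-1234"): 10,
    ("policy_denied", "pod-a-5678"): 7,
    ("unsupported_l3", "pod-b-9999"): 3,
}

def drop_dimension(series: dict, keep_index: int) -> dict:
    """Sum counter values over the dropped dimension, keeping one label."""
    out = defaultdict(int)
    for labels, value in series.items():
        out[labels[keep_index]] += value
    return dict(out)

# Keep only the drop reason; the per-pod dimension is summed away.
print(drop_dimension(raw, 0))  # {'policy_denied': 17, 'unsupported_l3': 3}
```

Three series collapse into two here; at ~15,000 real combinations the reduction would be far larger, while the per-reason totals stay exact.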

And since we are running on the DPS licensing model, this ramps up the cost quickly.

Has anyone else encountered this issue and found a good solution, so we can keep the data in Dynatrace instead of ingesting it into Grafana?
