Miscellaneous Log Analytics questions

jeff_rowell
Helper

I am trying to locate information to better understand the impact of selecting files for “monitoring” with premium Log Analytics (i.e., selection of files to be centrally stored). Some specific questions are:


  1. I found some online documentation that reviews the settings available in a Log Analytics configuration file (ruxitagentloganalytics.conf). The file contains a MainLoopInterval parameter that controls the frequency at which logs are processed… 60 seconds by default. I presume that the only way to modify this is by updating it directly on the server (i.e., not via the DT interface). Is that correct? Also, do customers find that they need to change this parameter when selecting larger numbers or sizes of files for central storage? (See the sketch after this list.)
  2. Impact on the network. I presume that data sent from the OneAgent is not compressed and that any compression would occur only when the traffic hits an Active Gate. Is that correct?
  3. Should we be concerned about performance or infrastructure impacts to our ActiveGates when selecting files for centralized storage? I.e., is there any significant overhead related to the CPU resources required to compress the additional data? Disk space impacts? Memory impacts?
  4. Does the switch from event-based log file alerting to custom metrics based on log file scans have any impact on “standard” tier licensing? I.e., are we still only able to configure 5 alerts in the “standard” tier, or can users make use of custom metrics to create more than 5 alerts related to log file contents?
  5. I presume that 1 custom metric is consumed when configuring a metric based on log file contents. Is that the case, or are there any dimensional attributes that result in more than 1 custom metric being consumed?
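For question 1, here is a minimal sketch of what a direct server-side edit might look like. The file path and the exact line format are assumptions for illustration; inspect the real ruxitagentloganalytics.conf on your OneAgent host before relying on this.

```python
# Hypothetical path and line format; verify against the actual file first.
from pathlib import Path

conf = Path("/opt/dynatrace/oneagent/agent/conf/ruxitagentloganalytics.conf")
lines = conf.read_text().splitlines()

for i, line in enumerate(lines):
    if line.strip().startswith("MainLoopInterval"):
        lines[i] = "MainLoopInterval 120"  # e.g., raise from the 60 s default
        break

conf.write_text("\n".join(lines) + "\n")
```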

4 REPLIES

marcin_okraszew
Dynatrace Helper

1. When you reduce the value, there will be more writes to the storage disks (higher IOPS) and a worse compression ratio, but also lower data latency. However, if the server doesn't keep up with writing, it will automatically increase the interval to reach an IOPS rate that the storage can handle.
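To make that trade-off concrete, here is a minimal sketch of such a self-throttling loop; the names, thresholds, and timings are hypothetical, not Dynatrace internals.

```python
# Flush the log queue on an interval; stretch the interval when storage falls behind.
import time

flush_interval = 1.0  # seconds for the demo; the real MainLoopInterval defaults to 60

def flush_queue() -> float:
    """Placeholder for writing buffered log entries; returns the write duration."""
    start = time.monotonic()
    time.sleep(0.01)  # pretend the storage write took this long
    return time.monotonic() - start

for _ in range(3):  # a real agent would loop indefinitely
    write_duration = flush_queue()
    if write_duration > flush_interval:
        # Storage fell behind: back off to an IOPS rate it can sustain.
        flush_interval = min(flush_interval * 2, 600.0)
    time.sleep(max(0.0, flush_interval - write_duration))
```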

2. Due to the volume of logs, the log agent compresses them before sending. It uses Zstandard compression, which offers a very good compression ratio relative to the compute power needed. On average the result is about 10% of the original size, with roughly 25% as a realistic worst case.
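If you want to estimate the ratio on your own data, a quick check with the zstandard Python package looks like this (an illustration only, not the agent's implementation):

```python
import zstandard as zstd

# Substitute the contents of any log file you plan to monitor.
sample = ("2024-05-01T12:00:00Z INFO  app - request handled in 12ms\n" * 10_000).encode()

compressed = zstd.ZstdCompressor(level=3).compress(sample)
print(f"compressed to {100 * len(compressed) / len(sample):.2f}% of original size")
```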

3. The log writing queue can take up to 1% of the available memory on a server node. CPU is typically not much affected. For metrics, a single node can process about 2 million log entries per second.
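As rough arithmetic on those figures (the node size here is a hypothetical example):

```python
# Queue ceiling: up to 1% of node memory; throughput: ~2 million entries/s.
node_ram_gb = 64                      # hypothetical node size
queue_mb = node_ram_gb * 1024 * 0.01  # ~655 MB queue ceiling on a 64 GB node
entries_per_sec = 2_000_000
print(f"queue ceiling ≈ {queue_mb:.0f} MB; "
      f"1e9 log entries ≈ {1e9 / entries_per_sec:.0f} s of metric processing")
```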

4. Each host comes with a number of included custom metrics, and log metrics can draw from that quota.

5. Each source log file results in a single custom metric being consumed.
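Putting answers 4 and 5 together, a back-of-the-envelope quota check might look like this; the per-host allowance below is a placeholder, not a licensed figure, so check your own license terms.

```python
hosts = 50
included_metrics_per_host = 10   # hypothetical; confirm against your license
monitored_log_files = 120        # each one consumes a single custom metric

quota = hosts * included_metrics_per_host
print(f"{monitored_log_files} log metrics vs quota of {quota}: "
      f"{'within' if monitored_log_files <= quota else 'over'} the included quota")
```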


Thank you for the answers. There are a few things that I am not clear on from your response:

For question 3 I was asking about the ActiveGates, but I am not clear whether your answer relates to the ActiveGate or to the cluster node... you mention "server node", so I suspect your comments relate to the cluster node and not the AG.

For question 4 you appear to be indicating that we would no longer have any limit on the number of alerts that we can configure for log file content. I.e., with the "standard" tier we could implement as many log file alerts as we want, as long as we stay within our licensed volume of custom metrics - is that the case?


Hi Jeff,

I can't help regarding the ActiveGate. Yes, Marcin was referring to the Cluster.

Regarding question 4: you are correct. The limit depends on the number of custom metrics.

Peter


Yes, that was about the server. When it comes to the AG, assume one physical core (not hyper-threaded) is needed per 50 Mbps of traffic (measured on the compressed content). When it comes to memory, it is mostly a question of how long a disruption of connectivity you need to survive. If too much memory is occupied by messages, they will be discarded, but the log agent will resend the content in that case.
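Turning those rules of thumb into a quick sizing calculation (all inputs here are hypothetical examples, not recommendations):

```python
# One physical core per 50 Mbps of compressed traffic, plus enough memory to
# buffer messages through the longest connectivity outage you want to survive.
import math

compressed_mbps = 180          # expected compressed log throughput
outage_seconds = 10 * 60       # survive a 10-minute connectivity disruption

cores = math.ceil(compressed_mbps / 50)
buffer_mb = compressed_mbps / 8 * outage_seconds  # Mbps -> MB/s, times seconds
print(f"ActiveGate sizing: ~{cores} physical cores, "
      f"~{buffer_mb / 1024:.1f} GB buffer for a {outage_seconds // 60}-minute outage")
```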

