(A question from @Shakti S.)
Currently we access the application from the Stryker site, which has an MPLS setup. Requests from the Agent machine talk to the local MPLS, and only that much is counted as network time, but in reality the traffic travels over the WAN to the US, where the server resides. If we can enable ALD, we should be able to get the complete network time from the client to the server location; as of now, only the network time from the user PC to the local MPLS is considered. With DNA we have to merge server and client traces to analyze the optimized WAN, but with synthetic monitoring I guess we have to enable ALD, unless there is another approach.
I hope I have explained the concern. ALD under the Node segment shows "unknown" in the first row for the server the application is communicating with.
Answer by Gary K. ·
For Synthetic Monitoring, ALD is used to estimate the network’s contribution to response time. However, if you have WAN optimization active on a given link, ALD will not work properly. Here’s a quick explanation why:
In an optimized WAN environment, there are two ALD scenarios, depending on how the WOC (WAN optimization controller) is configured.
1. ALD traffic (on port 2408) bypasses the WOC. In this case, the ALD measurement requests will traverse the WAN and measure the client-to-server latency.
2. If the WOC is configured to optimize traffic on port 2408, the WOC may respond to the ALD SYN request locally (at the remote site), measuring the latency between the client and WOC, which of course would be artificially low.
In case 1, let’s say the network latency is measured accurately at 100 milliseconds. DNA will attempt to apply this value as the network round-trip delay. But some client requests are serviced by the local WOC (via object cache or pre-population, for example). You might end up with a case where a client request completes in 10 milliseconds, yet DNA will still try to allocate 100 milliseconds to network latency. (You could even end up with a trace in which the response appears to arrive at the client before the request was sent.)
In case 2, the latency will be quite low; so low that DNA will not attempt to adjust the resulting trace. In this case, the results would be the same as having no ALD information: network and server delays are combined.
So the short answer is that, with WAN optimization, a remote agent’s CNS analysis can only separate client delay from network/server delay; it cannot distinguish between network and server delays.
In my experience, many – most, probably – of our customers take the approach of using Synthetic Monitoring’s EUE measurement as a simple response time metric, generally ignoring the CNS breakdown. If there is a problem, then manual troubleshooting procedures are followed.
Some customers, however, take a different approach that adds better insight to the source of delays.
Let’s first reiterate the CNS limitations in an optimized WAN environment, which arise because the local TCP sessions are terminated at the WOCs:
1. At a remote site, an agent sees the optimized WAN delay and the server delay as one combined metric; it can only break out client delay as a separate metric.
2. At the data center, an agent sees the optimized WAN delay and the client delay as one combined metric; it can only break out server delay as a separate metric.
To improve fault domain isolation, consider this approach:
1. Locate an agent in the data center. The response time measured is an accurate indicator of application performance, but not of network performance. Of course network performance to different remote sites is important to you, so this is not a complete solution. (It can be a starting point for synthetic monitoring, with good ability to “defend the app” in the case of performance complaints.)
2. Combine measurements from remote synthetic monitoring agents with the data center agent results. When a performance problem occurs at a remote site, compare the remote agent CNS with the data center agent CNS; you can manually approximate network delay from this.
Here’s a simple example:
| Remote agent | Data center agent |
| --- | --- |
To approximate “path delay” – WAN and WOCs – subtract data center agent server time (accurate measurement of server delay) from remote agent server time (which combines network and server delay). In this example, 6.1 seconds.
There’s one additional important consideration. Ideally, both remote and data center scripts should run at the same time, but there’s no way to guarantee such synchronization. For problems that are quite consistent, this isn’t much of an issue. For intermittent problems, however, you risk having one measurement catch the problem while the other captures only normal behavior. Therefore, make sure the total transaction times are similar before applying this approach.
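If it helps to automate the comparison, here is a minimal sketch of that manual calculation. The field names, the 20% similarity threshold, and the sample numbers are my own assumptions for illustration; only the subtraction itself comes from the approach described above.

```python
# Minimal sketch (not product code): approximate WAN/WOC "path delay" by
# subtracting the data center agent's server time from the remote agent's
# server time, after checking that the two runs are roughly comparable.

def approximate_path_delay(remote, datacenter, max_total_diff_pct=20.0):
    """remote / datacenter: dicts with 'server_time' and 'total_time' in seconds.
    The 20% similarity threshold is an arbitrary assumption."""
    # Only compare runs whose overall transaction times are similar; otherwise
    # one script may have caught a problem the other did not see.
    diff_pct = abs(remote["total_time"] - datacenter["total_time"]) \
               / datacenter["total_time"] * 100.0
    if diff_pct > max_total_diff_pct:
        raise ValueError("Transaction times differ too much to compare these runs")

    # Remote "server time" includes WAN + server delay; the data center
    # "server time" is (close to) pure server delay.
    return remote["server_time"] - datacenter["server_time"]


# Made-up numbers, chosen only to reproduce the 6.1-second result mentioned above.
remote_run = {"server_time": 8.4, "total_time": 10.0}
dc_run = {"server_time": 2.3, "total_time": 9.5}
print(f"Approximate path delay: {approximate_path_delay(remote_run, dc_run):.1f} s")
```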
Answer by Benjamin W. ·
For ALD to work, it must be possible for the Agent to send a TCP SYN packet (connection request) to the server on port 2408 (configurable), and the server must respond with a TCP RST (because that port is usually unused).
If there is a network component between the Agent and the server that prevents this, ALD may not work or may report incorrect times.
You can find more information here.
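To illustrate the mechanism (not the agent’s actual implementation), here is a rough Python sketch of the idea: attempt a TCP connection to an unused port and time how long the refusal (RST) takes to come back. The port number is the default mentioned above; the timeout, function name, and sample IP are placeholders of my own.

```python
# Rough illustration of the ALD idea: time how long a TCP SYN to an unused
# port takes to come back refused (RST). This is only a sketch of the
# mechanism described above, not the agent's actual implementation.
import socket
import time

ALD_PORT = 2408      # default ALD probe port (configurable in the product)

def probe_ald_latency(server_ip, port=ALD_PORT, timeout=3.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    start = time.perf_counter()
    try:
        s.connect((server_ip, port))
        return None, "port unexpectedly open"
    except ConnectionRefusedError:
        # The RST came back: elapsed time approximates the round-trip latency.
        return (time.perf_counter() - start) * 1000.0, "refused (RST received)"
    except socket.timeout:
        # No SYN/ACK and no RST: a firewall or WOC is probably dropping the probe.
        return None, "no response (filtered?)"
    finally:
        s.close()

rtt_ms, status = probe_ald_latency("198.51.100.10")   # placeholder server IP
print(f"ALD-style probe: {status}, rtt = {rtt_ms} ms")
```

Note that if a WOC intercepts port 2408 (case 2 in Gary’s answer), a measurement like this would return the artificially low client-to-WOC latency rather than the true client-to-server latency.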
Answer by Tomasz S. ·
I think I saw a setting related to ALD in the Registry at the following location:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Compuware\ClientVantage\CurrentVersion\Default Settings\Application Vantage, value name “EnableALD”. If you set it to 1, it should work. After changing it, you have to restart all agent services. I don’t know if there is a way to set it via the GUI.
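If you prefer to script the change, a minimal sketch using Python’s winreg module might look like the following. I’m assuming the key already exists, that the value is a REG_DWORD, and that the script runs with administrator rights; please verify against your installation.

```python
# Minimal sketch: set the EnableALD registry value described above.
# Assumptions: the key already exists, the value type is REG_DWORD, and the
# script runs elevated on the agent machine.
import winreg

KEY_PATH = (r"SOFTWARE\Wow6432Node\Compuware\ClientVantage"
            r"\CurrentVersion\Default Settings\Application Vantage")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "EnableALD", 0, winreg.REG_DWORD, 1)

print("EnableALD set to 1 - remember to restart the agent services.")
```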
Hope this helps.