Answer by Roman S.
You don't. It would be too much overhead to instrument such a low-level method as socketRead0.
The best way to diagnose this is to use the Method Hotspots dashlet to find out who is calling this method: just backtrack far enough up the call stack to understand where within the application this network reading comes from.
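To make that "backtrack up the call stack" idea concrete, here is a minimal, self-contained sketch using nothing but the JDK (no Dynatrace involved; example.com is just a stand-in for any slow web service or database). It samples the stack of a thread while it is blocked reading from a socket: the top frames are the low-level read (on classic, pre-NIO socket implementations that is java.net.SocketInputStream.socketRead0), and the frames below them are the application code you actually want to investigate.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SocketReadCallerDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in endpoint; any web service or JDBC driver read would look much the same.
        URL url = new URL("http://example.com/");

        Thread caller = new Thread(() -> {
            try {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (InputStream in = conn.getInputStream()) {
                    in.read(new byte[8192]); // while waiting for bytes, the thread sits in the native socket read
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "ws-caller");
        caller.start();

        Thread.sleep(200); // crude sampling delay; only informative if the caller is still blocked in the read
        for (StackTraceElement frame : caller.getStackTrace()) {
            // Topmost frames: the socket read itself; below them: HTTP client internals;
            // further down: your own application code - that is where the analysis should focus.
            System.out.println("  at " + frame);
        }
        caller.join();
    }
}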
Best, Roman
I am at a loss as to how to find the values of the parameters of socketRead0(java.io.FileDescriptor, byte[], int, int, int). I have attached a session that contains a lot of socketRead0() calls. Would you please show me how I can go about finding those values?
Regards,
Anoh Brou
As mentioned before, those values are not present, and it is not feasible to capture them, as this method is called far too often to be instrumented.
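For reference, this is roughly (paraphrased, not verbatim) how that method is declared in the JDK 8 sources of java.net.SocketInputStream. Even if the parameter values could be captured, they would not tell you much: the byte[] is just an empty buffer the kernel fills in, and the ints are offsets and a timeout.

// Paraphrased sketch of the JDK 8 declaration; declaration only - the implementation is native code in the JVM.
class SocketInputStreamSketch {
    // fd: the socket's file descriptor, b: a reusable buffer to read into,
    // off/len: where in the buffer to put the bytes, timeout: the socket timeout in milliseconds.
    private native int socketRead0(java.io.FileDescriptor fd,
                                   byte[] b, int off, int len,
                                   int timeout) throws java.io.IOException;
}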
What question do you want to answer that you think those values are required for? Why the app is spending so much time on this? The answer to that would be: 72% is due to various web service calls (the top one, at 22% overall, being gov.research.rppr.service.delegate.reports.ReportServiceDelegateWebService.saveReport), 16% is due to Oracle database access, and another 11% is due to database access to Sybase.
Just expand the method hotspot view and you will see those methods clearly showing up in the caller breakdown.
Best, Roman
I'll add a little more, because I was just finishing saving screenshots to show this exact thing when Roman's reply came in. A shame to waste perfectly good bits. :-)
As Roman mentioned, you would not want to instrument such a low-level method, due to overhead. Further, dT is not going to do anything helpful with a byte array anyhow.
The standard way to analyze these auto-sensor-based hotspots is to start from them and walk back up the stack. It's not the case that "socketRead0" is your problem. You need to find out what is causing that to happen, and address that root cause.
Here is a view from the Method Hotspots, starting the analysis from socketRead0. You can see that the HttpParser method readRawLine is causing fully half of the work done by socketRead0:
Drilling down from there, underneath readRawLine, we see that (as Roman mentioned) 22% of the total work is caused by the "saveReport" method:
If desired at that point, you can right-click on the saveReport line (or any of those hotspots) and drill into the exact PurePaths (PPs) that contain those calls. Here's an example. From there, you can figure out why you're doing all the work that you're doing, and perhaps fix it.
Hope this helps,
Rob
Rob,
Thank you for the information. I followed that process to the point where I could no longer attribute the performance issue to anything other than socketRead0() itself. Given what you said, I tried really hard to pinpoint the source of the socketRead0 issue, without success.
That socketRead0 issue is very pervasive in that environment. It is crucial that I find the source of the problem in order to resolve it once and for all.
Regards,
Anoh
Anoh,
I'm confused then. The examples were from your example DTS file. Since you've redacted all context info, I can't tell you what I see related to the execution environment, but does it not make sense to investigate "saveReport", or "saveSelections", etc., to see what in your app is causing those socketReads? Those reads are just being done in response to app-level requests. To me it seems more important to look at arguments (or session-related context) around that, to see what reports specifically are being run, and why they are driving so much data access. At least that's how I'd approach it.
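As a hypothetical sketch of what "looking at the context around it" could mean in code, if the application side can be changed: capture which report is being saved, how large it is, and how long the call took, one level above the socket read. The names Report and ReportServiceClient are stand-ins, not the real classes; only the saveReport name comes from the hotspot view.

import java.util.logging.Logger;

public class SaveReportTimingSketch {
    private static final Logger LOG = Logger.getLogger(SaveReportTimingSketch.class.getName());

    record Report(String id, int payloadBytes) { }

    interface ReportServiceClient {
        void saveReport(Report report) throws Exception; // the real call would end up blocking in socketRead0
    }

    static void saveReportWithContext(ReportServiceClient client, Report report) throws Exception {
        long start = System.nanoTime();
        try {
            client.saveReport(report);
        } finally {
            long ms = (System.nanoTime() - start) / 1_000_000;
            // This is the kind of context (which report, how big, how long) that the
            // raw socket-level byte[] can never give you.
            LOG.info("saveReport id=" + report.id()
                    + " payloadBytes=" + report.payloadBytes() + " took " + ms + " ms");
        }
    }

    public static void main(String[] args) throws Exception {
        // Fake client that simulates a 250 ms round trip to the reporting service.
        ReportServiceClient fake = r -> Thread.sleep(250);
        saveReportWithContext(fake, new Report("RPT-42", 128_000));
    }
}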
Rob
socketRead0() is a low-level method that represents an external socket read, or service call.
Time spent in this method is time spent on other tiers.
Using the procedures mentioned above, you can see which external services account for the most time spent in socketRead0(). Also, because these are all database and web service calls, you can visualize the contribution of external services by looking at the Transaction Flow.
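To illustrate that point, that time in socketRead0() is really time spent waiting on the other tier, here is a small self-contained sketch with plain sockets (no Dynatrace, no real database): the client blocks in its read for almost exactly as long as the fake remote tier takes to produce a response.

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class RemoteTierLatencyDemo {
    public static void main(String[] args) throws Exception {
        // A tiny "remote tier" that deliberately waits two seconds before answering,
        // standing in for a slow database or web service.
        try (ServerSocket server = new ServerSocket(0)) {
            Thread tier = new Thread(() -> {
                try (Socket s = server.accept(); OutputStream out = s.getOutputStream()) {
                    Thread.sleep(2000);                 // simulated work on the other tier
                    out.write("done\n".getBytes());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            tier.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                long start = System.nanoTime();
                client.getInputStream().read(new byte[64]); // blocks in the native socket read the whole time
                long ms = (System.nanoTime() - start) / 1_000_000;
                // Prints roughly 2000 ms: the "socketRead0 time" is really the other tier's time.
                System.out.println("blocked in read for ~" + ms + " ms");
            }
            tier.join();
        }
    }
}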