Using Profiling Data - NodeJS

Now that you have profiling data, the next step is to put it to use.

If your profiling data comes from NodeJS, the processed profile is divided into sections whose headers are wrapped in square brackets [] (such as [Summary], [JavaScript], and [C++]), and those headers are what you will search for.
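If you have not yet produced a processed profile, a minimal sketch of generating one looks like this. The script name app.js and the busy-loop workload are assumptions for illustration; `--prof` and `--prof-process` are built-in NodeJS flags.

```shell
# Remove logs from earlier runs so the glob below matches only one file
rm -f isolate-*-v8.log

# A small script with enough work for the profiler to sample (assumed name)
cat > app.js <<'EOF'
let total = 0;
for (let i = 0; i < 1e7; i++) total += Math.sqrt(i);
console.log(total);
EOF

# --prof writes a raw V8 log; --prof-process turns it into a readable report
node --prof app.js
node --prof-process isolate-*-v8.log > processed.txt

# List the bracketed section headers described above
grep -n "^ \[" processed.txt
```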

First, check the [Summary] section to see which type of code (JavaScript, C++, GC for garbage collection, etc.) is using the highest percentage of processing time.

Second, open the section for the type of code that is using the most processing time and look for functions that seem to be taking more than their share. The summary sections give you a starting point for filtering through your NodeJS profile.

Bottom Up

Once you have some profiling data and some idea of where to look, whether from the NodeJS steps above or from a tool such as the flame chart described in the section on profiling in the browser, the next step is to figure out what is calling those slow functions and whether you can change it.

To find the problem functions, use the Bottom Up chart in the browser dev tools or the [Bottom up (heavy) profile] section of the processed profile file for NodeJS. Start at the top of the chart and find the parent function that seems to be related to a problem.
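As a self-contained sketch, the [Bottom up (heavy) profile] section can be extracted from a processed profile like this. The script name bottomup.js and its workload are assumptions for illustration.

```shell
# Generate a fresh profile, then pull out the bottom-up section
rm -f isolate-*-v8.log
cat > bottomup.js <<'EOF'
let total = 0;
for (let i = 0; i < 1e7; i++) total += Math.sqrt(i);
console.log(total);
EOF
node --prof bottomup.js
node --prof-process isolate-*-v8.log > processed.txt

# Show the section heading and the first few caller rows beneath it
grep -A 6 "Bottom up (heavy) profile" processed.txt | head -12
```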

(Image: a [Bottom up (heavy) profile] section from a processed NodeJS profile)

In the above image from NodeJS, each indented row shows the ticks (or ms), the parent percentage, and the function name.

Ticks are the unit the OS uses internally to measure CPU processing time. They are an absolute measure of how long the CPU was running; NodeJS reports ticks where browser developer tools report milliseconds, but the two are equivalent measures.

The parent percentage is relative: for each indented row, it is the percentage of runs of the current function in which it called the function above it (its parent). The exception is the top-level rows, which show the function's percentage of the application's total calls.

The name is self-explanatory: it is the function's name and its location in the code.

The key is to find what is calling the expensive functions. If the path is reasonably straight, one where the parent percentage is at or near 100%, start at the deepest point that is still close to 100% and see whether you can change that function to reduce its calls to the function above it. Then follow the chain upward as far as you can, looking for further ways to reduce usage of the more expensive functions.
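As a sketch of what "reducing calls to the higher function" can look like in practice, here is a hypothetical hot function (expensiveLookup, a made-up name standing in for whatever the profiler flagged) whose caller is changed to cache results, so the heavy work runs less often without changing the call sites:

```javascript
// expensiveLookup stands in for a function the profiler showed as hot
let computations = 0;

function expensiveLookup(key) {
  computations++; // pretend this loop is real heavy work
  let total = 0;
  for (let i = 0; i < 1e5; i++) total += i % (key + 1);
  return total;
}

// Caching caller: callers ask cachedLookup, which only does the heavy
// work the first time it sees each key
const cache = new Map();
function cachedLookup(key) {
  if (!cache.has(key)) cache.set(key, expensiveLookup(key));
  return cache.get(key);
}

// The same three keys are requested six times, but the heavy work runs
// only three times
for (const key of [1, 2, 3, 1, 2, 3]) cachedLookup(key);
console.log(computations); // 3
```

Whether caching is safe depends on the function being pure for a given key; that judgment is part of the "see if you can make any changes" step above.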

The path could instead be heavily branched. In that case, start as high in the tree as you can, look for internal performance improvements to the expensive function itself, and then work your way down.
