Profiling
How to Profile using gprof
Here is a (slightly edited) description of how to profile from Bryce Harrington:
There are essentially three major steps:
A) Compile the application with profiling support
B) Run the application in a "certain way"
C) Run gprof to generate analysis reports
For Step A, what you need to do is add the -pg option to the compiler flags. On Linux you can do this via the CXXFLAGS variable at configure time; on Windows you need to edit build.xml (note that you'll have to add the -pg flag to the compile target AND the link targets). For example, on Linux, in Bash (note that this assumes everything is getting recompiled; if not, do make clean / btool clean first):
 $ CXXFLAGS='-pg' ./configure
 $ make
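If the profiling flag does not make it to the link step (it is needed both when compiling and when linking), a sketch of passing it explicitly, assuming a standard autoconf setup, would be:

 $ CXXFLAGS='-pg' LDFLAGS='-pg' ./configure
 $ make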
Step B is where some creativity is needed. You'll probably want to do several profiling runs to get the hang of it. Essentially, you want to exercise the part of the program you're most interested in measuring. For instance, if you're measuring the speed of rendering stars, you may want to create a document full of zillions of stars, start Inkscape with that file, and close it as soon as it finishes rendering.
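As a sketch of such a run (the file name stars.svg is just an example): open the star-heavy document and quit once it has rendered, or, if your Inkscape build supports the --export-png command-line option, make the run end by itself with a non-interactive export:

 $ inkscape stars.svg
 $ inkscape stars.svg --export-png=stars.png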
As part of Step B, a special output file is generated (called gmon.out by default). In Step C, you run gprof on the inkscape executable and this file to generate different sorts of reports, such as the flat profile and the call graph. For example, here's how to get the plain default report:
 $ gprof inkscape gmon.out > report
Read the report file for a table of functions sorted by how much time was spent in each function during this run of Inkscape. See the gprof manpage for all the analysis report options.
Bryce
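In addition to the default report above, gprof can, for example, limit the output to just the flat profile (-p) or just the call graph (-q):

 $ gprof -p inkscape gmon.out > flat_profile
 $ gprof -q inkscape gmon.out > call_graph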
Note: It is not necessary to disable optimizations (keeping them enabled might give some odd results now and then, but it also helps to make the timings more realistic).
How to Profile using gcov
You can also use gcov to analyze Inkscape. This tool does not generate any time-related information, but it does generate very fine-grained coverage information. It can tell you exactly how many times each line was executed, which is quite useful for seeing which parts of the code are executed (often) and which are not. This can, for example, give some idea of the coverage of tests.
Basically the process is exactly the same as for profiling, except that you should disable optimizations (remove the -O2 flag) and add '-fprofile-arcs -ftest-coverage' instead of '-pg'.
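A minimal sketch of such a build on Linux (the important part is dropping -O2 and adding the coverage flags, which are also needed when linking; adjust to your own setup):

 $ CXXFLAGS='-fprofile-arcs -ftest-coverage' LDFLAGS='-fprofile-arcs' ./configure
 $ make

After running the instrumented Inkscape, the coverage data ends up next to the object files and can be inspected per source file with gcov, or processed with the coverage.py tool described below.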
Note that there is a tool in SVN (coverage.py) that can help with analyzing the huge amounts of coverage data. For example, the following will create a .ini-like file that contains a section per source file, listing an execution count for each executed line, as well as a section with a total (executable) line count (be sure to fill in the right directories for 'src' and 'build/obj'):
 coverage.py -s src -o build/obj > coveragedata.txt
Using '-w test' you can output coverage values for all unit-tested files. '-f coveragedata.txt' will use the data from coveragedata.txt instead of recomputing everything using gcov (when using -f, the -s and -o switches are unnecessary). '-c othercoveragedata.txt' will remove any execution counts from its gathered coverage data for lines that have an execution count in othercoveragedata.txt. This last switch can be used, for example, to see which lines in unit-tested files are executed while rendering an SVG file but not while executing the unit tests: execute the unit tests, gather coverage data, render, gather coverage data again, and finally compare the two coverage data files using the -c switch, as sketched below.
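A sketch of that workflow, using placeholder file names (testcoverage.txt and rendercoverage.txt): after executing the unit tests, gather the data; then render an SVG file with the instrumented Inkscape and gather the data again; finally compare the two files (optionally adding '-w test' to restrict the output to the unit-tested files):

 coverage.py -s src -o build/obj > testcoverage.txt
 coverage.py -s src -o build/obj > rendercoverage.txt
 coverage.py -f rendercoverage.txt -c testcoverage.txt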