Testing Inkscape
Inkscape is a young project and the emphasis is still on adding features. Nonetheless it is gratifying that the stability of Inkscape has been steadily rising with each release.
The most important part of 'Testing' is simply to use Inkscape for normal work -- confirming that Inkscape has reached this level of maturity, exercising the new features and verifying that the application works as expected.
Report a bug if you find anything that does not behave as it should. A bug report should include at least a step-by-step description of how to trigger the bug and/or a test file that demonstrates the bug (the smaller/more focussed the test file the better).
If you are one of the brave ones who want to help test the very latest experimental development version, your help is very welcome. Information on where to download this version can be found on the Installing Inkscape page.
Users
The field is wide open. We are keen to receive bug reports and feature requests (in the form of a bug report). These often require analysis, clarification and further action. Anyone can do this.
Better still is to provide patches for any part of the application that is not up to the standard you expect - such patches are confirmation that the project is evolving. Note that serious testing should be done with an 'unstable' build, either one that you made yourself (see CompilingInkscape) or a snapshot that you have downloaded. We would also like to hear about areas in which we do not have parity with comparable applications. If you find that you are coming up with interesting ideas concerning shortcomings in Inkscape, or plans for its future, get involved with the Inkscape testers group.
We need people to create and update documentation, online help, tutorials and screen shots. Noting defects in these is a perfectly valid form of testing - we do not want releases to go out with obsolete documentation.
Inkscape Testers
A community of Inkscape testers has grown up around its own mailing list, and the hope is that this group will spearhead all work on usability and human factors. This group should be your first port of call for these areas:
- ComplianceTesting
- Inkscape Regression Test
- InteroperabilityTesting
- UsabilityTesting
- PerformanceTesting
- HIG compliance
See also TestingFramework. Note: Bryce? Jon? Shouldn't the whole of that page be merged here? Or is it better to have this info in two pieces? IMHO wiki pages should not be made too long.
Rendering tests
In addition to the unit tests described below, Inkscape has rendering tests that do not necessarily need a developer to create, run or analyze. The actual tests can be found in SVN (see below); the rest of this section explains how to run and create these tests yourself.
See the list at http://auriga.mine.nu/inkscape/ for up-to-date results.
Running rendering tests
Apart from running low-level unit tests, Inkscape can also be tested on a higher level (also see SVG Test Suite Compliance). Currently (2008-07-26) there is a rendering test tool (along with a few test cases) in SVN (https://inkscape.svn.sourceforge.net/svnroot/inkscape/gsoc-testsuite/tester/) which can be used to partially automate rendering tests.
To run the rendering tests:
- If needed, compile tester.cpp using 'g++ -o tester tester.cpp' (there is a precompiled .exe in SVN for Windows users).
- Execute runtests.py. If needed you can specify Inkscape's path and a few other things (execute 'runtests.py --help' to see which options are available).
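As a rough illustration of these two steps on a Unix-like system (a sketch; it assumes you are working in the tester directory checked out from SVN):

 g++ -o tester tester.cpp   # build the tester tool (Windows users can use the precompiled .exe)
 runtests.py --help         # list the available options
 runtests.py                # run the full set of rendering tests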
Note that by default only a binary comparison between the output and reference files is used; perceptualdiff (or any other comparison tool that returns zero on success and 1 on failure) can be used to aid comparison of images (see the available options of runtests.py). Be aware that perceptualdiff (1.0.2) had some problems with transparency; these might be solved by now, and if not, there is a patch in its patch tracker.
To select a subset of tests to perform, specify one or more patterns (with Unix-style wildcards) on the command line. Each pattern is interpreted as specifying a prefix. For example, 'runtests.py bugs' will match any tests whose path relative to the directory with test cases starts with 'bugs' (for example: 'bugsy.svg' or 'bugs/bugXYZ.svg').
The most common test results are:
- Pass (the output file was matched to a pass reference)
- Fail (the output file was matched to a fail reference)
- New (the output file was not matched to any reference)
- No references (there were no references at all)
runtests.py puts the output files in a subdirectory 'output' (at the same level as the 'testcases' and 'references' directories).
Creating rendering tests
Just put an SVG file in the 'testcases' directory (subdirectories can be used for organizing the tests).
To add a pass/fail reference, just put it in the corresponding location under references/pass or references/fail. References are matched by prefix, so any reference whose file name starts with the test case's name (without its extension) is treated as a reference for that test case.
Fail references are used to distinguish between a result that is known wrong and a result that is just (perhaps only slightly) different from the correct rendering. If you are unable to create a pass reference you can even give just a fail reference.
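As a sketch of the resulting layout (the file names and the .png extension are illustrative assumptions, not taken from the actual test suite):

 testcases/basic/foo.svg               <- the test case
 references/pass/basic/foo.png         <- a pass reference, matched because its name starts with 'foo'
 references/fail/basic/foo-oldbug.png  <- a fail reference for a known-wrong rendering, also matched by the 'foo' prefix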
It is also possible to create an SVG file that should produce the exact same output as a test case but uses simpler (or just different) methods. This practice is suggested in the SVG Conformance Test Plan. For example, if the test case file is called 'testcases/basic/foo.svg' you could create a "patch" file called 'testcases/basic/foo-patch.svg'. runtests.py would then use Inkscape to create a pass reference file from that (as 'references/pass/basic/foo-patch.svg') and use it as one of the references. (Note that this reference should in general not be committed to SVN.)
Developers
Build report
There is an 'inkscape build report' which is sent regularly to the inkscape-tester list (and periodically to the developer list, when new problems are seen); it gives a count of warnings spotted in the code.
- Smoketests
- Defects in the build system
Running unit tests
There are now some unit tests which should be performed before checking in. These may take some time to complete, and so this cannot be made a requirement for each build (as in Test Driven Development); nonetheless, everyone is on their honour not to 'break the build' by committing code that does not pass these tests. You can execute them as follows:
- Linux: just run 'make check'; it will build and run the tests. This should also work on Mac OS X.
- Windows: use 'buildtool check' (where buildtool is built using 'g++ -O3 -o buildtool buildtool.cpp') to build and run the unit tests. Alternatively you can use the 'dist-all-check' target to build everything AND run the unit tests.
The cxxtests will generate two (more or less equivalent) result files: an XML file and a text file with the extension '.log'. On Linux, those files are located in (buildpath)/src.
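A minimal sketch of such a run on Linux (assuming the commands are issued from the build directory; the exact log file names may differ):

 make check    # builds and runs the unit tests
 ls src/*.log  # the text result logs end up under (buildpath)/src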
Creating unit tests
Inkscape uses the CxxTest framework. To enhance, modify or extend existing unit tests, just edit the existing test file (....-test.h).
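For orientation, a minimal CxxTest suite looks roughly like the sketch below; the class and method names are made up for illustration and do not correspond to an actual Inkscape test.

 #include <cxxtest/TestSuite.h>

 // Hypothetical example suite; real Inkscape tests live in files named like foo-test.h.
 class ExampleTest : public CxxTest::TestSuite
 {
 public:
     // Every method whose name starts with 'test' is picked up as a test case.
     void testBasicArithmetic()
     {
         TS_ASSERT_EQUALS(2 + 2, 4);
     }

     // The TSM_ variants take an extra message, which helps when checking many cases in a loop.
     void testManyCases()
     {
         for (int i = -2; i <= 2; ++i) {
             TSM_ASSERT("a square should never be negative", i * i >= 0);
         }
     }
 };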
The easiest way to create a new test in a directory which already has some unit tests is simply to copy one of the existing test files, strip it (remove anything specific and rename the class, constructors, etc.) and add some test methods. Take the time to look at the different ASSERT statements CxxTest supports; the TSM_ variants can be especially useful, for example when you want to test a lot of different cases. Important: to make everything build correctly you have to do the following:
- Add the file to the right (already existing) group in the cxxtest target in build.xml
- Append the file to the CXXTEST_TESTSUITES variable in dir/Makefile_insert. Watch the backslashes at the end of the lines. Note that you have to prefix "$(srcdir)/dir/" to your filename, since it is a normal variable not handled by Automake, and you have to use += rather than =, because you're appending to this variable rather than defining it.
 # Do like this:
 CXXTEST_TESTSUITES += \
     $(srcdir)/dir/first-test.h \
     $(srcdir)/dir/second-test.h
For creating a unit test in a directory which does not have any unit tests yet:
- Update the Windows build system:
  - Add a cxxtestpart group to the cxxtest target in build.xml (just copy and modify an existing one).
  - Add the corresponding .o file to the exclude list of the lib target in build.xml and to the include list of the linkcxxtests target.
- For Unix, no changes other than adding the file to CXXTEST_TESTSUITES in Makefile_insert are necessary.
Running tests unattended
For unit tests this is no problem: just set up something that runs the cxxtests, then use one of the log files they create to see how the run went.
To be able to run the rendering tests unattended on Windows you have to compile Inkscape as a commandline executable to prevent any CRT runtime error dialog boxes (or something similar) from popping up. On Linux and other Unices, this problem doesn't exist.
The teststatus.json file that is generated by runtests.py contains all the test results (in JSON format). Note that if you only run a subset of the tests this file retains all the information on tests that do not fall into that subset. It also retains old test results. The result codes in this file can be interpreted as in runtests.py (for example, 0, 1 and 2 stand for pass, fail and new, respectively).
Analyzing test coverage
To see how well the (unit) tests cover certain parts of the code, or to compare the coverage of rendering tests vs. unit tests, gcov can be used. See Profiling for more information on how to use gcov and coverage.py (a tool to get some grip on the massive amounts of data gcov can generate).