Category "mgunit"


[mgunit][repo] 1.5 has been released! New features include:

* Passing keywords to `MGUNIT` down to `MGutTestCase` subclasses.

* Reporting coverage of tested routines.

* Adding Cobertura output option.

* Allowing up to 8 arguments for substituting into `ASSERT` error messages (see the sketch below).
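
For example, the substitution arguments let an assertion report the actual values involved. A minimal sketch (the test class and the C-style format codes here are assumptions, following IDL's `STRING`-style formatting):

    function example_ut::test_substitution
      compile_opt idl2

      expect = 4
      result = 2 + 2

      ; extra arguments after the message are substituted into it
      assert, result eq expect, 'wrong sum: expected %d, got %d', expect, result

      return, 1
    end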

I am most excited about the code coverage reporting in this version, which makes use of an IDL 8.4 feature. I gave an example of doing this in this [post].

You can [download] a distribution with a `.sav` file and documentation, or just access the [repo] as needed.

[repo]: https://github.com/mgalloy/mgunit/ "mgalloy/mgunit"
[download]: https://github.com/mgalloy/mgunit/wiki/Releases "mgunit releases"
[post]: http://michaelgalloy.com/2015/05/07/mgunit-unit-testing-in-idl.html "mgunit: unit testing in IDL"

In this week's article, the [IDL Data Point] described a simple system for testing an IDL routine. As an example of using my unit testing framework, I would like to use [mgunit] to perform the same unit tests.

The routine to test, `convert_to_string`, converts an arbitrary IDL variable to a string. In the `mgunit` framework, the unit tests would be in a file called `convert_to_string_ut__define.pro`. It is simple to translate them to `mgunit` tests:

1. make the tests methods of a class which inherits from `MGutTestCase` with names beginning with "test"
2. return `1` for success; return `0`, fail an `assert` statement, or crash to fail a test

Here are the new tests:

    function convert_to_string_ut::test_number
      compile_opt idl2

      input = 1
      expect = '1'
      result = convert_to_string(input)

      assert, result eq expect, 'Converting number failed.'
      return, 1
    end

    function convert_to_string_ut::test_null
      compile_opt idl2

      input = !NULL
      expect = '!NULL'
      result = convert_to_string(input)

      assert, result eq expect, 'Converting !null failed.'
      return, 1
    end

    function convert_to_string_ut::test_object
      compile_opt idl2

      input = hash('a', 1, 'b', 2, 'c', 3)
      expect = '{"c":3,"a":1,"b":2}'
      result = convert_to_string(input)

      assert, result eq expect, 'Converting object failed.'
      return, 1
    end

    pro convert_to_string_ut__define
      compile_opt idl2

      define = { convert_to_string_ut, inherits MGutTestCase }
    end

It is easy to run the tests:

    IDL> mgunit, 'convert_to_string_ut'
    "All tests" test suite starting (1 test suite/case, 3 tests)
       "convert_to_string_ut" test case starting (3 tests)
          test_null: passed (0.000160 seconds)
          test_number: passed (0.000007 seconds)
          test_object: failed "Converting object failed." (0.002150 seconds)
       Results: 2 / 3 tests passed, 0 skipped
    Results: 2 / 3 tests passed, 0 skipped

Great! We have found a problem with our routine and can fix it now.

Is that it? Do we need more tests? Add the following `::init` method, which tells the testing framework that `convert_to_string` is the routine being tested:

    function convert_to_string_ut::init, _extra=e
      compile_opt strictarr

      if (~self->MGutTestCase::init(_extra=e)) then return, 0

      self->addTestingRoutine, 'convert_to_string', /is_function

      return, 1
    end

Running the tests again now also tells us whether we have exercised all the lines of our routine:

    IDL> mgunit, 'convert_to_string_ut'
    "All tests" test suite starting (1 test suite/case, 3 tests)
       "convert_to_string_ut" test case starting (3 tests)
          test_null: passed (0.000009 seconds)
          test_number: passed (0.000007 seconds)
          test_object: failed "Converting object failed." (0.002152 seconds)
          Test coverage: 75.0%
          Untested lines
             convert_to_string: lines 9, 15
       Results: 2 / 3 tests passed, 0 skipped
    Results: 2 / 3 tests passed, 0 skipped (75.0% coverage)

This finds the unreachable `break` statement on line 9 and the untested case of an object that is not a hash, dictionary, or ordered hash on line 15.
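
For example, line 15 could be exercised by a test that passes a non-hash object such as a list. A sketch (the expected string is an assumption about how `convert_to_string` formats lists):

    function convert_to_string_ut::test_list
      compile_opt idl2

      input = list(1, 2)
      result = convert_to_string(input)

      ; '[1,2]' is a guess at the output format for a list
      assert, result eq '[1,2]', 'Converting list failed.'
      return, 1
    end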

What are the other features of `mgunit`? Basic features include:

1. collect many test cases into suites and run them all with one command
2. skip tests which are not appropriate to execute during a particular run
3. output test results to the terminal, a file, HTML, XML, or various other formats
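
For example, several test cases can be run together as a suite, and results can be written to a file (the `/html` keyword name here is an assumption; `FILENAMES` appears in the 1.4.0 release notes):

    ; run several test cases together
    IDL> mgunit, ['convert_to_string_ut', 'mg_subs_ut']

    ; send HTML results to a file
    IDL> mgunit, 'convert_to_string_ut', filenames='results.html', /html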

Check it out on [GitHub][mgunit].

UPDATE 5/9/15: Also, you can download the example [routine] and [unit tests].

[mgunit]: https://github.com/mgalloy/mgunit/ "mgalloy/mgunit"
[IDL Data Point]: http://www.exelisvis.com/Company/PressRoom/Blogs/TabId/836/ArtMID/2928/ArticleID/14426/Unit-Testing-in-IDL.aspx?utm_source=twitter&utm_medium=social&utm_campaign=blog "Unit testing in IDL"
[unit tests]: http://michaelgalloy.com/wp-content/uploads/2015/05/convert_to_string_ut__define.pro "convert_to_string unit tests"
[routine]: http://michaelgalloy.com/wp-content/uploads/2015/05/convert_to_string.pro "convert_to_string"

IDL 8.4 adds a new routine, `CODE_COVERAGE` ([docs]), which returns information about the lines of a routine that have been executed. Using `CODE_COVERAGE` is fairly straightforward: you do not need to enable code coverage, just call `CODE_COVERAGE` at any time to find the lines of a routine that have been executed. Note that the routine must have been at least compiled before you call `CODE_COVERAGE` (even if you are only clearing the status of the routine). Also, pay particular attention to the definition of a "line of code" in the [docs]; e.g., empty lines, comments, and `END` statements do not count. Between the return value and the output from the `EXECUTED` keyword, you should get all the lines of code in a routine.
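
A minimal sketch of querying coverage directly (as described above, the return value lists the un-executed lines while the `EXECUTED` keyword gives the executed ones; the `/function` keyword reflects my reading of the [docs]):

    ; compile and exercise the routine, then ask which lines ran
    IDL> .compile convert_to_string
    IDL> result = convert_to_string(1)
    IDL> untested = code_coverage('convert_to_string', executed=executed, /function)
    IDL> print, untested    ; line numbers never executed
    IDL> print, executed    ; line numbers that have been executed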

`CODE_COVERAGE` adds another useful developer tool alongside the timing routines like `PROFILER`[^1], `TIC`, and `TOC`. I think `CODE_COVERAGE` has a range of uses, but the most interesting for me is the ability to determine the coverage of your unit test suite, i.e., how much of your code base is executed by your tests?

I have already implemented some basic test coverage information in my unit testing framework, [mgunit]. For example, mgunit can now tell me that I'm missing coverage of a few lines in the helper routines for `MG_SUBS`:

"mg_subs_ut" test case starting (5 tests)
test_basic: passed (0.000223 seconds)
test_derived: passed (0.000354 seconds)
test_derived2: passed (0.000369 seconds)
test_derived_perverse: passed (0.000477 seconds)
test_not_found: passed (0.000222 seconds)
Test coverage: 90.5%
Untested lines
mg_subs_iter: lines 135
mg_subs_getvalue: lines 72-73, 79
Completely covered routines
mg_subs
Results: 5 / 5 tests passed, 0 skipped

This means that after the unit tests have been run, line 135 of `MG_SUBS_ITER` and lines 72-73 and 79 of `MG_SUBS_GETVALUE` have not been executed. This is useful (though not complete) information for determining whether you have enough unit tests. Grab [mgunit] from the master branch on GitHub to give it a try (see [mglib] for an example of unit tests that take advantage of it). I'm not sure of the exact format for displaying the results, but I am fairly certain of the mechanism for telling the unit tests which routines they are testing (an `::addTestingRoutine` method). I intend to start using this for the unit tests of my products [GPULib] and [FastDL] soon!
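
A sketch of that mechanism for the `MG_SUBS` tests above (assuming the helper routines are functions):

    function mg_subs_ut::init, _extra=e
      compile_opt strictarr

      if (~self->MGutTestCase::init(_extra=e)) then return, 0

      ; register the routine under test and its helpers for coverage tracking
      self->addTestingRoutine, 'mg_subs', /is_function
      self->addTestingRoutine, 'mg_subs_iter', /is_function
      self->addTestingRoutine, 'mg_subs_getvalue', /is_function

      return, 1
    end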

[docs]: http://www.exelisvis.com/docs/CODE_COVERAGE.html "CODE_COVERAGE (IDL Reference)"
[mgunit]: http://github.com/mgalloy/mgunit "mgalloy/mgunit"
[mglib]: http://github.com/mgalloy/mglib "mgalloy/mglib"
[GPULib]: http://www.txcorp.com/home/gpulib "GPULib"
[FastDL]: http://www.txcorp.com/fastdl "FastDL"

[^1]: There is also a `CODE_COVERAGE` keyword to `PROFILER` now that displays the number of lines of a routine that were executed.

[mgunit] 1.4.0 has been released! Download available on the [Releases] page. mgunit is an easy-to-use unit testing framework for IDL.

New in this release:

* Checks for updates when using the `VERSION` keyword.

* Fix for test cases with no valid tests.

* Added `RUNNERS` keyword and allowed `FILENAMES` to accept a string array so that
  output can be sent to multiple destinations.

In particular, I'm excited about the idea of checking for updates (suggested at my recent [mgunit talk], thanks Don!), and I'm adding the ability to check for updates to my projects. To check whether there are updates, simply do:

    IDL> mgunit, /version
    mgunit 1.4.0
    No updates available

Not too useful until I release the next version, at which point it will look like:

    IDL> mgunit, /version
    mgunit 1.4.0
    Updates available: 1.5.0

Opinions on checking more aggressively, i.e., during normal running of tests? I have held off on that since it seems intrusive. I also have the ability in my updater code to list the features and bug fixes available in new releases.

[mgunit]: https://github.com/mgalloy/mgunit "mgalloy/mgunit"
[Releases]: https://github.com/mgalloy/mgunit/wiki/Releases "Releases - mgalloy/mgunit Wiki"
[mgunit talk]: http://michaelgalloy.com/2013/11/11/unit-testing-idl.html "mgunit: Unit testing with IDL"

I spoke at [LASP] today about [mgunit]:

> Unit testing is a technique to automate testing individual units of code. It is generally considered to have many advantages including helping find problems early, providing safety for making changes, and providing examples of code usage. mgunit is an open source unit testing framework for IDL with the goal of making testing IDL code quick and easy. It uses reasonable defaults and simple naming conventions to eliminate boilerplate. We discuss the features of mgunit, provide examples of unit tests, and give a list of best practices for using it in your project.

Here are the [slides].

[LASP]: http://lasp.colorado.edu/home "Laboratory for Atmospheric and Space Physics"
[mgunit]: https://github.com/mgalloy/mgunit "Github: mgalloy/mgunit"
[slides]: http://michaelgalloy.com/wp-content/uploads/2013/11/testing-with-idl.pdf "Unit Testing with IDL slides"

mgunit 1.3.0 has been [released][releases]! mgunit makes unit testing your IDL code easy by removing most of the repetitive boilerplate from your test code. Here are the release notes for 1.3.0:

* Added optional `OUTPUT` keyword to tests which, if present and set during the
  test, is echoed to the output.
* Added utilities to help test GUI applications.
* Updated "Using mgunit" documentation.
* Added `error_is_skip.pro` batch file with optional variable `MGUNIT_SKIP_MSG`,
  which can be set prior to including `error_is_skip.pro` to give a message that
  will be used if an error causes the skip (see the sketch after this list).
* Updated look for HTML output.
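
A sketch of how I expect the skip batch file to be used inside a test (the test class and DLM name are hypothetical):

    function my_dlm_ut::test_load
      compile_opt strictarr

      ; any error after the include causes a skip with this message
      mgunit_skip_msg = 'required DLM not available'
      @error_is_skip

      dlm_load, 'MY_DLM'   ; hypothetical DLM that may not be installed

      return, 1
    end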

See the [releases] page on the [Github repo] to download.

[releases]: https://github.com/mgalloy/mgunit/wiki/Releases "mgunit releases"
[Github repo]: https://github.com/mgalloy/mgunit "mgunit on Github"

[example-main]: http://idldatapoint.com/2012/05/03/the-merits-of-an-example-main-program "IDL Data Point: The merits of an example main program"
[doctest]: http://docs.python.org/library/doctest.html "Python doctest module"

The IDL Data Point recently had an [article][example-main] about writing a simple main-level program at the bottom of each file to give an example or test of the routine(s) in the file. I like this idea and have been doing it for quite a while, but one annoyance of this approach is that I also typically want the code to be included in the documentation for the routine, so I end up copy-and-pasting the code into the examples section of the docs (and, of course, reformatting it for the doc syntax). Also, the main-level program is just a place to put some code; I have to do all the work myself if I actually want to write multiple pass/fail tests.
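
A minimal sketch of the pattern with a hypothetical `my_routine.pro`; the main-level program at the bottom of the file runs with `.run my_routine`:

    function my_routine, x
      compile_opt strictarr

      return, 2 * x
    end

    ; main-level example program
    print, my_routine(21)

    end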


Finding severe bugs after a release is 100 times more expensive than finding them before (2x for non-critical bugs), so how can more bugs be found before release? This [article][testing] suggests combining multiple testing/review methods, particularly adding code review:

> More generally, inspections are a cheaper method of finding bugs than testing; according to Basili and Selby (1987), code reading detected 80 percent more faults per hour than testing, even when testing programmers on code that contained zero comments.

So grab [mgunit][mgunit] and write some unit/regression/system tests; get a friend and have her look over your code; have a beta test period; etc.

[testing]: http://kev.inburke.com/kevin/the-best-ways-to-find-bugs-in-your-code/ "The best ways to find bugs in your code"
[mgunit]: http://mgunit.idldev.com "mgunit Trac site"