Category "mgunit"


mgunit 1.5 has been released! New features include:

  • Passing keywords given to MGUNIT down to MGutTestCase subclasses.
  • Reporting coverage of tested routines.
  • Adding Cobertura output option.
  • Allowing up to 8 arguments for substituting into the ASSERT error message (see the example below).
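
For instance, a failure message can now include the actual and expected values. This is just a sketch: I am assuming the message is treated as a C-style format string with the extra arguments substituted in, so check the ASSERT docs for the exact behavior.

assert, result eq expect, 'Converting number failed: expected "%s", got "%s"', expect, result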

I am most excited about the code coverage reporting in this version, which makes use of an IDL 8.4 feature. I gave an example of doing this in this post.

You can download a distribution with a .sav file and documentation, or just access the repo as needed.

This week’s IDL Data Point article described a simple system for testing an IDL routine. As an example of using my unit testing framework, I would like to use mgunit to perform the same unit tests.

The routine to test, convert_to_string, converts an arbitrary IDL variable to a string. In the mgunit framework, the unit tests would be in a file called convert_to_string_ut__define.pro. It is simple to translate them to mgunit tests:

  1. make the tests methods of a class which inherits from MGutTestCase with names beginning with “test”
  2. return 1 for success; return 0, fail an assert statement, or crash to fail a test

Here are the new tests:

function convert_to_string_ut::test_number
  compile_opt idl2
  input = 1
  expect = '1'
  result = convert_to_string(input)
  assert, result eq expect, 'Converting number failed.'
  return, 1
end

function convert_to_string_ut::test_null
  compile_opt idl2
  input = !NULL
  expect = '!NULL'
  result = convert_to_string(input)
  assert, result eq expect, 'Converting !null failed.'
  return, 1
end

function convert_to_string_ut::test_object
  compile_opt idl2
  input = hash('a',1,'b',2,'c',3)
  expect = '{"c":3,"a":1,"b":2}'
  result = convert_to_string(input)
  assert, result eq expect, 'Converting object failed.'
  return, 1
end

pro convert_to_string_ut__define
  compile_opt idl2

  define = { convert_to_string_ut, inherits MGutTestCase }
end

It is easy to run the tests:

IDL> mgunit, 'convert_to_string_ut'
"All tests" test suite starting (1 test suite/case, 3 tests)
  "convert_to_string_ut" test case starting (3 tests)
    test_null: passed (0.000160 seconds)
    test_number: passed (0.000007 seconds)
    test_object: failed "Converting object failed." (0.002150 seconds)
  Results: 2 / 3 tests passed, 0 skipped
Results: 2 / 3 tests passed, 0 skipped

Great! We have found a problem with our routine and can fix it now.

Is that it? Do we need more tests? Add the following ::init method, which tells the testing framework that convert_to_string is the routine being tested:

function convert_to_string_ut::init, _extra=e
  compile_opt strictarr
  if (~self->MGutTestCase::init(_extra=e)) then return, 0
  self->addTestingRoutine, 'convert_to_string', /is_function
  return, 1
end

Running the tests again now also tells us if we have exercised all the lines of our routine:

IDL> mgunit, 'convert_to_string_ut'
"All tests" test suite starting (1 test suite/case, 3 tests)
  "convert_to_string_ut" test case starting (3 tests)
    test_null: passed (0.000009 seconds)
    test_number: passed (0.000007 seconds)
    test_object: failed "Converting object failed." (0.002152 seconds)
    Test coverage: 75.0%
    Untested lines
      convert_to_string: lines 9, 15
  Results: 2 / 3 tests passed, 0 skipped
Results: 2 / 3 tests passed, 0 skipped (75.0% coverage)

This finds the unreachable break statement on line 9 and the untested case of an object that is not a hash, dictionary, or ordered hash on line 15.
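
To cover that second case, a test along these lines could be added. This is only a sketch: the expected result for a non-hash object depends on how convert_to_string handles it, so the test below just checks that a string comes back rather than comparing against a specific value.

function convert_to_string_ut::test_object_nonhash
  compile_opt idl2

  ; a list is an object, but not a hash, dictionary, or ordered hash
  input = list(1, 2, 3)
  result = convert_to_string(input)

  ; only check that some string is returned; the exact value depends on the
  ; implementation of convert_to_string
  assert, size(result, /type) eq 7, 'Converting non-hash object failed.'

  return, 1
end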

What are the other features of mgunit? Basic features include:

  1. collect many test cases into suites and run them all with one command
  2. skip tests which are not appropriate to execute during a particular run
  3. output test results to the terminal, a file, HTML, XML, or various other formats (see the example below)
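
For example, running a couple of test cases together and sending the results to an HTML file looks roughly like the following. This is a sketch from memory: I am assuming the FILENAME and HTML keyword names here, and mg_subs_ut just stands in for a second test case.

IDL> mgunit, ['convert_to_string_ut', 'mg_subs_ut'], filename='test-results.html', /html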

Check it out on GitHub.

UPDATE 5/9/15: Also, you can download the example routine and unit tests.

IDL 8.4 adds a new routine, CODE_COVERAGE (docs), which returns information about the lines of a routine that have been executed. Using CODE_COVERAGE is fairly straightforward: you do not need to enable code coverage ahead of time, just call CODE_COVERAGE at any time to find the lines of a routine that have been executed. Note that the routine must have been at least compiled before you call CODE_COVERAGE (even if you are just clearing its status). Also, pay particular attention to the definition of a “line of code” in the docs, e.g., empty lines, comments, and END statements do not count. Between the return value and the output from the EXECUTED keyword, you should get all the lines of code in a routine.
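
As a minimal sketch, a session querying coverage for MG_SUBS might look like the following. The EXECUTED and CLEAR keywords are described above; the FUNCTION keyword is my assumption for indicating that the routine is a function rather than a procedure, so double-check the docs.

IDL> .compile mg_subs
IDL> ; ... exercise the routine here, e.g., by running its unit tests ...
IDL> untested = code_coverage('mg_subs', /function, executed=executed)
IDL> print, untested   ; lines not yet executed
IDL> print, executed   ; lines that have been run
IDL> dummy = code_coverage('mg_subs', /function, /clear)   ; reset the status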

CODE_COVERAGE adds another useful developer tool alongside timing routines like PROFILER[1], TIC, and TOC. I think CODE_COVERAGE has a range of uses, but the most interesting for me is the ability to determine the coverage of a unit test suite, i.e., how much of my code base is executed by my test suite?

I have already implemented some basic test coverage information in my unit testing framework, mgunit. For example, mgunit can now tell me that I’m missing coverage of a few lines in the helper routines for MG_SUBS:

"mg_subs_ut" test case starting (5 tests)
  test_basic: passed (0.000223 seconds)
  test_derived: passed (0.000354 seconds)
  test_derived2: passed (0.000369 seconds)
  test_derived_perverse: passed (0.000477 seconds)
  test_not_found: passed (0.000222 seconds)
  Test coverage: 90.5%
  Untested lines
    mg_subs_iter: lines 135
    mg_subs_getvalue: lines 72-73, 79
  Completely covered routines
    mg_subs
Results: 5 / 5 tests passed, 0 skipped

This means that after the unit tests have been run, line 135 of MG_SUBS_ITER and lines 72-73 and 79 of MG_SUBS_GETVALUE have not been executed. This is useful (though not complete) information for determining whether you have enough unit tests. Grab mgunit from the master branch on GitHub to give it a try (see mglib for an example of unit tests that take advantage of it). I’m not sure of the exact format for displaying the results, but I am fairly certain of the mechanism for telling the unit tests which routines they are testing (an ::addTestingRoutine method). I intend to start using this for the unit tests of my products GPULib and FastDL soon!
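
For reference, registering the routine under test and its helpers is just a matter of a few ::addTestingRoutine calls in the test case’s ::init method. A sketch for the MG_SUBS case might look like the following; I am assuming here that the helpers are functions, hence the IS_FUNCTION keyword.

function mg_subs_ut::init, _extra=e
  compile_opt strictarr

  if (~self->MGutTestCase::init(_extra=e)) then return, 0

  ; register the routine under test along with its helper routines so that
  ; coverage is reported for all of them
  self->addTestingRoutine, 'mg_subs', /is_function
  self->addTestingRoutine, 'mg_subs_iter', /is_function
  self->addTestingRoutine, 'mg_subs_getvalue', /is_function

  return, 1
end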


  1. There is also a CODE_COVERAGE keyword to PROFILER now that displays the number of lines of a routine that were executed.

mgunit 1.4.0 has been released! Download available on the Releases page. mgunit is an easy to use unit testing framework for IDL.

New in this release:

  • Checks for updates when using the VERSION keyword.
  • Fix for test cases with no valid tests.
  • Added RUNNERS keyword and allowed FILENAMES to accept a string array so that output can be sent to multiple destinations.

In particular, I’m excited about the idea of checking for updates (suggested at my recent mgunit talk, thanks Don!), and I’m adding the ability to check for updates to my other projects as well. To check whether there are updates, simply do:

IDL> mgunit, /version
mgunit 1.4.0
No updates available

Not too useful until I release the next version, at which point it will look like:

IDL> mgunit, /version
mgunit 1.4.0
Updates available: 1.5.0

Opinions on checking more aggressively, i.e., during normal running of tests? I have held off on that since it seems intrusive. I also have the ability in my updater code to list the features/bug fixes available in new releases.

I spoke at LASP today about mgunit:

Unit testing is a technique to automate testing individual units of code. It is generally considered to have many advantages including helping find problems early, providing safety for making changes, and providing examples of code usage. mgunit is an open source unit testing framework for IDL with the goal of making testing IDL code quick and easy. It uses reasonable defaults and simple naming conventions to eliminate boilerplate. We discuss the features of mgunit, provide examples of unit tests, and give a list of best practices for using it in your project.

Here are the slides.

mgunit 1.3.0 has been released! mgunit makes unit testing your IDL code easy by removing most of the repetitive boilerplate code from your testing code. Here are the release notes for 1.3.0:

  • Added optional OUTPUT keyword to tests which, if present and set during the test, is echoed to the output.
  • Added utilities to help test GUI applications.
  • Updated “Using mgunit” documentation.
  • Added an error_is_skip.pro batch file; the optional variable MGUNIT_SKIP_MSG can be set to a message, prior to including error_is_skip.pro, that will be used if an error causes the skip (see the example after this list).
  • Updated look for HTML output.
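
As a sketch of the new skip behavior (the data file and helper routine below are made up for illustration), a test can set MGUNIT_SKIP_MSG and then include the batch file, so that an error in the setup code registers as a skip rather than a failure:

function mydata_ut::test_external_data
  compile_opt strictarr

  ; errors after the include below cause a skip (with this message) instead of
  ; a failure
  mgunit_skip_msg = 'external data file not available'
  @error_is_skip

  data = read_external_dataset('dataset.sav')   ; hypothetical helper routine
  assert, n_elements(data) gt 0, 'empty dataset'

  return, 1
end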

See the releases page on the GitHub repo to download.

The IDL Data Point recently had an article about writing a simple main-level program at the bottom of each file to give an example or test of the routine(s) in the file. I like this idea and have been doing it for quite a while, but one of the annoyances of this approach is that I also typically want the code to be included in the documentation for the routine, so I end up copying and pasting the code into the examples section of the docs (and, of course, reformatting it for the doc syntax). Also, the main-level program is just a place to put some code; I have to do all the work myself if I actually want to write multiple pass/fail tests.

Python’s doctest module provides a solution to these problems for Python. In your docs, you give a command and its expected output:

>>> [factorial(n) for n in range(6)]
[1, 1, 2, 6, 24, 120]

Then, in the main-level program, you say something like (I’ve omitted some Python details for simplicity here):

doctest.testmod()

When you do the equivalent of running the main-level program, the docs are checked for “>>>” lines which are executed and compared to the output shown in the docs.

It seems like there are several solutions that could be implemented in IDLdoc or mgunit to solve this problem.

  1. Create a doctest routine for IDL. This would mean creating a routine like doctest.testmod() that runs all the lines in the documentation that begin with “IDL>” and compares the results to the output shown below each “IDL>” line (see the sketch below).

Advantages: Can be used to easily run tests and compare results to provided output. Test commands and output are included in documentation, but are not present twice in the file.

Disadvantages: None?
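
To make the idea concrete, the docs for a routine might contain a transcript like the following (purely hypothetical: MG_FACTORIAL and the exact formatting are placeholders), which an MG_DOCTEST routine would execute and compare against the output shown:

;+
; Compute n!.
;
; :Examples:
;    For example::
;
;       IDL> print, mg_factorial(5)
;                120
;-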

  2. Alternatively, create an IDLdoc include directive, like ..include:: [main-level], that includes the contents of the main-level program of the current file. This could be used in the examples section of the docs to include the main-level program.

Advantages: Could also be used to include other files which might be useful in other situations.

Disadvantages: For files with multiple routines, the entire main-level program would be included at that location, even if it addresses several of the routines. It also doesn’t help with testing.

  3. Allow IDLdoc to do the testing by creating a tests section that would be executed when IDLdoc is run.

Advantages: Can be used to easily run tests and compare results to provided output. Test results appear in the IDLdoc output.

Disadvantages: Need to run IDLdoc to run the tests.

One nice thing here is that several of these strategies could be implemented. Any ideas or suggestions on what you would like to see? Creating an MG_DOCTEST routine seems like the strongest solution right now, but am I missing something?

Finding severe bugs after a release is 100x more expensive than finding them before (2x for non-critical bugs), so how can more bugs be found before release? This article suggests combining multiple testing/review methods, particularly adding code review:

More generally, inspections are a cheaper method of finding bugs than testing; according to Basili and Selby (1987), code reading detected 80 percent more faults per hour than testing, even when testing programmers on code that contained zero comments.

So grab mgunit and write some unit/regression/system tests; get a friend and have her look over your code; have a beta test period; etc.