IDL 8.4 adds a new routine, CODE_COVERAGE (docs), which returns information about the lines of a routine that have been executed. Using CODE_COVERAGE is fairly straightforward: you do not need to enable code coverage beforehand, just call CODE_COVERAGE at any time to find the lines of a routine that have been executed. Note that the routine must have been compiled before you call CODE_COVERAGE (even if you are only clearing the status of the routine). Also, pay particular attention to the definition of a “line of code” in the docs, e.g., empty lines, comments, and END statements do not count. Between the return value and the output from the EXECUTED keyword, you should account for all the lines of code in a routine.
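A minimal session sketch (the routine names here are just examples; see the docs for the full keyword list):

```idl
; the routine must be compiled (and typically run) before querying coverage
.compile mg_subs

; the return value gives the un-executed lines;
; the EXECUTED keyword gives the executed ones
untested = code_coverage('mg_subs_iter', executed=tested)
print, untested

; pass /CLEAR to reset the counters for a routine
!null = code_coverage('mg_subs_iter', /clear)
```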

CODE_COVERAGE adds another useful developer tool alongside the timing routines PROFILER [1], TIC, and TOC. I think CODE_COVERAGE has a range of uses, but most interesting for me is the ability to determine the coverage of a unit test suite, i.e., how much of your code base is executed by your test suite?

I have already implemented some basic test coverage information in my unit testing framework, mgunit. For example, mgunit can now tell me that I’m missing coverage of a few lines in the helper routines for MG_SUBS:

"mg_subs_ut" test case starting (5 tests)
  test_basic: passed (0.000223 seconds)
  test_derived: passed (0.000354 seconds)
  test_derived2: passed (0.000369 seconds)
  test_derived_perverse: passed (0.000477 seconds)
  test_not_found: passed (0.000222 seconds)
Test coverage: 90.5%
  Untested lines
    mg_subs_iter: lines 135
    mg_subs_getvalue: lines 72-73, 79
  Completely covered routines
Results: 5 / 5 tests passed, 0 skipped

This means that after the unit tests have been run, line 135 of MG_SUBS_ITER and lines 72-73 and 79 of MG_SUBS_GETVALUE have not been executed. This is useful (though not complete) information for determining whether you have enough unit tests. Grab mgunit from the master branch on GitHub to give it a try (see mglib for an example of unit tests that take advantage of it). I’m not sure of the exact format for displaying the results, but I am fairly certain of the mechanism for telling the unit tests which routines they are testing (an ::addTestingRoutine method). I intend to start using this for the unit tests of my products GPULib and FastDL soon!
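Registering the routines under test looks something like the following in a test case’s init method. This is a hedged sketch: the base class name, the compile_opt usage, and the /IS_FUNCTION keyword reflect my understanding of mgunit, not a definitive API reference.

```idl
function mg_subs_ut::init, _extra=e
  compile_opt strictarr

  if (~self->MGutTestCase::init(_extra=e)) then return, 0

  ; tell mgunit which routines this test case exercises, so it can
  ; report coverage for them after the tests run
  self->addTestingRoutine, ['mg_subs_iter', 'mg_subs_getvalue']
  self->addTestingRoutine, 'mg_subs', /is_function

  return, 1
end
```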

  1. There is also a CODE_COVERAGE keyword to PROFILER now that displays the number of lines of a routine that were executed. 

IDL 8.4 was released today with a slew of new features. Check out What’s New in IDL 8.4 for a list of the new features.

Most interesting feature for me: code coverage. I am going to explore this a bit to see if I can get mgunit to report what code has been tested (and, more importantly, not tested) by your test suite.

Stay tuned for more detailed information about the new features!

Paulo Penteado has updated his Building cross-platform IDL runtime applications article with an über-installation for IDL 8.3 on all current platforms:

I created a package for IDL 8.3. It contains all the files in the current manifest_rt.txt file, which cover all the current platforms: Linux (x86_64), Windows (x86 and x86_64), Mac (x86_64) and Solaris (x86_64 and sparc64).

The über-installation (download) allows MAKE_RT to make all-inclusive IDL runtime applications that work on all platforms.
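With the package in place, a MAKE_RT call along these lines produces the runtime distribution. This is only a sketch: the application name and paths are placeholders, and the platform keyword names should be checked against the MAKE_RT docs for your IDL version.

```idl
; build a runtime distribution of a hypothetical app for several
; platforms at once; MAKE_RT copies the runtime files listed in
; manifest_rt.txt into the output directory
make_rt, 'myapp', '/tmp/dist', savefile='myapp.sav', $
         /vm, /win32, /lin64, /macint64
```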

Greg Wilson gave a great talk about Software Carpentry at SciPy this year. I think more efforts like the Software Carpentry seminars are greatly needed in science — I’ve mentioned Software Carpentry several times before.

If you are interested in teaching, he highly recommends the book How Learning Works. It gives a summary of the current research in learning with links to the primary sources. I wish I had that when I was teaching.

via Astronomy Computing Today

Maps of floating plastic

The National Geographic has created new maps showing the extent of floating plastic in the ocean:

Tens of thousands of tons of plastic garbage float on the surface waters in the world’s oceans, according to researchers who mapped giant accumulation zones of trash in all five subtropical ocean gyres. Ocean currents act as “conveyor belts,” researchers say, carrying debris into massive convergence zones that are estimated to contain millions of plastic items per square kilometer in their inner cores.

Two ships traveled the world over nine months to collect this data.

via FlowingData

When writing even small applications, it is often necessary to distribute resource files along with your code. For example, images and icons are frequently needed by GUI applications. Custom color table files or fonts might be needed by applications that create visualizations. Defaults might be stored in other data files. But how do you find these files, when the user could have installed your application anywhere on their system?
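One common approach (a sketch, not necessarily the article’s solution; the routine and file names are illustrative) is to locate resources relative to the routine’s own source file:

```idl
; find the directory containing this routine's .pro file, then build
; paths to resources relative to it
src_dir = file_dirname(routine_filepath('myapp_gui'))
icon_file = filepath('icon.png', root_dir=src_dir, subdirectory='resources')
```

This works no matter where the user installed the application, since the resources travel with the source (or savefile) directory.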

Continue reading “Finding your resource files.”

ExelisVIS announced VISualize 2014:

Presentations and discussions will focus on topics such as:

  • Using new data platforms such as UAS, microsatellites, and SAR sensors
  • Remote sensing solutions for precision agriculture
  • Drought, flood, and extreme precipitation event monitoring and assessment
  • Wildfire and conservation area monitoring, management, mitigation, and planning
  • Monitoring leaks from natural gas pipelines

See the video for more information and then register or submit an abstract.

UPDATE 9/18/14: postponed until 2015.

I’ve been dealing with HDF 5 files for quite a while, but the IDL interface was nearly as painful as the C interface. IDL does have H5_BROWSER and H5_PARSE to make things a bit easier, but these utilities are intended for interactive browsing of a dataset, not for efficient, programmatic access. I created a set of routines for dealing with HDF 5 files that I have been extending, as needed, to other scientific data formats such as netCDF, HDF 4, and IDL savefiles.

Continue reading “Scientific data file format routines.”

Recently, I have been writing a fairly large and generic system for ingesting various satellite images onto a common grid and producing user-specified plots and reports from the results. Control of the system is done via a configuration file, like this one, which has been a great, flexible way to let users extend and control the system. But reading the recent IDL Data Point article about Jim Pendleton’s DebuggerHelper class reminded me how useful a logging framework is for medium to large sized projects.

I use mg_log as my logging utility. It is simple to use, but has some powerful features for filtering and customizing output. Each message has one of five levels (debug, informational, warning, error, and critical), which is checked against the logger’s current level. This allows you to filter messages based on severity, e.g., during development you can set the logging level to “debug” so that all messages appear. Later, when you have deployed the system, users may find setting the level to “warning” (which suppresses the debug and informational messages) to be more appropriate.

See this article to learn more about mg_log basics. This sample log shows what the typical output looks like, though the format for each line is completely configurable. mg_log is available on GitHub in my mglib repo.
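Basic usage looks roughly like this. The logger name, message text, and level keyword spellings are my assumptions for illustration; check the mg_log source in mglib for the exact keyword names.

```idl
; each message carries a level keyword; mg_log accepts C-style
; format strings with arguments, so values can be interpolated
mg_log, 'starting ingest of %d files', n_files, name='myapp', /info
mg_log, 'bad header in %s', filename, name='myapp', /warning
mg_log, 'grid resolution: %f', resolution, name='myapp', /debug
```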

I have had multiple occasions where I needed to quickly generate bindings to an existing C library. The repetitive nature of creating these bindings calls out for a tool to automate the process. For this purpose, I have written a class, MG_DLM, that allows:

  1. creating wrapper bindings for routines from a header prototype declaration (with some limitations from standard C)
  2. creating routines which access variables and #define constants
  3. adding custom routines written by the developer

I have used MG_DLM to create bindings for the GNU Scientific Library (GSL), CULA, MAGMA, and even IDL itself.
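The workflow is roughly the following. This is a sketch only: the method names and keywords are illustrative assumptions, not necessarily MG_DLM’s actual API.

```idl
; generate, write, and build a DLM wrapping a C library
dlm = obj_new('MG_DLM', basename='mg_gsl', prefix='GSL_')
dlm->addRoutinesFromHeaderFile, 'gsl_sf_bessel.h'   ; step 1 above
dlm->write          ; emit the generated .c and .dlm files
dlm->build          ; compile the shared library
obj_destroy, dlm
```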

Continue reading “Automatically generating IDL bindings.”
