IDL 8.4 adds a new routine, `CODE_COVERAGE` ([docs]), which returns information about the lines of a routine that have been executed. Using `CODE_COVERAGE` is fairly straightforward: you do not need to enable code coverage beforehand, just call `CODE_COVERAGE` at any time to find the lines of a routine that have been executed. Note that the routine must have been at least compiled before you call `CODE_COVERAGE` (even if you are only clearing the status of the routine). Also, pay particular attention to the definition of a "line of code" in the [docs]; e.g., empty lines, comments, and `END` statements do not count. Between the return value and the output from the `EXECUTED` keyword, you should get all the lines of code in a routine.

`CODE_COVERAGE` adds another useful developer tool alongside timing routines like `PROFILER`[^1], `TIC`, and `TOC`.

I think `CODE_COVERAGE` has a range of uses, but most interesting for me is the ability to determine the coverage of a unit test suite, i.e., how much of my code base is executed by my test suite? I have already implemented some basic test coverage reporting in my unit testing framework, [mgunit]. For example, mgunit can now tell me that I'm missing coverage of a few lines in the helper routines for `MG_SUBS`:

    "mg_subs_ut" test case starting (5 tests)
      test_basic: passed (0.000223 seconds)
      test_derived: passed (0.000354 seconds)
      test_derived2: passed (0.000369 seconds)
      test_derived_perverse: passed (0.000477 seconds)
      test_not_found: passed (0.000222 seconds)
    Test coverage: 90.5%
      Untested lines
        mg_subs_iter: lines 135
        mg_subs_getvalue: lines 72-73, 79
      Completely covered routines
        mg_subs
    Results: 5 / 5 tests passed, 0 skipped

This means that after the unit tests have been run, line 135 of `MG_SUBS_ITER` and lines 72-73 and 79 of `MG_SUBS_GETVALUE` have not been executed. This is useful (though not complete) information for determining whether you have enough unit tests.
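As a minimal sketch of the mechanics (`MY_ROUTINE` is a hypothetical placeholder; per the behavior described above, the return value lists the unexecuted lines while the `EXECUTED` keyword returns the executed ones):

```idl
; MY_ROUTINE is a hypothetical placeholder routine
.compile my_routine            ; the routine must be compiled first
my_routine                     ; execute some portion of the routine

; return value: lines not executed; EXECUTED keyword: lines executed
untested = code_coverage('my_routine', executed=tested)
print, 'untested lines: ', untested

; clear the coverage status to start a fresh measurement
!null = code_coverage('my_routine', /clear)
```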
Grab [mgunit] from the master branch on GitHub to give it a try (see [mglib] for an example of unit tests that take advantage of it). I'm not sure of the exact format for displaying the results, but I am fairly certain of the mechanism for telling the unit tests which routines they are testing (an `::addTestingRoutine` method). I intend to start using this for the unit tests of my products [GPULib] and [FastDL] soon!

[docs]: "CODE_COVERAGE (IDL Reference)"
[mgunit]: "mgalloy/mgunit"
[mglib]: "mgalloy/mglib"
[GPULib]: "GPULib"
[FastDL]: "FastDL"

[^1]: There is also a `CODE_COVERAGE` keyword to `PROFILER` now that displays the number of lines of a routine that were executed.
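A sketch of how a test case might register the routines it exercises using the `::addTestingRoutine` method mentioned above; the method name comes from the post, but the exact signature (including the `IS_FUNCTION` keyword) is an assumption here:

```idl
; hedged sketch; addTestingRoutine is named above, but its exact
; signature and the IS_FUNCTION keyword are assumptions
function mg_subs_ut::init, _extra=e
  compile_opt strictarr

  if (~self->MGutTestCase::init(_extra=e)) then return, 0

  ; register the routines this test case exercises so mgunit can
  ; report coverage for them after the tests run
  self->addTestingRoutine, ['mg_subs', 'mg_subs_iter', 'mg_subs_getvalue'], $
                           /is_function

  return, 1
end
```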
IDL 8.4 was released today with a slew of new features. Check out [What's New in IDL 8.4] for a list. The most interesting feature for me: code coverage. I am going to explore this a bit to see if I can get [mgunit] to report what code has been tested (and, more importantly, not tested) by your test suite.

Stay tuned for more detailed information about the new features!

[What's New in IDL 8.4]: "What's New in IDL 8.4"
[mgunit]: "mgunit on GitHub"
Paulo Penteado has updated his [Building cross-platform IDL runtime applications] article with an über-installation for IDL 8.3 on all current platforms:

> I created a package for IDL 8.3. It contains all the files in the current manifest_rt.txt file, which cover all the current platforms: Linux (x86_64), Windows (x86 and x86_64), Mac (x86_64) and Solaris (x86_64 and sparc64).

The über-installation ([download]) allows `MAKE_RT` to make all-inclusive IDL runtime applications that work on all platforms.

[Building cross-platform IDL runtime applications]: "Building cross-platform IDL runtime applications"
[download]: "IDL 8.3 uber-installation"
Greg Wilson gave a great [talk] about Software Carpentry at SciPy this year. I think more efforts like the Software Carpentry seminars are greatly needed in science; I've mentioned Software Carpentry [several] times [before].

If you are interested in teaching, he highly recommends the book *[How Learning Works]*. It gives a summary of the current research on learning with links to the primary sources. I wish I had had that when I was teaching.

via [Astronomy Computing Today]

[talk]: "Software Carpentry: Lessons Learned | SciPy 2014 | Greg Wilson"
[several]: "Best practices"
[before]: "Software development for scientists"
[How Learning Works]: "How Learning Works"
[Astronomy Computing Today]:
![Maps of floating plastic][debris-map]

The National Geographic has created new [maps] showing the extent of floating plastic in the ocean:

> Tens of thousands of tons of plastic garbage float on the surface waters in the world's oceans, according to researchers who mapped giant accumulation zones of trash in all five subtropical ocean gyres. Ocean currents act as "conveyor belts," researchers say, carrying debris into massive convergence zones that are estimated to contain millions of plastic items per square kilometer in their inner cores.

Two ships traveled around the world in nine months to collect this data.

via [FlowingData]

[maps]: "First of Its Kind Map Reveals Extent of Ocean Plastic"
[FlowingData]: "Mapping plastic in the ocean"
[debris-map]:
When writing even small applications, it is often necessary to distribute resource files along with your code. For example, images and icons are frequently needed by GUI applications. Custom color table files or fonts might be needed by applications that create visualizations. Defaults might be stored in other data files. But how do you find these files, when the user could have installed your application anywhere on their system?

The answer is to place these files in a directory whose location you know relative to your source code. Then use [MG_SRC_ROOT][^1] (or one of the alternatives to it) to determine the location of your source code. Finally, use `FILEPATH` to specify the location of the resource. For example, in `MG_LOADCT`, I do the following to find the Brewer color tables, which are in the same directory as `MG_LOADCT`:

    ctfilename = filepath('brewer.tbl', root=mg_src_root())

If I had placed the color tables in a *resources* subdirectory parallel to the *code* directory my source code lives in, I could just use the `SUBDIR` keyword:

    ctfilename = filepath('brewer.tbl', subdir=['..', 'resources'], $
                          root=mg_src_root())

`MG_SRC_ROOT` is one of my most used routines[^2]. Get all the source code for [mglib] on GitHub.

[^1]: Before the `SCOPE_TRACEBACK` routine was introduced in IDL 6.2, `MG_SRC_ROOT` had to do ugly things like parse the output from `HELP`.

[^2]: I count 89 uses of `MG_SRC_ROOT` in my library mglib and it is also used in [IDLdoc] and [mgunit].

[MG_SRC_ROOT]: "mglib/ at master"
[mglib]: "mgalloy/mglib"
[IDLdoc]: "mgalloy/idldoc"
[mgunit]: "mgalloy/mgunit"
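For reference, here is a minimal sketch of the `SCOPE_TRACEBACK`-based approach mentioned in the footnote; this is a simplified stand-in for `MG_SRC_ROOT`, not its actual implementation:

```idl
; simplified stand-in for MG_SRC_ROOT: find the directory containing
; the source file of the calling routine
function my_src_root
  compile_opt strictarr

  ; SCOPE_TRACEBACK(/STRUCTURE) returns the call stack; the
  ; second-to-last entry is this function's caller
  traceback = scope_traceback(/structure)
  caller = traceback[n_elements(traceback) - 2L]

  return, file_dirname(caller.filename, /mark_directory)
end
```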
ExelisVIS announced that VISualize 2014 will focus on the following topics:

> Presentations and discussions will focus on topics such as:
>
> * Using new data platforms such as UAS, microsatellites, and SAR sensors
> * Remote sensing solutions for precision agriculture
> * Drought, flood, and extreme precipitation event monitoring and assessment
> * Wildfire and conservation area monitoring, management, mitigation, and planning
> * Monitoring leaks from natural gas pipelines

See the [video] for more information and then [register] or [submit an abstract].

UPDATE 9/18/14: postponed until 2015.

[video]: "Getting Ready for VISualize 2014"
[register]:
[submit an abstract]:
I've been dealing with [HDF 5] files for quite awhile, but IDL's interface was nearly as painful as the C interface. IDL did have `H5_BROWSER` and `H5_PARSE` to make things a bit easier, but these utilities are suited to interactive browsing of a dataset, not to efficient, programmatic access. So I created a [set of routines] for dealing with HDF 5 files that I have been extending, as needed, to other scientific data formats such as netCDF, HDF 4, and IDL savefiles.

For example, `MG_H5_GETDATA` can access variables stored in HDF 5 files:

    f = filepath('hdf5_test.h5', subdir=['examples', 'data'])
    arr = mg_h5_getdata(f, '/arrays/3D int array')

Or slices of variables:

    slice = mg_h5_getdata(f, '/arrays/3D int array[3, 5:*:2, 0:49:3]')

Or attributes:

    attr = mg_h5_getdata(f, '/images/Eskimo.CLASS')

Similarly, `MG_H5_PUTDATA` can create and edit variables in HDF 5 files, while `MG_H5_DUMP` prints a listing of the variables and attributes of a file. I have added corresponding routines for HDF 4, netCDF, and IDL savefiles over the past few years. Since I have been dealing with netCDF files more extensively recently, I have been adding to their routines; most notably, I have created a `LIST` routine which provides an array of the variable/attribute names available in a file. Here are the routines available in [mglib] currently:

| Action    | HDF 4            | HDF 5           | netCDF          | Savefile          |
| --------- | ---------------- | --------------- | --------------- | ----------------- |
| `GETDATA` | `MG_HDF_GETDATA` | `MG_H5_GETDATA` | `MG_NC_GETDATA` | `MG_SAVE_GETDATA` |
| `PUTDATA` | `MG_HDF_PUTDATA` | `MG_H5_PUTDATA` | `MG_NC_PUTDATA` |                   |
| `DUMP`    | `MG_HDF_DUMP`    | `MG_H5_DUMP`    | `MG_NC_DUMP`    | `MG_SAVE_DUMP`    |
| `LIST`    |                  |                 | `MG_NC_LIST`    |                   |

[HDF 5]: "HDF Group - HDF5"
[set of routines]: "Some HDF5 helper routines"
[mglib]: "mgalloy/mglib"
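A sketch of how the `LIST` routine might be called; the filename and the keyword names here are assumptions, so check the `MG_NC_LIST` documentation in mglib for the actual signature:

```idl
; hedged sketch; the file and keyword names are assumptions,
; not the verified MG_NC_LIST API
f = 'sample.nc'   ; hypothetical netCDF file

; list the names of the variables at the root of the file
vars = mg_nc_list(f, /variables)

; list the attribute names of one of the variables
attrs = mg_nc_list(f, 'temperature', /attributes)
```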
Recently, I have been writing a fairly large and generic system for ingesting various satellite images onto a common grid and producing user-specified plots and reports from the results. Control of the system is done via a [configuration file], like [this one], which has been a great, flexible way to let users extend and control the system. But reading the recent IDL Data Point article about Jim Pendleton's [DebuggerHelper] class reminded me how useful a logging framework is for medium to large sized projects.

I use [mg_log] as my logging utility. It is simple to use, but has some powerful features for filtering and customizing output. It has five levels of messages (debug, informational, warning, error, and critical) which are matched against the overall system level. This allows you to filter messages based on severity; e.g., during development you can set the logging level to "debug" so that all messages appear, while later, once you have deployed the system, users may find setting the level to "warning" (which does not show the debug and informational messages) to be more appropriate.

See [this article] to learn more about `mg_log` basics. [This sample log] shows what the typical output looks like, though the format for each line is completely configurable. `mg_log` is available on GitHub in my [mglib] repo.

[configuration file]: "Reading configuration files"
[this one]: "Sample configuration file"
[DebuggerHelper]: "DebuggerHelper - A Handy Debugging Class for IDL Developers"
[mg_log]: "Logging"
[this article]: "Logging"
[This sample log]: "mg_log output"
[mglib]: "mgalloy/mglib"
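A hedged sketch of what logging calls at the different severities might look like; the level keyword names are assumed from the level names above, and `filename` is a hypothetical variable, so see the Logging article for the actual `mg_log` API:

```idl
; hedged sketch; level keywords assumed from the five level names
; above, filename is a hypothetical variable
mg_log, 'starting ingest of %s', filename, name='ingest', /debug
mg_log, 'unexpected grid in %s', filename, name='ingest', /warning
mg_log, 'cannot read %s', filename, name='ingest', /error
```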
I have had multiple occasions where I needed to quickly generate bindings to an existing C library. The repetitive nature of creating these bindings calls out for a tool to automate the process. For this purpose, I have written a class, `MG_DLM`, that allows:

1. creating wrapper bindings for routines from a header prototype declaration (with some limitations from standard C)
2. creating routines which access variables and pound defines
3. adding custom routines written by the developer

I have used `MG_DLM` to create bindings for the [GNU Scientific Library] (GSL), [CULA], [MAGMA], and even IDL itself.

[GNU Scientific Library]: "GSL - GNU Scientific Library"
[CULA]: "CULA"
[MAGMA]: "MAGMA"

#### Define the routines

For our example, let's make some basic random number generation routines from GSL available. The relevant routines from the *gsl_rng.h* header file that we would like to access from IDL are:

    const gsl_rng_type *gsl_rng_env_setup(void);
    gsl_rng *gsl_rng_alloc(const gsl_rng_type *T);
    INLINE_DECL double gsl_rng_uniform(const gsl_rng *r);

We need to write a modified header file that `MG_DLM` will be able to understand. Since the pointers to *gsl_rng_type* and *gsl_rng* are returned by one routine and passed into another, we can just define them as `IDL_PTRINT`. For example, `GSL_RNG_ALLOC` will just return a 64-bit integer that represents a pointer to a *gsl_rng*. This integer can then be passed to `GSL_RNG_UNIFORM`. Our header will be:

    void gsl_rng_env_setup();
    IDL_PTRINT gsl_rng_alloc(IDL_PTRINT t);
    double gsl_rng_uniform(IDL_PTRINT r);

We place these definitions in *mg_gsl_rng_bindings.h* for use when we create the DLM object.

#### Create the DLM

Next, we use `MG_DLM` to define our DLM containing these bindings.
First, create a DLM object with basic metadata:

    dlm = mg_dlm(basename='mg_gsl', $
                 prefix='MG_', $
                 name='mg_gsl', $
                 description='IDL bindings for GSL', $
                 version='1.0', $
                 source='Michael Galloy')

Next, tell the DLM where the include directory and library are:

    dlm->addInclude, 'gsl/gsl_rng.h', $
                     header_directory='/usr/local/include/gsl'
    dlm->addLibrary, 'libgsl.a', $
                     lib_directory='/usr/local/lib', $
                     /static

Then add the routines declared in our modified header file:

    dlm->addRoutinesFromHeaderFile, 'mg_gsl_rng_bindings.h'

For our example, we will also need to access the *gsl_rng_default* variable in the GSL library. In *gsl_rng.h*, it is declared as:

    GSL_VAR const gsl_rng_type *gsl_rng_default;

To add a routine that accesses this variable, we just do:

    dlm->addVariableAccessor, 'gsl_rng_default', type=14L

Finally, tell the DLM to write itself, both the *.c* and *.dlm* files, and then build itself:

    dlm->write
    dlm->build

We are now ready to use our DLM.

#### Example of using the bindings

We can now use our routines to generate random numbers from IDL:

    n = 1000L
    mg_gsl_rng_env_setup
    t = mg_gsl_rng_default()
    r = mg_gsl_rng_alloc(t)

    u = dblarr(n)
    for i = 0L, n - 1L do u[i] = mg_gsl_rng_uniform(r)

Get the code for creating bindings (and a slightly more fleshed out set of GSL bindings) from the [mglib repo].

[mglib repo]: "mgalloy/mglib"
