For some time, I have had a need for an easy way to make use of all of the cores of a single machine through multiple threads or processes, i.e., not a SIMD/vectorized paradigm. The IDL_IDLBridge is capable of doing this, but setup and usage is fairly painful. To make it easier, I have created a simple multicore library for IDL.

Continue reading “A simple multicore library for IDL.”

mgunit 1.5 has been released! New features include:

  • Passing keywords given to MGUNIT down to MGutTestCase subclasses.

  • Reporting coverage of tested routines.

  • Adding Cobertura output option.

  • Allowing up to 8 arguments to be substituted into an ASSERT error message.

I am most excited about the reporting of code coverage in this version, which makes use of an IDL 8.4 feature. I gave an example of doing this in this post.

You can download a distribution with a .sav file and documentation, or just access the repo as needed.

If you use subloggers in MG_LOG (in the dist_tools directory of mglib), I have made a big change in how the priority of a sublogger message is handled.

Continue reading “MG_LOG sublogger level handling change.”

In this week’s article, The IDL Data Point described a simple system for testing an IDL routine. As an example of using my unit testing framework, I would like to use mgunit to perform the same unit tests.

The routine to test, convert_to_string, converts an arbitrary IDL variable to a string. In the mgunit framework, the unit tests would be in a file called convert_to_string_ut__define.pro. It is simple to translate them to mgunit tests:

  1. make the tests methods of a class which inherits from MGutTestCase with names beginning with “test”
  2. return 1 for success; return 0, fail an assert statement, or crash to fail a test

Here are the new tests:

function convert_to_string_ut::test_number
  compile_opt idl2

  input = 1
  expect = '1'
  result = convert_to_string(input)

  assert, result eq expect, 'Converting number failed.'
  return, 1
end

function convert_to_string_ut::test_null
  compile_opt idl2

  input = !NULL
  expect = '!NULL'
  result = convert_to_string(input)

  assert, result eq expect, 'Converting !null failed.'
  return, 1
end

function convert_to_string_ut::test_object
  compile_opt idl2

  input = hash('a',1,'b',2,'c',3)
  expect = '{"c":3,"a":1,"b":2}'
  result = convert_to_string(input)

  assert, result eq expect, 'Converting object failed.'
  return, 1
end

pro convert_to_string_ut__define
  compile_opt idl2

  define = { convert_to_string_ut, inherits MGutTestCase }
end

It is easy to run the tests:

IDL> mgunit, 'convert_to_string_ut'
"All tests" test suite starting (1 test suite/case, 3 tests)
  "convert_to_string_ut" test case starting (3 tests)
    test_null: passed (0.000160 seconds)
    test_number: passed (0.000007 seconds)
    test_object: failed "Converting object failed." (0.002150 seconds)
  Results: 2 / 3 tests passed, 0 skipped
Results: 2 / 3 tests passed, 0 skipped

Great! We have found a problem with our routine and can fix it now.

Is that it? Do we need more tests? Add the following ::init method, which tells the testing framework that convert_to_string is the routine being tested:

function convert_to_string_ut::init, _extra=e
  compile_opt strictarr

  if (~self->MGutTestCase::init(_extra=e)) then return, 0

  self->addTestingRoutine, 'convert_to_string', /is_function

  return, 1
end

Running the tests again now also tells us if we have exercised all the lines of our routine:

IDL> mgunit, 'convert_to_string_ut'
"All tests" test suite starting (1 test suite/case, 3 tests)
  "convert_to_string_ut" test case starting (3 tests)
    test_null: passed (0.000009 seconds)
    test_number: passed (0.000007 seconds)
    test_object: failed "Converting object failed." (0.002152 seconds)
  Test coverage: 75.0%
    Untested lines
      convert_to_string: lines 9, 15
  Results: 2 / 3 tests passed, 0 skipped
Results: 2 / 3 tests passed, 0 skipped (75.0% coverage)

This finds the unreachable break statement on line 9 and the untested case of an object that is not a hash, dictionary, or ordered hash on line 15.
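To cover the case on line 15, one more test could be added. Since I don’t know exactly what convert_to_string should return for an object that is not a hash, dictionary, or ordered hash, this hypothetical test only checks that a string comes back at all:

function convert_to_string_ut::test_list
  compile_opt strictarr

  input = list(1, 2, 3)
  result = convert_to_string(input)

  ; 7 is the IDL type code for string
  assert, size(result, /type) eq 7, 'Converting list did not return a string.'
  return, 1
end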

What are the other features of mgunit? Basic features include:

  1. collect many test cases into suites and run them all with one command
  2. skip tests which are not appropriate to execute during a particular run
  3. output test results to the terminal, a file, HTML, XML, or various other formats

Check it out on GitHub.

UPDATE 5/9/15: Also, you can download the example routine and unit tests.

Sometimes it is useful to selectively run tests from a suite of mgunit tests. For example, in the GPULib unit test suite, certain tests required hardware capable of performing them, e.g., double precision computations, streaming, etc. There is a SKIP keyword to ASSERT to handle these situations. For example, a GPULib unit test requiring double precision hardware might do something like:

assert, gpuDoubleCapable(), 'CUDA device not double capable', /skip

This would skip the test, so the test would not count as either passed or failed. Here, gpuDoubleCapable can perform a check to determine whether the hardware is double capable.

But this requires some type of global setting to be checked. It would be useful to be able to pass arguments to mgunit when starting the tests that could then be checked during a test. I’ve added this feature to the master branch of mgunit.

To use this, define a super class for your tests that inherits from MGutTestCase which accepts a keyword for your property, say MY_PROPERTY, in its ::init method. Store that value in some manner, probably as an instance variable of your object, so that you can check it in your test. Then call mgunit like this:

mgunit, 'my_tests_uts', MY_PROPERTY=1
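A super class along these lines would work; the class name here is hypothetical, and I assume MY_PROPERTY holds a simple integer flag:

function my_base_uts::init, my_property=my_property, _extra=e
  compile_opt strictarr

  if (~self->MGutTestCase::init(_extra=e)) then return, 0

  ; store the value passed via mgunit so tests can check it later
  self.my_property = n_elements(my_property) gt 0L ? my_property : 0L

  return, 1
end

pro my_base_uts__define
  compile_opt strictarr

  define = { my_base_uts, inherits MGutTestCase, my_property: 0L }
end

A test in a subclass could then skip itself when the property is not set:

assert, self.my_property ne 0L, 'MY_PROPERTY not set', /skip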

This provides a convenient way to run your tests in various modes, skip certain tests, or pass other information to your tests.

This feature is only available in the master branch of mgunit right now, but should be in the next release.

Recently, I have been working on a data pipeline that updates a MySQL database. We had been using Java simply to access the database, and I thought I would simplify our toolchain by removing that dependency and updating the database directly from IDL, where we already have all the data needed to populate the tables.

So I have been adding MySQL bindings to mglib in the last week. I have also added a higher-level interface to the straight wrappers of the MySQL C routines.

Continue reading “MySQL bindings.”

I sold the last print copy of Modern IDL that I had this morning. At this time, I do not intend to make another print run; the PDF will be the only version of the book from now on [1].

While I continually update Modern IDL for new versions of IDL, only the PDF purchasers get the full benefit of the updates. The hardcopy version was often a version or two behind because of the way that I placed orders to my printer. PDF purchasers also receive future updates [2], something that was, of course, impossible for the hardcopy version.

For those who still really want a hardcopy version, you have permission to print your PDF. While higher, the total cost of the PDF plus printing/binding at Kinkos shouldn’t be too much more than I was charging for the hardcopy.


  1. Although, I don’t have anything against other electronic formats. If you have a favorite electronic format that you would like to see Modern IDL available in, please let me know. 

  2. PDF purchasers will receive updates until IDL 9.0, when you will have to purchase again to receive further updates. 

I have long implored others to abandon the use of common blocks for a myriad of reasons. They create a web of interconnections between the routines that use the common block (and which routines, exactly, are those? You can find out by grepping, but it is not immediately obvious). It is not clear at all which routine is responsible for changing the value of some variable stored in a common block. It is easy for the name of a variable intended to be local to clash with the name of a common block variable without realizing it.

But, sometimes, they provide a nice, quick solution for a problem that would otherwise require building a lot of infrastructure. I have been doing various types of profiling of a code recently. After finding the “hot” routines using PROFILER, I have been trying to narrow down to (and quantify the runtime of) the particular lines using up an appreciable amount of time. The problem is that I want the cumulative runtime for the lines over the course of executing many times during a run.

So I created a “profiling” common block with a time variable, initialized at the beginning of the program, which is incremented after the lines run each time. This can be reported at the end of the program. I have been moving the common block around to various routines to get times until I found the “hot” lines within the “hot” routines.
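A minimal sketch of this technique, with hypothetical routine and variable names:

pro my_program
  compile_opt strictarr
  common profiling, total_time

  total_time = 0.0D   ; initialize at the beginning of the program

  ; ... main work, which calls the "hot" routine many times ...

  print, total_time, format='(%"hot lines: %f seconds")'
end

pro hot_routine
  compile_opt strictarr
  common profiling, total_time

  t0 = systime(/seconds)
  ; the "hot" lines under investigation go here
  total_time += systime(/seconds) - t0
end

Moving the common block and the timing lines between routines narrows down where the time is going without building any real profiling infrastructure.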

Full disclosure: this is not the only place I have used common blocks. MG_LOG, a routine that I use a lot, has a common block.

After spending awhile last Friday trying to vectorize a loop of a small matrix-vector multiplication for every pixel of an image, I gave up and decided to just write it as a DLM. For my image sizes of 1024 by 1024 pixels (actually two images of that size), the run time went from 3.15 seconds to 0.26 seconds on my MacBook Pro. That’s not a lot of time to save, but since we acquire imagery every 15 seconds, it was useful.
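The loop was of roughly this form (a hypothetical sketch with made-up names; the real implementation is in analysis.c):

; apply a 2x2 matrix a to the pair of values at each pixel of
; two images, x1 and x2
y1 = fltarr(size(x1, /dimensions))
y2 = fltarr(size(x2, /dimensions))
for p = 0L, n_elements(x1) - 1L do begin
  y1[p] = a[0, 0] * x1[p] + a[1, 0] * x2[p]
  y2[p] = a[0, 1] * x1[p] + a[1, 1] * x2[p]
endfor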

Check out analysis.c for source code. There are also unit tests showing how to use it.

The “Zen of Python” provides the basic philosophy of Python. From PEP 20:

There should be one — and preferably only one — obvious way to do it.

I doubt there is a single area in Python that violates this more than making a simple HTTP request. There are at least six libraries, built-in and third-party, for doing this: httplib, httplib2, urllib, urllib2, urllib3, and pycurl. There are several reviews comparing the various libraries.

But there is a third party library, requests, that might be the “obvious way to do it” now:

import requests, json

url = 'https://api.github.com/repos/mgalloy/mglib'
r = requests.get(url, auth=('mgalloy', 'my_password'))
print(r.json()['updated_at'])

requests is installed as part of Anaconda, which is an easy way to get all the core scientific programming packages for Python.
