The IDL Usenet newsgroup has moved to a Google Group:

This Google Group is a continuation of the Usenet group comp.lang.idl-pvwave, but allows for better spam filtering. It is for discussion of the Interactive Data Language (IDL), developed by Harris Geospatial Corporation. Questions about ENVI, a geospatial analytics package written in IDL, are welcome. Discussion of the similar PV-WAVE language is also allowed.

The big question now is how to save the posts from the old newsgroup.

[iTerm2] is a macOS terminal emulator with a lot of extra features. In particular, it has a simple protocol for displaying images inline. It comes with a program `imgcat` that will display common image formats such as PNG, JPEG, GIF, etc. Most of the images I deal with are FITS, though. I wrote `fitscat` to be a handy utility to display FITS images, as well as to print basic information about the FITS file such as a listing of extensions or an extension header.
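
For context, iTerm2’s inline-image feature is just an escape sequence wrapping the base64-encoded contents of an image file. Here is a minimal sketch in Python (the `show_png` helper and filename are mine, not part of `imgcat` or `fitscat`):

```python
import base64
import sys

def show_png(filename):
    """Emit iTerm2's OSC 1337 escape sequence to display a PNG inline."""
    with open(filename, 'rb') as f:
        data = base64.b64encode(f.read()).decode('ascii')
    # ESC ] 1337 ; File = inline=1 : <base64-encoded file contents> BEL
    sys.stdout.write('\033]1337;File=inline=1:%s\a\n' % data)

show_png('comp_intensity.png')   # hypothetical filename
```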

For example, `fitscat` can display an image in an extension, as seen below:

There are options to specify a minimum and maximum value for scaling, as well as to apply a simple filter such as a square root.
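
The scaling itself is straightforward; roughly something like the following sketch with NumPy (my own illustration, not the actual `fitscat` code):

```python
import numpy as np

def byte_scale(data, dmin=None, dmax=None, filter_name=None):
    """Clip to [dmin, dmax], optionally filter, and scale to 0-255 bytes."""
    data = np.asarray(data, dtype=np.float64)
    dmin = data.min() if dmin is None else dmin
    dmax = data.max() if dmax is None else dmax
    scaled = (np.clip(data, dmin, dmax) - dmin) / (dmax - dmin)
    if filter_name == 'sqrt':
        scaled = np.sqrt(scaled)
    return (255.0 * scaled).astype(np.uint8)
```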

`fitscat` can also print basic information about a FITS file, such as a listing of its extensions:

    CoMP$ fitscat --list 20150624.170419.comp.1074.iqu.5.fts
    Filename: 20150624.170419.comp.1074.iqu.5.fts
    No.    Name         Ver    Type       Cards   Dimensions   Format
      0  PRIMARY          1   PrimaryHDU      67   ()
      1  I, 1074.38       1   ImageHDU        33   (620, 620)   float32
      2  I, 1074.50       1   ImageHDU        33   (620, 620)   float32
      3  I, 1074.62       1   ImageHDU        33   (620, 620)   float32
      4  I, 1074.74       1   ImageHDU        33   (620, 620)   float32
      5  I, 1074.86       1   ImageHDU        33   (620, 620)   float32
      6  Q, 1074.38       1   ImageHDU        33   (620, 620)   float32
      7  Q, 1074.50       1   ImageHDU        33   (620, 620)   float32
      8  Q, 1074.62       1   ImageHDU        33   (620, 620)   float32
      9  Q, 1074.74       1   ImageHDU        33   (620, 620)   float32
     10  Q, 1074.86       1   ImageHDU        33   (620, 620)   float32
     11  U, 1074.38       1   ImageHDU        33   (620, 620)   float32
     12  U, 1074.50       1   ImageHDU        33   (620, 620)   float32
     13  U, 1074.62       1   ImageHDU        33   (620, 620)   float32
     14  U, 1074.74       1   ImageHDU        33   (620, 620)   float32
     15  U, 1074.86       1   ImageHDU        33   (620, 620)   float32
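
This listing is the same format that astropy’s `HDUList.info` produces, so a `--list` option only takes a couple of lines (a sketch, assuming the AstroPy dependency mentioned at the end of this post):

```python
from astropy.io import fits

with fits.open('20150624.170419.comp.1074.iqu.5.fts') as hdulist:
    hdulist.info()   # prints the No./Name/Ver/Type/Cards/Dimensions/Format table
```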

Or display a header:

    CoMP$ fitscat --header -e 3 20150624.170419.comp.1074.iqu.5.fts
    XTENSION= 'IMAGE '              /extension type
    BITPIX  =                  -32  /bits per data value
    NAXIS   =                    2  /number of axes
    NAXIS1  =                  620  /
    NAXIS2  =                  620  /
    PCOUNT  =                    0  /
    GCOUNT  =                    1  /
    EXTNAME = 'I, 1074.62'          /
    WAVELENG=             1074.620  / WAVELENGTH OF OBS (NM)
    POLSTATE= 'I '                  / POLARIZATION STATE
    EXPOSURE=               250.00  / EXPOSURE TIME (MILLISEC)
    NAVERAGE=                   16  / Number of images averaged together
    FILTER  =                    1  / FILTER WHEEL POSITION (1-8)
    DATATYPE= 'DATA'                / DATA, DARK OR FLAT
    LCVR1TMP=            29.639999  / DEGREES CELSIUS
    LCVR2TMP=            33.429001  /
    LCVR3TMP=            33.715000  /
    LCVR4TMP=            33.738998  /
    LCVR5TMP=            33.618999  /
    LCVR6TMP=            28.847000  /
    NDFILTER=                    8  / ND 1=.1, 2=.3, 3=.5, 4=1, 5=2, 6=3, 7=4, 8=cle
    BACKGRND=               13.154  / Median of masked line center background
    BODYTEMP=               34.023  / TEMPERATURE OF FILTER BODY (C)
    BASETEMP=               33.599  / BASE PLATE TEMP (C)
    RACKTEMP=               25.012  / COMPUTER RACK AMBIENT AIR TEMP (C)
    OPTRTEMP=               33.306  / OPTICAL RAIL TEMP (C)
    DEMULT  =                    1  / 1=DEMULTIPLEXED, 0=NOT DEMULTIPLEXED
    FILTTEMP=               35.000  / ILX FILTER TEMPERATURE (C)
    FLATFILE= '20150624.070023.FTS' / Name of flat field file
    INHERIT =                    T  /
    DISPMIN =                 0.00  / Minimum data value
    DISPMAX =                 5.00  / Maximum data value
    DISPEXP =                 0.50  / Exponent value for scaling
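
Printing a header similarly amounts to grabbing the `Header` object for the requested HDU (again a sketch assuming AstroPy, not necessarily the exact `fitscat` source):

```python
from astropy.io import fits

with fits.open('20150624.170419.comp.1074.iqu.5.fts') as hdulist:
    header = hdulist[3].header   # extension 3, i.e. 'I, 1074.62'
    print(repr(header))          # formatted header, one card per line
```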

The full interface of `fitscat` is shown below:

    $ fitscat --help
    usage: fitscat [-h] [--min MIN] [--max MAX] [--debug] [-d] [-l] [-r]
                   [-e EXTEN_NO] [-f FILTER] [-s SLICE]
                   filename

    fitscat - a FITS query/display program

    positional arguments:
      filename              FITS file to query

    optional arguments:
      -h, --help            show this help message and exit
      --min MIN             min for scaling
      --max MAX             max for scaling
      --debug               set to debug
      -d, --display         set to display
      -l, --list            set to list HDUs
      -r, --header          set to display header
      -e EXTEN_NO, --exten_no EXTEN_NO
                            specify extension
      -f FILTER, --filter FILTER
                            specify filter (default: none)
      -s SLICE, --slice SLICE
                            specify slice of data array to display
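
This help output is standard argparse; the parser behind it would look something like the following (a plausible sketch, not necessarily the exact source on GitHub):

```python
import argparse

parser = argparse.ArgumentParser(description='fitscat - a FITS query/display program')
parser.add_argument('filename', help='FITS file to query')
parser.add_argument('--min', type=float, help='min for scaling')
parser.add_argument('--max', type=float, help='max for scaling')
parser.add_argument('--debug', action='store_true', help='set to debug')
parser.add_argument('-d', '--display', action='store_true', help='set to display')
parser.add_argument('-l', '--list', action='store_true', help='set to list HDUs')
parser.add_argument('-r', '--header', action='store_true', help='set to display header')
parser.add_argument('-e', '--exten_no', type=int, help='specify extension')
parser.add_argument('-f', '--filter', help='specify filter (default: none)')
parser.add_argument('-s', '--slice', help='specify slice of data array to display')
args = parser.parse_args()
```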

Source code for the Python script is available on [GitHub]. The script is compatible with Python 2 and 3, but requires the standard scientific Python packages AstroPy, NumPy, and PIL.

[iTerm2]: https://iterm2.com “iTerm2”
[GitHub]: https://github.com/mgalloy/scripts/blob/master/fitscat “scripts/fitscat”

NIST has published new standards for digital identities. Highlights, via [Bruce Schneier], for passwords:

1. No password rules! Use pass phrases.
2. Don’t expire passwords.
3. Allow password managers.

I have written about this [before], where I said my personal pet peeve was forced password expiration (#2). I hope organizations start using the new standards quickly!

[before]: http://michaelgalloy.com/2017/03/16/i-hate-password-rules.html “I hate password rules”
[Bruce Schneier]: https://www.schneier.com/blog/archives/2017/10/changes_in_pass.html “Changes in Password Best Practices”

I presented a [poster] at a [Space Weather workshop] at the Lorentz Center in Leiden, the Netherlands, last week:

> **Real-time automated detection of coronal mass ejections using ground-based coronagraph instruments**
>
> Coronal mass ejections (CMEs) are dynamic events that eject magnetized plasma from the Sun’s corona into interplanetary space. CMEs are a major driver of solar energetic particle (SEP) events and geomagnetic storms. SEP events and geomagnetic storms pose hazards to astronauts, satellites, communication systems, and power grids. Understanding CME formation and predicting their impacts at Earth are primary goals of the National Space Weather program. St. Cyr et al. (2017) reported on the use of near real-time white light observations of the low corona from the COSMO K-Coronagraph (K-Cor) to provide an early warning of possible SEP events driven by fast CMEs. Following that work, one of us (Thompson) created a new CME detection algorithm adapted from the Solar Eruptive Event Detection System (SEEDS) code for use with K-Cor observations from the Mauna Loa Solar Observatory (MLSO) in Hawaii. We develop performance metrics and report on the success of the algorithm to detect CMEs in the 2017 K-Cor observations. Measures of success include the ability of the algorithm to detect an event and the amount of time between the event onset and its detection. The algorithm successfully detected 20 of the 35 CMEs identified between 1 Jan and 31 August, 2017 in the K-Cor data. There were 10 false positive events during this time period. The threshold for CME detection is discussed as a function of CME visibility, instrument background, and sky noise. The code has been modified to run in an automated mode and is in the process of being integrated into the real-time data processing pipeline at Mauna Loa. We report on current status, real-time alerts, and future upgrades.

[Space Weather workshop]: http://lorentzcenter.nl/lc/web/2017/921/info.php3?wsid=921&venue=Oort “Lorentz Center – Space Weather: A Multi-Disciplinary Approach ”

[poster]: http://michaelgalloy.com/wp-content/uploads/2017/10/mgalloy-lorentz-workshop-2017.pdf “Real-time automated detection of coronal mass ejections using ground-based coronagraph instruments”

Here’s a [tutorial] on how to make an animation of the Moon’s shadow with GOES imagery during the Great American Eclipse of 2017:

> Here is one of the coolest examples that I have created using IDL in a while. For this blog post, I’m going to walk through how I created an animation of the Moon’s shadow during the Great American Total Solar Eclipse using several different technologies for accessing, downloading, and visualizing the data.

The [animation] is on Harris Geospatial Solutions’ Facebook page.

[tutorial]: http://www.harrisgeospatial.com/Learn/Blogs/Blog-Details/TabId/2716/ArtMID/10198/ArticleID/21275/The-Eclipse-Tracking-where-the-Moons-shadow-GOES.aspx?utm_source=twitter&utm_medium=social&utm_campaign=blog “The Eclipse: Tracking where the Moon’s shadow GOES”
[animation]: https://www.facebook.com/HarrisGeospatialSolutions/videos/10155456330801006/ “Our IDL 8.6.1 created an animation of the 2017 solar eclipse”

The [dataviz.tools] site is an annotated and categorized catalog of good visualization tools.

> This site features a curated selection of data visualization tools meant to bridge the gap between programmers/statisticians and the general public by only highlighting free/freemium, responsive and relatively simple-to-learn technologies for displaying both basic and complex, multivariate datasets.

via [FlowingData]

[dataviz.tools]: http://dataviz.tools “a curated guide to the best tools, resources and technologies for data visualization”
[FlowingData]: http://flowingdata.com/2017/01/20/catalog-of-visualization-tools/ “Catalog of visualization tools”

Some great tips for [spotting misleading visualizations]:

> By using dual axes, the magnitude can shrink or expand for each metric. This is typically done to imply correlation and causation. “Because of this, this other thing happened. See, it’s clear.”

There are some great links illustrating these problems, like Tyler Vigen’s [spurious correlations project], which automatically finds correlations between unrelated datasets.
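
As a concrete illustration of the dual-axis trick, here is a minimal matplotlib sketch of my own (the data are made up):

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2000, 2011)
metric_a = np.linspace(10.0, 20.0, years.size)     # one series, in one unit
metric_b = np.linspace(500.0, 900.0, years.size)   # an unrelated series, in another unit

fig, ax1 = plt.subplots()
ax1.plot(years, metric_a, 'b-', label='metric A')
ax2 = ax1.twinx()                                   # second y-axis with an independently chosen scale...
ax2.plot(years, metric_b, 'r-', label='metric B')   # ...so the two series appear to move together
plt.show()
```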

[spotting misleading visualizations]: https://flowingdata.com/2017/02/09/how-to-spot-visualization-lies/ “How to Spot Visualization Lies”

[spurious correlations project]: http://flowingdata.com/2014/05/12/random-things-that-correlate/

I think “machine learning” in [this paper] applies fairly well to any type of scientific pipeline code:

> Using the framework of *technical debt*, we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning.

The authors argue that machine learning systems have all the regular issues of any code, but also additional complexities that are not necessarily addressed by the usual remedies of refactoring libraries, adding unit tests, etc.

[this paper]: https://research.google.com/pubs/pub43146.html “Machine Learning: The High-Interest Credit Card of Technical Debt”

IDL 8.6.1 was released today[^1]. Some interesting new features:

* Conditional breakpoints from the Workbench
* Hexadecimal constants, e.g., `a = 0xFF3A`
* Fix for strings that begin with numerals being confused with the octal notation: `"123` is an octal value; `"123"` used to be a syntax error, but is now a valid string.

See the [release notes] for details.

[release notes]: http://harrisgeospatial.com/Support/Maintenance/TabId/2350/ArtMID/10427/ArticleID/21253/Whats-New-in-IDL-861.aspx “What’s New in IDL 8.6.1”

[^1]: Really sometime in the last week or so. The announcement on the newsgroup was today, but the release notes were posted 7/27.

Travis Oliphant, creator of NumPy, the array package for Python, wrote an [analog] to the [Zen of Python] for NumPy:

> Strided is better than scattered
> Contiguous is better than strided
> Descriptive is better than imperative (use data-types)
> Array-oriented is often better than object-oriented
> Broadcasting is a great idea — use where possible
> Vectorized is better than an explicit loop
> Unless it’s complicated — then use numexpr, weave, or Cython
> Think in higher dimensions

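The broadcasting and vectorization lines are the heart of it; here is a small illustration of my own (not from Oliphant’s post):

```python
import numpy as np

x = np.random.random((1000, 3))

# explicit loop: subtract each column's mean, element by element
centered_loop = np.empty_like(x)
col_means = x.mean(axis=0)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        centered_loop[i, j] = x[i, j] - col_means[j]

# broadcasting: the (3,) array of column means is stretched across all 1000 rows
centered = x - x.mean(axis=0)

assert np.allclose(centered, centered_loop)
```
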
I tried [something for IDL] last year.

[analog]: http://technicaldiscovery.blogspot.com/2010/11/zen-of-numpy.html?m=1 “Zen of NumPy”
[Zen of Python]: https://www.python.org/dev/peps/pep-0020/ “PEP 20”
[something for IDL]: http://michaelgalloy.com/2016/03/22/philosophy-of-idl.html “Philosophy of IDL”
