Updated August 2019 (added link to code)
Well, this has been quite a project, more than I thought when I stuck my hand up to say "sure, I will do it!" Mike Noble captured the fantastic geomagnetic storm of May 7-8, 2016 on 3 cameras. I knew I could use the Hugin GUI to stitch 3 images together into 1 panorama, but what was required was to stitch 3000 such triplets into fisheye views that could then be used to generate a time-lapse movie. I was fairly sure it was possible to use Hugin's command-line tools in a loop (after all, this is what computers are good at!) to generate each frame. End goal: project it onto our planetarium's 23 m dome in 10K!
https://www.youtube.com/watch?v=EQlogcyTfGM (teaser)

In principle, it's quite straightforward:
1. Pull 3 images into a directory, rename them Cam0, Cam1, Cam2
2. Use Hugin GUI to orient and link common (control) points, set output projection and size, click on optimize, and save [this creates a .pto file]
(Tutorial: Use Hugin to create overhead fisheye aurora from 3 separate cameras)
3. Have a text file that lists which images are in a set
4. Use a script to loop through the entire event:
4a) pull in the next trio of images (based on the text file), and update the sequence number
4b) execute nona: "E:\bin\nona -m TIFF_m -o Zenith ZenithProc.pto Cam0.tif Cam1.tif Cam2.tif"
4c) blend: "E:\bin\enblend -o Zenith.tif Zenith*.tif"
4d) Move and rename Zenith.tif to the final folder with the proper sequence number
4e) repeat
5. Use a video-editor to join the frames into a movie
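The loop in step 4 can be sketched as a short script. This is a minimal Python sketch, not the actual script used for the project; the set-list format, the working-directory layout, and the intermediate file names nona produces are assumptions (the tool paths come from the commands above).

```python
import shutil
import subprocess
from pathlib import Path

NONA = r"E:\bin\nona"        # tool locations, per the commands above
ENBLEND = r"E:\bin\enblend"
OUT_DIR = Path("frames")     # assumed final folder for sequenced frames

def frame_name(seq):
    """Sequence-numbered output name for the video editor, e.g. Zenith_0001.tif."""
    return f"Zenith_{seq:04d}.tif"

def stitch_all(set_list="sets.txt"):
    """Assumes each line of the set list names the three source images: cam0 cam1 cam2."""
    OUT_DIR.mkdir(exist_ok=True)
    with open(set_list) as f:
        for seq, line in enumerate(f, start=1):
            cam0, cam1, cam2 = line.split()
            # 4a) pull in the next trio under the fixed names the .pto expects
            for src, dst in zip((cam0, cam1, cam2),
                                ("Cam0.tif", "Cam1.tif", "Cam2.tif")):
                shutil.copy(src, dst)
            # 4b) remap each image with nona, using the one optimized .pto
            subprocess.run([NONA, "-m", "TIFF_m", "-o", "Zenith",
                            "ZenithProc.pto", "Cam0.tif", "Cam1.tif", "Cam2.tif"],
                           check=True)
            # 4c) blend the remapped layers (nona's prefix-numbered outputs)
            #     into one fisheye frame
            subprocess.run([ENBLEND, "-o", "Zenith.tif",
                            "Zenith0000.tif", "Zenith0001.tif", "Zenith0002.tif"],
                           check=True)
            # 4d) move and rename with the sequence number
            shutil.move("Zenith.tif", OUT_DIR / frame_name(seq))
```

The key point is the one from the workflow: the .pto is optimized once in the GUI, and every pass through the loop reuses it unchanged.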
Thanks to Thomas Modes and Andrew Hazelden for sketching out this workflow, and for correcting me about the command-line optimizer. Sometimes using Hugin is not intuitive to the relative newcomer... on my first run-through, I was getting double stars, and mistakenly thought I had to run the optimizer for each frame. No! You just have to remember to click on "Optimize" in the GUI, so that nona can use the parameters in the .pto file. ONCE.
A nice project for a weekend, isn't it? Murphy's Law (if something can go wrong, it will) applied many times.
Problem 1: Camera clock errors
Which image from camera 1 goes with the ones from cameras 2 and 3? Easy, just use the EXIF time stamp. Except the clocks were never synchronized. Even if they had been, they drift at different rates! Mike's 3 cameras run 1.1, 1.3 and -1.5 seconds per day in error. Per day; hard to imagine when my digital watch of the late 70s was a second or two out per year. So if you last set the clocks 2 months before an event, the same instant can be recorded nearly 3 minutes apart. I now recommend resetting all camera clocks once a month, and keeping a record. That way, if you forget, it is simple to use the known drift rate to recalculate the true UTC of each image to the nearest second.
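That recalculation is simple arithmetic. A sketch (the 1.3 s/day rate is one of the measured values above; the helper name and the example epochs are mine):

```python
def true_utc(observed_epoch, last_sync_epoch, drift_sec_per_day):
    """Remove accumulated clock drift: a camera gaining 1.3 s/day shows
    times that are ahead, so subtract drift * elapsed days."""
    days_elapsed = (observed_epoch - last_sync_epoch) / 86400.0
    return observed_epoch - drift_sec_per_day * days_elapsed

# Example: a clock set 60 days ago, gaining 1.3 s/day, is about 78 s fast.
sync = 1_460_000_000          # hypothetical epoch of the last clock reset
obs = sync + 60 * 86400       # an image time stamp 60 days later
print(round(obs - true_utc(obs, sync, 1.3), 1))  # 78.0
```

The same calculation with the fastest and slowest cameras (+1.3 and -1.5 s/day) gives the roughly 3-minute spread after 2 months mentioned above.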
Solution: find a nice active part of the auroral display with sharp features, then step through the frames of each camera until you get a match. You now have relative time offsets. Set one camera as the "master" and you can then find the matching images anywhere in the sequence.
Problem 2: Gaps in image sequence due to battery changes
You can't use the image number as an index for the loop, because whenever a camera needs a battery replacement, it falls behind about 3 frames (15-18 seconds). For an active aurora, that's a lot of movement.
Solution: Mike originally thought this was a one-time affair, so he used an Excel spreadsheet to populate columns, then manually adjusted them. Then we had another lovely geomagnetic storm and a second panorama-stitch project, so I felt it was time to write a script (seq_aur_images.pl, attached below) to automate a process that was not just tedious, but prone to human error. Given the image directories and time offsets, the script reads the EXIF times, converts them to epoch seconds, steps through time in constant increments of the image interval, finds the closest match from each camera, and spits out a line that is later used in the stitching loop. In the final time-lapse, the aurora is "frozen" in the sector where there was a battery replacement - better than leaving a blank sector.
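The core of that matching logic can be sketched as follows. This is a simplified Python reimplementation of the idea, not seq_aur_images.pl itself (which is Perl); it assumes the relative offsets have already been applied to the times, and the function name, tolerance rule, and example numbers are mine:

```python
import bisect

def pair_frames(master_times, other_times, interval, tolerance=None):
    """Step a clock in constant increments of the image interval; for each
    tick pick the nearest frame from the other camera.  When a gap (e.g. a
    battery change) leaves no frame near a tick, re-use the previous index,
    which "freezes" that sector instead of leaving it blank."""
    if tolerance is None:
        tolerance = interval / 2.0
    other_times = sorted(other_times)
    pairs, prev = [], 0
    t = master_times[0]
    while t <= master_times[-1]:
        i = bisect.bisect_left(other_times, t)
        # candidates are the two neighbours around the tick
        cands = [j for j in (i - 1, i) if 0 <= j < len(other_times)]
        best = min(cands, key=lambda j: abs(other_times[j] - t))
        if abs(other_times[best] - t) <= tolerance:
            prev = best
        pairs.append(prev)          # gap -> repeat the previous frame
        t += interval
    return pairs

# Example: 6 s cadence; the other camera missed the frame near t=12.
print(pair_frames([0, 6, 12, 18], [0.4, 6.1, 18.2], 6))  # [0, 1, 1, 2]
```

The repeated index (1, here) is what produces the frozen sector in the final time-lapse.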
Problem 3: Shifting Camera Direction
When Mike started out shooting the big aurora displays, he was never sure whether he could catch it on one camera, two, or three, or whether he would be using one for time-lapse and another for close-ups, or whether the aurora would show up in the south instead of the east. So he would start a camera in one direction, then adjust it (click to see the animated GIF), or start up another camera, notice that the overlap was not good, and adjust that.
Now of course, these changes mean that within the Hugin GUI I have to manually rotate only the image that changed. I can do this with move/drag, and get pretty close. Perhaps there is a way to be more exact, but I don't know the interface well enough. BEFORE moving the image, it is important to remove the control points involving that image (those between the other cameras can remain).
Once adjusted for rotation, I then have to create new control points to each of the other two directions. Then click on optimize, and save to a new .pto.
Perhaps because the lens parameters were not completely handled within Hugin, the re-optimization caused small shifts in the size of the rendered fisheye and in the positions of distant lights on the horizon. Despite a lot of trying, I could not solve this exactly. Truth be told, there were a few times when Mike adjusted the camera 10 frames in a row, and I simply did not have the patience to adjust each one (it sometimes takes 45 minutes), so I made one adjustment and "froze" the aurora for the in-between frames. Anyway, I found that the horizon shifts in the final video are noticeable enough to be annoying - more annoying than a brief freeze.
The same process must occur if the focal length (zoom) was changed from one image to the next.
More to come on this story....