Sending SSTV Using a Raspberry Pi

2013-07-31

I had not played with SSTV much since 2009, but got a hankering to mess around after I loaded up the Raspberry Pi I just bought. The handy command-line transmit-only program that comes with QSSTV is not available on ARM Linux, so if I was going to use a Pi as a balloon computer, I was going to have to come up with another solution. I tried my old Perl script attached below, but it was horribly slow. By switching out Image::BMP for the GD library, I was able to cut the runtime in half, but it still took over 100 seconds to convert an image. I decided to take the plunge and rewrite that code in C. The result is sstv6.c, which can convert a 320x256 JPEG or PNG to a WAV file in about 6 seconds.

This code uses the GD library (libgd1-xpm-dev in Raspbian) to read and decompress the image file. It builds the audio stream in memory in Martin 1 format, then writes it out as a WAV file at the end. The magic library (libmagic-dev) is also used, to semi-automatically determine which image file format it is dealing with. It's just a proof-of-concept piece, so don't expect any fancy handling of other file types, weird pixel dimensions, or alternate output file formats.

In testing, I have been using a newer HD webcam, along with the fswebcam program on Raspbian. fswebcam lets you scale the image dimensions, so I can generate 320x256x24 images from the command line. It also allows you to define a semi-opaque title bar - perfect for inserting a callsign and other info. With the image generated, you can run the compiled sstv6 to convert it to a WAV file, then use aplay to send the audio to the radio. PTT can be handled with a GPIO pin, and bam - there you have a Pi SSTV station.
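For the PTT side, the sysfs GPIO interface is enough. A rough sketch - pin 17 and all the helper names are placeholders of mine, error handling is minimal, and shelling out to aplay is just to keep it short:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PTT_PIN 17   /* placeholder: whatever pin keys your rig interface */

/* Build the sysfs path for a GPIO attribute, e.g. "direction" or "value". */
int gpio_path(char *out, size_t len, int pin, const char *attr)
{
    return snprintf(out, len, "/sys/class/gpio/gpio%d/%s", pin, attr);
}

/* Write a short string ("0", "1", "out", ...) to a sysfs file. */
int gpio_write(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fputs(val, f);
    fclose(f);
    return 0;
}

/* Key the radio, play the WAV, unkey. */
void transmit(const char *wav)
{
    char value_path[64], cmd[256];

    gpio_path(value_path, sizeof value_path, PTT_PIN, "value");
    gpio_write(value_path, "1");                  /* PTT on  */
    snprintf(cmd, sizeof cmd, "aplay %s", wav);
    system(cmd);                                  /* send the audio */
    gpio_write(value_path, "0");                  /* PTT off */
}
```

The pin has to be exported and set to "out" first (echo into /sys/class/gpio/export and .../direction), which the sketch assumes has already been done.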

In that spirit, I used fswebcam to take a picture of my screen, where I was viewing Google Maps satellite images. I like to imagine that this is a picture taken by the downward-facing webcam on my payload, showing me where it's about to land. The images below show the original file, and the result of converting it to audio with my program, then back to a picture using the file input of QSSTV (all on the Pi). The image is slightly fuzzier after the double conversion, as you would expect, but still perfectly legible. Just for fun, I also tried processing the picture of my son on a boat again, this time passing from the Pi through an audio cable to my laptop. The two did not agree too well on timing, but with auto-sync set in MMSSTV on the laptop, the image came through super clearly, with accurate color. Again, I couldn't be happier!

Although....

I wonder if I could back-port this to an AVR? Maybe use an SD card shield with FAT support... maybe skip the WAV format completely, and just write raw PCM values to a file, then replay that later by reading the values back in and driving an ATtiny2313 oscillator, like in the previous experiments. Hmm....

Original image

Google Maps aerial view of my neighborhood

Image-to-sstv6-to-WAV-to-QSSTV-to-image

Image converted to audio, and back to an image

2014-07-02

Fiddled with this some more tonight. After a lot of research, I managed to figure out the timings and VIS codes for the SSTV modes with the fastest scan times, especially Robot 36. I had originally intended to implement just the black-and-white version of Robot 36, expecting it to yield the simplest and lightest-weight solution for fast scans. It turns out that the 36-second B&W mode is very different from the 36-second color mode. Worse, the B&W modes are not well documented at all, so I am not sure how to force B&W over color. The scan sequence suggests that the color scans could simply be omitted, but that only caused chaos on the Rx side using MMSSTV. I could probably figure it out through experimentation, though I am not sure MMSSTV even supports the B&W-only Robot modes.
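For reference, the VIS header itself is the same across modes - only the 7-bit code changes (Robot 36 is code 8, Martin 1 is 44). Here is a sketch of the sequence, recording (frequency, duration) pairs instead of audio so it is easy to eyeball; the function names are mine:

```c
/* One header segment: a tone frequency in Hz and its duration in ms. */
typedef struct { double freq, ms; } seg_t;

seg_t seq[64];
int   nseg = 0;

void emit(double freq, double ms)
{
    seq[nseg++] = (seg_t){ freq, ms };
}

/* VIS header: leader, break, leader, start bit, 7 data bits LSB-first,
 * even parity bit, stop bit. A '1' bit is 1100 Hz, a '0' bit is 1300 Hz,
 * 30 ms per bit. */
void vis_header(int code)
{
    emit(1900, 300);                       /* leader tone      */
    emit(1200, 10);                        /* break            */
    emit(1900, 300);                       /* leader tone      */
    emit(1200, 30);                        /* start bit        */

    int parity = 0;
    for (int i = 0; i < 7; i++) {
        int bit = (code >> i) & 1;
        parity ^= bit;
        emit(bit ? 1100 : 1300, 30);       /* data bit i       */
    }
    emit(parity ? 1100 : 1300, 30);        /* even parity bit  */
    emit(1200, 30);                        /* stop bit         */
}
```

In a real transmitter each emit() would of course feed the tone generator instead of an array.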

At any rate, attached below are:

sstv7.c - This program uses Robot 36 color mode, but substitutes a neutral value for all R-Y and B-Y scans. This gives B&W output on the Rx side, but only under near-perfect conditions.

sstv8.c - This program implements Robot 36 color mode in full, including the screwball color scan lines. The colorspace calculations and timings are straight out of J. L. Barber's "Dayton Paper" (also attached), which remains the best reference I have found for the technical details of each mode.
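For anyone following along, here is the gist of those colorspace calculations as I have them - coefficients reproduced from memory, so double-check against the Dayton Paper before trusting them - plus the usual linear mapping of an 8-bit component onto the 1500-2300 Hz band:

```c
#include <math.h>

/* Studio-range luminance and color-difference components, per the paper. */
typedef struct { double y, ry, by; } yuv_t;

/* 8-bit full-range RGB in, Y / R-Y / B-Y out (0.003906 is 1/256). */
yuv_t rgb_to_yuv(double r, double g, double b)
{
    yuv_t p;
    p.y  =  16.0 + 0.003906 * ( 65.738 * r + 129.057 * g +  25.064 * b);
    p.ry = 128.0 + 0.003906 * (112.439 * r -  94.154 * g -  18.285 * b);
    p.by = 128.0 + 0.003906 * (-37.945 * r -  74.494 * g + 112.439 * b);
    return p;
}

/* Map an 8-bit component onto the SSTV video band: 0 -> 1500 Hz, 255 -> 2300 Hz. */
double to_freq(double v)
{
    return 1500.0 + v * 800.0 / 255.0;
}
```

Note that the chroma coefficients sum to zero, so any gray input yields exactly 128 for both R-Y and B-Y - which is precisely the "neutral value" trick sstv7.c plays to get B&W output from the color mode.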

Initial testing from my Pi was encouraging - the code changes were easy, and it compiled on the second try (after a typo fix). Feeding the output file into MMSSTV on my laptop (in the digital domain only), the image quality is as good as the results from sstv6, above. The results over the air were a little less exciting. Using a cobbled-together interface (complete with gator-clip jumpers) from the Pi's sound card to an HT, with levels set by ear, there was so much noise on the channel that I decoded more snow than image just two rooms away. Hopefully with better isolation, shielding, shorter cables, some ferrites, more attention to level-setting, and so on, I'll see better results.

The real value in these two updates is the working implementation of a 36-second, 320x240, color mode on the Pi. As with version 6, this code is super-portable, and does not care one bit about what camera, sound, or system hardware you're running on. Any recent Linux platform with the GD and Magic libs should be able to compile and run this with no trouble.


Attached Files
