Will Collins was a seventh grade student in Mr. Schulman's class. He graduated from Ardrey Kell High School and is now at Arizona State University.
Will works for the Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) at Arizona State University as a Student Research Aid. The Science Operations Center operates instruments aboard the Lunar Reconnaissance Orbiter (LRO) and the Korea Pathfinder Lunar Orbiter (KPLO).
His primary role is as a member of the controlled mosaics team. He takes images that have been collected by LROC and ShadowCam and combines them (into what is called a mosaic) to create larger, more useful images. Both instruments are line-scan cameras, so the images tend to be very long and skinny. Since most regions of interest don't fit inside a single image, he combines neighboring images into one product. To do this, he primarily uses the US Geological Survey's Integrated Software for Imagers and Spectrometers (ISIS). More recently he has been working on the creation and development of the ShadowCam controlled mosaic maps, as well as preparing for the team's first ShadowCam controlled mosaic map release.
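To give a sense of what an ISIS workflow looks like, here is a minimal sketch of ingesting, calibrating, and mosaicking Narrow Angle Camera images using real ISIS command-line programs. The file names are made up for illustration, and the team's actual pipeline has many more steps.

```python
import subprocess

def isis(program, **params):
    """Run an ISIS command-line program with key=value parameters."""
    args = [program] + [f"{k.lower()}={v}" for k, v in params.items()]
    subprocess.run(args, check=True)

# Ingest a raw LROC NAC image into ISIS cube format, radiometrically
# calibrate it, and attach SPICE geometry (file names are hypothetical).
isis("lronac2isis", FROM="M1234567890LE.IMG", TO="image.cub")
isis("lronaccal", FROM="image.cub", TO="image.cal.cub")
isis("spiceinit", FROM="image.cal.cub")

# Project the calibrated image onto a map grid, then stitch all of the
# projected neighbors (listed in cubes.lis) into one mosaic product.
isis("cam2map", FROM="image.cal.cub", TO="image.map.cub", MAP="equirect.map")
isis("automos", FROMLIST="cubes.lis", MOSAIC="mosaic.cub")
```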
Outside of standard controlled mosaics, he was also involved in the creation of mosaics of the 13 Artemis III candidate landing sites. The team went through every LROC image of the 13 sites that has ever been taken and selected the images that had the most coverage while also having lighting conditions similar to neighboring images. After making mosaics with just Narrow Angle Camera images, Will went back and filled in the shadowed regions with ShadowCam images. This is the work Will presented at the Lunar and Planetary Science Conference (LPSC) in March 2024.
THE TOOLS TO MAP THE MOON
LROC consists of three cameras: a Wide Angle Camera and two Narrow Angle Cameras. The Wide Angle Camera has 5 visible-light bands and 2 ultraviolet bands and images at ~100 meters per pixel. The Narrow Angle Cameras are grayscale and image at ~0.5-2 meters per pixel depending on orbit altitude. LROC was built to image areas of the Moon that receive direct light from the Sun. Click here for more information about LROC.
ShadowCam was built to image areas that only receive secondary illumination, or light that is bouncing off of something else. This allows it to image permanently shadowed regions at the Moon's poles. These regions tend to be at the bottom of large craters near the poles and in some cases haven't had direct sunlight in billions of years. ShadowCam takes grayscale images that range from 1.6-2.1 meters per pixel. For more info about ShadowCam, click here.
Will Collins at the Lunar Reconnaissance Orbiter Camera Science Operations Center at Arizona State University
Interview with Will Collins
These questions go along with your Social Studies geography objectives, so pay attention!
How did you become interested in geology and geography?
I became interested in geology when I was very young (like 6 or 7) after frequently going to Doc Rocks Gem Mine, which is near Blowing Rock, NC. At Doc Rocks you could buy buckets of dirt (mining tailings) which contained a variety of gemstones (primarily quartz variants, but you could also find ruby, sapphire, emerald, garnet, tourmaline, etc.). The really cool thing for me was that after you had finished sluicing through your bucket, they would take you aside, help you identify what gems you had found, and teach you about how they formed. We went often enough that when I was older, I was able to do much of the identifying myself. I also really enjoyed the outdoors as a kid, and I hiked a lot growing up (I still do). As I grew older, I started getting really into open-world video games, such as Minecraft, which combine exploration and creativity. All in all, this led to a continued enjoyment of exploration and the outdoors, with my love of rocks guiding me to want to pursue geology.
When I was looking for where I wanted to go to college, I primarily looked at schools out west, mostly in Arizona and Colorado. I ended up really liking Arizona State University. Going into college I didn't really know what I wanted to do with a geology degree, although I was pretty sure I didn't want to do anything with oil and gas. ASU's geology program is part of their School of Earth and Space Exploration (SESE), and there is a heavy focus on planetary geology (geology on other planets). ASU is involved in 20 or so NASA missions, and there are always opportunities popping up for students to get involved. I saw and applied to one of those opportunities in the summer between my freshman and sophomore years, and my application was accepted. The position was a student worker position at the LROC Science Operations Center doing data processing and making controlled mosaics, and that is the position I'm still in to this day.
How do you construct maps from the images you receive from the orbiters?
The two main categories of map products we make and work with are raster products and vector products. Some examples of raster products are the controlled mosaics that I make and Digital Terrain Models. Some examples of vector products are maps of tectonic features (such as lobate scarps and wrinkle ridges), geologic maps, and maps that show where different products exist. Vector products are generally created in GIS (Geographic Information System) software. The Digital Terrain Model creation process takes images of the same area from different angles and uses the distortion between them to determine depth, kind of like how our eyes use two offset images to create a sense of depth. For the controlled mosaics that I make, we take images collected under similar conditions, generally on consecutive orbits, and connect them together using shared features in regions of overlap. This is done using the Integrated Software for Imagers and Spectrometers (ISIS). We calibrate the images, then find a bunch of shared features, and then the computer tries to align the images as best it can using those shared features.
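For the curious, here is a minimal sketch of that "find shared features and align" step using real ISIS programs: building a control network of tie points, then running a bundle adjustment. The file names and parameter files are invented, and the real workflow involves much more review and cleanup.

```python
import subprocess

def isis(program, **params):
    """Run an ISIS command-line program with key=value parameters."""
    args = [program] + [f"{k.lower()}={v}" for k, v in params.items()]
    subprocess.run(args, check=True)

# Compute each image's footprint and find where neighboring images overlap.
isis("footprintinit", FROM="image.cal.cub")
isis("findimageoverlaps", FROMLIST="cubes.lis", OVERLAPLIST="overlaps.lis")

# Seed candidate tie points in the overlap regions, then register each
# point by matching the same surface feature in every overlapping image.
isis("autoseed", FROMLIST="cubes.lis", OVERLAPLIST="overlaps.lis",
     DEFFILE="seed.def", NETWORKID="demo", POINTID="pt????",
     ONET="seeded.net")
isis("pointreg", FROMLIST="cubes.lis", CNET="seeded.net",
     DEFFILE="register.def", ONET="registered.net")

# Bundle adjustment: solve for camera pointing so that all of the images
# agree on where the shared features sit on the surface.
isis("jigsaw", FROMLIST="cubes.lis", CNET="registered.net",
     ONET="controlled.net", UPDATE="yes")
```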
How many different ways can you display and use the data you receive from the orbiters?
The images I work with are primarily single-band imagery (one color) and are almost always displayed as rasters (grids of pixels where each pixel has the same dimensions). LROC Wide Angle Camera (WAC) imagery has 7 different bands (5 visible light and 2 ultraviolet), and you are able to view combinations of specific bands to look at different surface properties, such as titanium abundance. Not every combination of WAC bands is useful, but certain combinations can be, similar to how you can use different combinations of bands from Landsat imagery of Earth. Additionally, we can combine LROC data with that of other instruments to detect things such as new craters. Regolith temperature data from the DIVINER instrument aboard LRO can be used to identify very young craters, as they tend to be colder than the surrounding regolith (loose rock). Here is a fresh crater that is ringed in colder regolith, although the crater itself is much warmer than the surrounding ejecta blanket (material thrown out by an impact).
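As a rough illustration of what "combining bands" means, here is a sketch in Python using the open-source rasterio library. The file name, band order, and wavelengths are invented for the example; the idea is that dividing one band by another suppresses brightness differences caused by topography and emphasizes differences in surface composition.

```python
import numpy as np
import rasterio  # common open-source library for reading raster imagery

# Open a hypothetical multi-band WAC mosaic; the band indices and
# wavelengths below are assumptions for the sake of the example.
with rasterio.open("wac_mosaic.tif") as src:
    uv_band = src.read(1).astype(float)   # e.g., an ultraviolet band
    vis_band = src.read(3).astype(float)  # e.g., a visible-light band

# A simple band ratio: topographic shading brightens or darkens all
# bands together, so it largely cancels out, leaving compositional
# differences. (Zeros in the denominator are masked to avoid dividing
# by zero.)
ratio = np.divide(uv_band, vis_band,
                  out=np.zeros_like(uv_band), where=vis_band != 0)
```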
How can these maps help NASA problem solve for determining landing sites?
Digital Terrain Models can be used to determine elevation and slope, which is important for landing as you don't want to land on a steep slope. It is also useful to know where the ground is when landing. Controlled mosaics are useful for initial planning and for planning around hazards. Additionally, controlled mosaics could be useful for planning ground operations.
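As a small worked example of how a slope map comes out of a Digital Terrain Model (just the basic math, not the team's actual tooling):

```python
import numpy as np

def slope_degrees(dtm, pixel_size_m):
    """Slope map from a DTM: steepest incline at each pixel, in degrees.

    dtm is a 2-D array of elevations in meters; pixel_size_m is the
    ground distance each pixel covers.
    """
    dz_dy, dz_dx = np.gradient(dtm, pixel_size_m)       # rise over run
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Toy example: terrain rising steadily to the east at a 10% grade.
dtm = np.tile(np.arange(5) * 0.5, (5, 1))  # elevations in meters
print(slope_degrees(dtm, pixel_size_m=5.0).round(1))  # ~5.7 deg everywhere
```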
How do you evaluate the orbiters’ data for quality and accuracy?
The number one thing that affects our data's accuracy is probably spacecraft position error. GPS doesn't really work when you're in orbit around the Moon, so we use these things called SPICE kernels, provided by the Navigation and Ancillary Information Facility (NAIF), to determine where the spacecraft is at any point in time as well as where our cameras are pointing. These kernels are further corrected by the Lunar Orbiter Laser Altimeter (LOLA) team (another instrument on LRO) based on observations of features with known coordinates, such as the retroreflectors at the Apollo landing sites. Even with this, the average positional error for uncontrolled images is approximately 30 meters. For our controlled mosaics, we reduce and analyze positional error by tying the mosaics to pre-existing digital terrain models (DTMs) that we have produced, and to other images taken by the orbiter. The digital terrain models generally have good cartographic (map-making) accuracy.
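To make "SPICE kernels" a bit more concrete, here is a minimal sketch using spiceypy, an open-source Python wrapper for NAIF's SPICE toolkit. The metakernel file name is hypothetical, and the right LRO kernels would need to be loaded for this to actually run.

```python
import spiceypy as spice

# Load a metakernel that lists the needed SPICE kernels (spacecraft
# ephemeris, clock, pointing, lunar frames); the path is hypothetical.
spice.furnsh("lro_metakernel.tm")

# Where was LRO relative to the Moon's center at a given moment?
et = spice.str2et("2024-03-15T12:00:00")  # convert UTC to ephemeris time
pos, light_time = spice.spkpos("LRO", et, "MOON_ME", "NONE", "MOON")
print(pos)  # x, y, z in kilometers, in the Moon's body-fixed frame

spice.kclear()  # unload kernels when done
```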
As for data quality, the big things that affect our data quality are light levels and spacecraft jitter. If there isn't enough sunlight hitting the terrain being imaged, the images will be full of noise (pixelated) and not super useful. As for spacecraft jitter, it is just vibrations in the spacecraft that can be caused by any number of things (instruments moving, stuff heating up due to sun exposure, stuff cooling down, electronics doing electronic things, etc.). We have a few programs that analyze the images for jitter, but I don't know how they work, I just know how to interpret the outputs. Jitter tends to not be significant enough to affect controlled mosaics, but it does affect DTMs.
How do you turn images into labeled maps?
We primarily use GIS (Geographic Information System) tools to turn our data products into labeled maps. Some Adobe products also get used for further formatting and beautification (primarily Photoshop, Illustrator, and InDesign).
How can the images help identify the physical conditions at the landing sites?
The images can show the locations of hazards, such as boulder fields and craters, that you would want to avoid when picking a landing site. The images can also shed light on possible interesting locations to investigate during ground operations. Images can be combined to determine changes in elevation across the landing site, which can be used to avoid steep slopes.
How are the landing sites compared to determine which would be best for the Artemis rocket?
A number of things are considered, such as topography, lighting conditions, and proximity to science objectives. One of the big things for the Artemis missions is the availability of direct sunlight, as the poles get significantly less direct sunlight than the rest of the Moon. Here is a link that models the south pole illumination conditions from 2023 to 2030. Additionally, topography is an important concern, as you don't want to land in a dense boulder field or on the side of a steep hill. While slopes can generally be determined from LOLA topographic data, that data is too low resolution to make out most boulders (LOLA resolution at the Artemis sites is 5 meters per pixel). Other images can get down to 0.5 meters per pixel, and many boulders that could be hazardous can be observed in those images. These images and data from other instruments can be used to determine possible areas of interest for science. The goal is a landing site that has good lighting conditions, minimal hazards, and is near science objectives.
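Here's a quick back-of-the-envelope check (in Python) of why pixel size matters so much for spotting boulders; the 2-meter boulder size is just an example.

```python
# How many pixels wide does a boulder appear at each resolution?
boulder_diameter_m = 2.0  # example size for a lander-hazard boulder

for product, meters_per_pixel in [("LOLA topography", 5.0),
                                  ("NAC image (best case)", 0.5)]:
    pixels = boulder_diameter_m / meters_per_pixel
    print(f"{product}: {pixels:.1f} pixels across")

# LOLA topography: 0.4 pixels across       -> smaller than one pixel, invisible
# NAC image (best case): 4.0 pixels across -> resolvable as a hazard
```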
What patterns do you see in the physical geography of the moon at a local scale and at a regional scale?
There are a number of different patterns that I and others have observed. One is an elephant-hide-like texture that appears on slopes across the lunar surface (see this abstract by some of my colleagues). We also see ridges, interpreted as faults, that offset craters (see this article from the Smithsonian). Boulder falls are also observed in areas with steep slopes, as rocks become dislodged due to space weathering or vibrations caused by an impact or a moonquake. There are also some areas that are unlike anywhere else on the Moon, such as Ina.
Here's a picture of Will presenting his paper (see link below to read it) at the 2024 Lunar and Planetary Science Conference in Houston.