The first thing that many people think when they see an astrophoto is how "unreal" it looks. And the first thing many people say is, "but what does it look like in real life?" By this they mean: with the human eye, if you were floating out in space next to a nebula, how would it look? Or maybe, how would it look with your eye up against the eyepiece of a telescope? There are also others who take the ethereality of astrophotography to a whole different level and label all such images as "fake" or "photoshopped". These questions and accusations will be discussed farther down this page.
This page may eventually also go into detail about the more technical aspects of image processing, although those details shouldn't be taken as fact because the author is by no means an expert at image processing and cuts so many corners that it is amazing the edges of the images are even straight.
The Human Eye and Cameras, the Reality of Colors, and Post-Processing
Pt 1 : The Eye and Cameras
The human eye is not very good at seeing things in space. They are just too faint. Even the brightest nebulae will appear mostly as faint grey clouds to the human eye. Even the Milky Way, the galaxy we are inside of, is relatively faint even from dark locations. So it should be no surprise that the answer to what the human eye would really see of the subject of an astrophoto is: almost nothing. What would that nebula look like if we were inside of it? Well, we are inside the Milky Way, and that definitely isn't easy to see. Basically, the human eye just isn't meant for seeing space. That is why we use cameras.
The main advantage of cameras over the eye is that they are capable of taking in light for a very long period of time. The "shutter speed", per se, of the human eye is much less than a second, whereas a camera can expose for minutes at a time. This extra time allows photons to build up on the sensor so that the image shows much more detail than what the eye can collect in an instant.
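To put some rough numbers on that: photon arrivals follow Poisson statistics, so the signal grows linearly with exposure time while the noise grows only as its square root. Here is a minimal simulation of the idea (in Python, with a completely made-up target brightness), just to show how signal-to-noise climbs with exposure:

```python
import numpy as np

rng = np.random.default_rng(0)
flux = 5.0  # photons per second from a faint target (made-up value)

for exposure in (0.1, 10, 300):  # an eye-like glimpse vs. long camera exposures, in seconds
    # Photon arrival is a Poisson process: the signal grows linearly with
    # time, the shot noise only as its square root, so SNR improves as sqrt(t).
    counts = rng.poisson(flux * exposure, size=10_000)
    snr = counts.mean() / counts.std()
    print(f"{exposure:6.1f} s: ~{flux * exposure:7.0f} photons, SNR ~ {snr:.1f}")
```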
So when a camera takes an image and more detail shows up than the eye can see, is that detail somehow not real because the eye couldn't see it? No. That would be like walking into a dark stairwell and claiming that the stairs don't exist because the lights are off and you can't see them. Obviously, that makes no sense, and you will fall down the stairs whether you can see them or not, because they are in fact real. Just as the faint dusts of space are real.
Pt 2 : The Reality of Colors
A second and more complex point of contention about the reality of space images has to do with color. Are the vibrant hues in images of nebulae and galaxies "real", or are they more of an artistic representation? Well, we can start with an argument similar to the stairwell one: would you walk into a dark forest and claim that all the leaves are black just because it is nighttime and you can't tell they are actually green? Obviously not. The problem of color, however, is a bit more complicated than this, because color, in the way that most people think of it, is fairly subjective. Different animals see different colors, an image taken by a phone never looks quite like reality, and in the end color is interpreted by the brain, an imperfect not-machine, anyway. And, as previously mentioned, most things in space are much too faint for the eye to see any color in them at all.
The logical thing to ask, then, is "well, if these objects weren't so faint, and we had eyes sensitive enough to see them, what color would they be?" In astrophotography, those would-be hues are termed "true color", a terrible phrase that inevitably leads to many misconceptions. To understand how a true color astro image can be made, it is enough to realize that it happens in exactly the same way that any normal camera takes a color image of the world. The camera sensor has red, green, and blue pixels, those pixels receive light in their respective red, green, and blue wavelengths, and then a color image is made by combining the red, green, and blue into every other color based on the amount of photons received at each wavelength. True color astrophotography is the same exact thing. Even if the camera is monochrome, images are taken through red, green, and blue filters, then later combined to make a color image. This all seems fairly simple, then. The colors in astrophotos are just as real as the colors in a normal photo of the world.
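For the curious, here is a rough sketch of that channel combination in code (Python with numpy and astropy; the filenames are just placeholders for whatever a capture program writes out):

```python
import numpy as np
from astropy.io import fits

# Three monochrome frames shot through red, green, and blue filters.
# (Filenames are placeholders, not real files.)
r = fits.getdata("target_R.fits").astype(float)
g = fits.getdata("target_G.fits").astype(float)
b = fits.getdata("target_B.fits").astype(float)

def norm(c):
    # Scale a channel to the 0..1 range.
    return (c - c.min()) / (c.max() - c.min())

# Stack into one H x W x 3 color image -- the same thing a color camera does
# internally with the tiny red, green, and blue filters over its pixels.
rgb = np.dstack([norm(r), norm(g), norm(b)])
```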
Sadly, it gets more complex, thanks in part to the terrible term "true color". That term for the combination of red, green, and blue into a fair representation of what the human eye would see leads to a second type of color in astrophotography known as "false color". Whoever came up with the words "false color" clearly wasn't thinking very far ahead to the time when they would have to explain that the colors in their false color image were not, in fact, false at all. False color in astrophotography generally involves collecting data in "colors" that are not red, green, and blue, then combining them in the same way one would combine red, green, and blue to get an image with color in it. This works because rather than collecting all the wavelengths that are approximately red and using that as the red data, you can collect data at one very specific wavelength, such as hydrogen alpha, then call it red in post-processing. The reason these are "false color" images is that the data you end up with in the red, green, and blue channels of the final image aren't very good representations of red, green, and blue, and therefore aren't really what you would see if you could see colors in space.
This does not mean, however, that the colors in false color images are false in the sense that they are not "real" and are somehow added in digitally. The data collected in each color channel is just as real as before; it just isn't necessarily an accurate representation of the true color balance of the target.
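A concrete example of false color is the famous "Hubble palette", which maps sulfur II to red, hydrogen alpha to green, and oxygen III to blue. A sketch of that mapping, with the same placeholder-filename caveat as before:

```python
import numpy as np
from astropy.io import fits

# Narrowband frames: each one is a real measurement of a single emission line.
# (Filenames are placeholders again.)
sii  = fits.getdata("target_SII.fits").astype(float)   # sulfur II, ~672 nm (red)
ha   = fits.getdata("target_Ha.fits").astype(float)    # hydrogen alpha, ~656 nm (red)
oiii = fits.getdata("target_OIII.fits").astype(float)  # oxygen III, ~501 nm (teal)

def norm(c):
    return (c - c.min()) / (c.max() - c.min())

# The Hubble palette: SII -> red, Ha -> green, OIII -> blue. Both SII and Ha
# are actually red light, so the resulting colors are "false" -- but the data
# is real, only the channel assignment is arbitrary.
sho = np.dstack([norm(sii), norm(ha), norm(oiii)])
```

Notice the code is identical in spirit to the true color version; the only thing that changed is which real measurement gets assigned to which display channel.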
Pt 3 : Post Processing
One of the biggest parts of astrophotography, and admittedly the most subjective part, is the processing done to the data to turn it into a final image. Initially, all the data is monochrome, very faint, and won't give a good representation of the object. Software programs, like Photoshop and PixInsight, are used to perform various functions such as stacking and stretching to enhance the final image. Unfortunately, like the "false color" debate, this often leads to confusion over whether the final images are "Photoshopped" in the sense that they are no longer accurate depictions of reality, and also to the idea that astrophotography is mostly art. Good astro processing, however, never adds or subtracts anything from the image that wasn't already in the data. That isn't to say there is no artistic element: as previously discussed, colors are mostly subjective, and exactly how much processing should be done and how it should happen are entirely up to the individual imager.
An example of a processing workflow, or at least some of the basic steps taken between raw data and final image, is shown below.
Single autostretched data frame
Autostretch of stack of data
Color combination of channels
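For the curious, here is a very rough sketch of what those stacking and stretching steps amount to in code. It uses synthetic data and a simple asinh curve; real tools like PixInsight use far more sophisticated rejection and stretching methods:

```python
import numpy as np

def stack(frames):
    # Median-combine aligned subframes: random noise averages down while
    # outliers like satellite trails and hot pixels get rejected outright.
    return np.median(np.stack(frames), axis=0)

def autostretch(img, black=0.001, white=0.999):
    # A simple stretch: clip at low/high quantiles, then apply a nonlinear
    # asinh curve that lifts faint detail without blowing out bright stars.
    lo, hi = np.quantile(img, [black, white])
    clipped = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return np.arcsinh(10.0 * clipped) / np.arcsinh(10.0)

# Stand-in for real subframes: ten noisy shots of a synthetic "nebula".
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
truth = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))
frames = [truth + rng.normal(0.0, 0.5, truth.shape) for _ in range(10)]

stacked = stack(frames)           # noise drops roughly as sqrt(# of frames)
stretched = autostretch(stacked)  # the faint structure finally becomes visible
# With real data this would be done per filter, then the stretched red, green,
# and blue results combined into a color image as shown earlier.
```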
Color Subjectivity and Calibration
It can often be difficult to tell exactly what color the final image should be. There is certainly more subjectivity with narrowband, but even with a broadband target like this image of Andromeda, it is hard to know which version is the most "natural."
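One relatively objective anchor is to rescale the channels so that the empty sky background comes out a neutral gray, a simplified version of the background neutralization tools found in programs like PixInsight. A minimal sketch of that idea (the low-quantile background estimate just assumes most of the frame is empty sky):

```python
import numpy as np

def neutralize_background(rgb):
    # Estimate the sky background of each channel with a low quantile
    # (assumes most pixels are background sky, not nebula or galaxy),
    # then rescale so the background is the same neutral gray in all three.
    bg = np.array([np.quantile(rgb[..., c], 0.10) for c in range(3)])
    return np.clip(rgb * (bg.mean() / bg), 0.0, 1.0)
```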
Starless Images
Although heavily editing an image beyond what exists in reality is generally frowned upon in astrophotography, one interesting thing to do is digitally remove all the stars in an image with software. This emphasizes gas and dust structures that would normally be overshadowed by the bright stars in the image. It generally works best on narrowband nebula targets, although the example here is a broadband galaxy.
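Dedicated tools such as StarNet++ do this with trained neural networks. Purely to illustrate the idea, though, a crude version can be approximated with a median filter wider than any star:

```python
import numpy as np
from scipy.ndimage import median_filter

def crude_starless(img, star_size=15):
    # Stars are tiny bright points while nebulosity and galaxy structure are
    # large and smooth, so a median filter wider than any star erases the
    # stars but keeps the big structures mostly intact. Real star removal
    # tools produce far cleaner results; this only shows the concept.
    starless = median_filter(img, size=star_size)
    stars_only = np.clip(img - starless, 0.0, None)  # residual: just the stars
    return starless, stars_only
```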
**Section on planetary image processing, which is completely different than deep space processing, is under construction**