This post describes how an astro-image like the one below is created. This will be a bit of a dry post. Let’s start with the finished product, an image I took of Messier 81, also known as Bode’s Galaxy, in the constellation Ursa Major.
I use the term “image” intentionally, to contrast with “photo.” These images are the result of capturing data on a CCD through a telescope in multiple long exposures; they are not photos. The data need to be processed to become the pretty images we see. The image of M81 above used 3 hours 25 minutes of total exposure time, taken as multiple 5-minute exposures, with white light, red, green, and blue captured separately and processed as described below to create the final image.
The data are noisy. There are anomalies in the optical system, like dust or uneven illumination through the telescope. Heat causes random charges to accumulate on a CCD during long exposures, even with the CCD chilled to -25°C. The CCD chip itself may have minor defects that affect how photons are collected. The electronics introduce noise when data are read off the CCD and passed to the computer controlling the camera. Finally, the objects being imaged are very dim, so the signal we are trying to capture is small, just barely above the background glow of the sky.
Multiple statistical and mathematical techniques are used to remove the noise and enhance the signal relative to the noise. There are many programs that provide the tools to process astronomical images and apply the techniques I will discuss below. I use PixInsight. Other software packages in this area include Maxim DL, CCDStack, and many others. These programs do all the mathematical heavy lifting.
The most powerful technique is very simple: collect as much data as you can. That means as much exposure time as possible. I take 5-minute exposures and combine them to get the effect of a longer single exposure. More on combining multiple exposures later; for now, let’s focus on what is done to a single sub-exposure, also called a sub-frame.
An individual sub-frame taken straight from the camera looks like this:
Not much to look at, is it? Almost all the data fall within a very narrow band at the low end of the value range. The histogram of the image looks like this:
Stretching the display of the image by moving the black point and midpoint reveals the galaxy we are imaging. You can see the galaxy, but also many specks across the image. (Click on any image to bring up a larger version.) Those specks are noise. The spike below the bright star on the right is a “bloom”; it occurs when pixels on the CCD fill with charge that leaks into adjacent pixels.
Calibration, or reduction, of the data reduces the noise in the images we have taken. To calibrate each sub-frame, we use three different kinds of calibration frames: Dark, Bias, and Flat.
A Dark frame is an image taken with the shutter closed for the same amount of time and at the same temperature as your sub-frame exposure. A Dark frame captures the thermal noise. To get a good Dark frame, I take many (45+) individual Darks and average them together to create a Master Dark representing the average thermal noise. This is subtracted from each sub-frame. A Master Dark looks like this:
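The averaging step can be sketched in a few lines of NumPy. This is a simplified illustration, not the author’s actual PixInsight workflow; the frame count, image size, and noise values are made up:

```python
import numpy as np

def make_master_dark(dark_frames):
    """Build a Master Dark by averaging a stack of dark frames pixel-by-pixel."""
    stack = np.stack(dark_frames)   # shape: (n_frames, height, width)
    return stack.mean(axis=0)       # per-pixel mean of the thermal noise

# Toy example: 45 simulated dark frames with Poisson-distributed thermal noise
rng = np.random.default_rng(42)
darks = [rng.poisson(lam=20.0, size=(4, 4)).astype(float) for _ in range(45)]
master_dark = make_master_dark(darks)

# The Master Dark is then subtracted from each sub-frame
sub_frame = rng.poisson(lam=500.0, size=(4, 4)).astype(float)
dark_subtracted = sub_frame - master_dark
```

Averaging many Darks works because the random thermal noise varies frame to frame, while the systematic thermal signal does not, so the average converges on the signal we want to subtract.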
A Bias frame is taken with zero exposure time and measures read noise, the noise introduced as data are read off the CCD. It, too, is subtracted from the sub-frame. A Master Bias (averaged from ~150 frames) looks like this:
A Flat frame is an exposure of a flat light source, like an evenly illuminated wall or the dawn sky. It is used to remove aberrations in the optical train, like vignetting or dust. Mathematically, the values in the Flat are normalized around the mean (each pixel value divided by the average pixel value), and each sub-frame pixel is divided by the corresponding normalized value. Dark areas are lightened and light areas darkened. You can see slight vignetting and the ring of a piece of dust in this Master Flat:
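Putting the three calibration frames together, the per-pixel arithmetic described above can be sketched roughly as follows. This is a simplified sketch of the general idea; a real pipeline like PixInsight’s handles details I gloss over here, such as the bias signal already contained in the dark:

```python
import numpy as np

def calibrate(sub, master_dark, master_bias, master_flat):
    """Calibrate one sub-frame: subtract dark and bias, divide by normalized flat."""
    flat_norm = master_flat / master_flat.mean()   # each pixel / average pixel value
    return (sub - master_dark - master_bias) / flat_norm

# Toy example: vignetting dims the right column of the sensor by half
sub  = np.array([[110.0, 60.0], [110.0, 60.0]])   # raw sub-frame
dark = np.full((2, 2), 10.0)                      # uniform thermal signal
bias = np.zeros((2, 2))                           # read noise (zero for clarity)
flat = np.array([[1.0, 0.5], [1.0, 0.5]])         # vignetted flat field

calibrated = calibrate(sub, dark, bias, flat)
# The dimmed right column is brightened back to match the left
```

Note how division by the normalized flat brightens the vignetted pixels and dims the over-illuminated ones, exactly the lighten/darken behavior described above.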
So now we have a set of sub-frames with their core noise removed. The final noise-reduction techniques involve how the sub-frames are taken and combined. When taking my 5-minute sub-frames, the software managing the imaging shifts the telescope’s position by several pixels between each sub-frame. This is called dithering. After calibrating the images, the software uses the stars to align each sub-frame to a chosen base frame. This means the image data are all aligned, but the chip-based noise is spread around. Picture a stack of images like this, aligned on the stars rather than the edges.
The next step is combination. One takes the aligned sub-frames (20 white-light, or luminance, frames in this example) and calculates the mean value for each pixel in the stack. Before calculating that mean, any pixel value more than 1.5 standard deviations from the mean is removed from the calculation. Since noise tends to show up as outliers, this eliminates more of it. This is called value rejection, and PixInsight provides many algorithms for it.
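A sigma-clipped mean is one simple form of value rejection; PixInsight offers more sophisticated variants, but a minimal sketch of the idea looks like this:

```python
import numpy as np

def sigma_clipped_mean(frames, kappa=1.5):
    """Per-pixel mean after rejecting values more than kappa sigma from the mean."""
    stack = np.stack(frames).astype(float)      # shape: (n_frames, height, width)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - mean) <= kappa * std  # reject outliers pixel-by-pixel
    kept = np.where(keep, stack, np.nan)        # mask rejected values
    return np.nanmean(kept, axis=0)             # mean of the surviving values

# Toy example: 20 aligned frames of flat sky at 100 ADU,
# with one frame hit by a cosmic ray at a single pixel
frames = [np.full((3, 3), 100.0) for _ in range(20)]
frames[0][1, 1] = 5000.0
result = sigma_clipped_mean(frames)
# The cosmic-ray hit lies far outside 1.5 sigma and is rejected,
# so every pixel of the combined result is 100
```

Because the dithering spread the chip-based noise to different pixels in different frames, a defect shows up in only one or two frames at any given pixel and is rejected as an outlier, while the galaxy’s signal survives in every frame.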
After this calibrate -> align -> combine process you end up with the final image for each color. The edges need to be trimmed because the star alignment leaves them ragged. Here is my final white light or luminance image, fully processed and ready for combination with red, green, and blue to create the final image at the top of the post.
If you have made it this far, I hope you found the post interesting. I apologize for any inaccuracies introduced by simplifying in order to explain the process succinctly.