Great Global Warming Quote

I don’t often get controversial here, but I had to post this one.

More Art Than Science
So a couple of weeks ago we were in New Orleans, on the precise anniversary of Hurricane Katrina’s landfall two years ago. And the weather wasn’t bad. What happened? Isn’t it hurricane season? And weren’t hurricanes supposed to get even worse courtesy of “global warming”? It didn’t quite work out that way, as Bloomberg reports:

Hurricane researchers, who forecast seven more storms this season, have flubbed the past two annual estimates because of unusual El Nino and La Nina weather phenomena in the Atlantic and Pacific Oceans.

The predictions reflect variables that make this kind of weather forecasting “more art than science,” said Eric Blake, a hurricane specialist at the National Hurricane Center in Miami. Two of the nine Atlantic hurricanes predicted already have occurred for the season that ends Nov 30. Last year, five storms emerged after nine were anticipated.

Remember that: Weather forecasting is “more art than science.” Except of course when the forecasters want to dismantle our entire industrial economy. Then it’s settled science that no one may even question.

That last paragraph sums up my entire problem with the Human-Caused Global Warming Crowd: they throw away science for a passionate belief in what they want to be true.

Perseid Meteor Shower

It has been almost a month and I am finally getting around to writing up the Perseid experience, even though I quickly wrote up the Pre-Perseids the day after. This post documents my August 12, 2007 Perseid session.

My wife and daughters headed home to LA mid-afternoon; a friend of the family was arriving the next day, too early for a return trip after staying up most of the night as I planned to. I set up out on the patio by the master bedroom. I had a lawn chair, my 12×90 binoculars, a table, and my C-8 (still without drive motors, back to the dark ages!). I also set up my FM2 on a tripod, with some ISO 160 and ISO 400 professional color print film.

Many people make meteor observing a science, and set themselves up to get many photos, record accurate meteor counts, etc. That was not my intent. I wanted to see as many meteors as I could, do some visual observing so I wouldn’t go nuts by myself out there, and see if I could get some interesting pictures. I think I succeeded on all counts. But I did not get a super picture or any real scientific data.

I had everything set up at about 8:50pm (all times PDT). I spent the first hour focused on observing, going after globular clusters and double stars, using my trusty Celestron guide to the sky. I saw M80, M10, M12, and NGC 6293, and picked out at least six double stars. This was mixed with meteor watching. There were several bright grazers with long tails during that first hour.

Moving into the 11:00pm-to-midnight hour, the pace of meteors picked up. During that hour, I was fairly dedicated to meteor watching and recording, and saw a Perseid about once every other minute. They would come in bunches, and a few non-Perseids were mixed in with the Perseids. I distinguished the non-Perseids by their direction: a meteor shower is named after the constellation holding its radiant, the area of the apparent source of the meteors. The Perseids come from Perseus, in the northern sky this time of year, just below Cassiopeia. Any meteor that did not move generally from north to south I considered not a Perseid.

There were many bright trails during this hour. Things slowed down around midnight. I took a look at M31, the Andromeda galaxy, and then at the Double Cluster in Perseus. It was fantastic, a double clump of stars. Very impressive. The midnight to 1:00am hour was much slower. Only about 20 in the hour, one every several minutes, with this count probably being low because I stepped away several times. At 1:00am I put away the telescope.

After 1:00am the meteors seemed to come in clumps. I’d go several minutes and see none, then get a bunch. I recorded about 45 meteors between 1:00am and 2:30am, when I went in. I know that the more serious observers out there will rightly tell me that it was just getting started, but I was too tired to keep looking. I had accomplished my observing goals.

And then there was the camera. I had been trying all sorts of different exposures, shooting with a 35mm f/1.4 lens. My exposure times were a mix of whatever came to mind and however long it took me to realize I had left the shutter open. I also did not look carefully at the f-stop and took a number of shots with the aperture stopped down, which makes no sense when you want more light. I caught many planes. There is a major jet pathway that goes more-or-less over Hemet, which is north of us, so from our perspective the planes pass over Cahuilla Peak, right where the Perseids would be. I did catch one Perseid.

Perseid over Anza

The meteor is in the upper right of the image. You can clearly see Cassiopeia and make out the Andromeda galaxy in the lower center right. I also got a very nice star trails image.

Star Trails over Anza

I really like the colors in the stars. I will be getting a piggy-back device so that I can take long-exposure very wide-field images without trails. This image has shown me what is possible up there in the mostly dark skies of the Anza valley.

Altogether, a successful meteor shower watch. I now know what to expect and can plan a more professional watching and imaging session for the next shower. Perhaps the Geminids in December. It will be chilly, but it could be nice.

Pre-Perseids

My wife’s sister and brother-in-law came out to Lake Riverside for dinner and meteor watching last night. We had blue-foot chicken that my wife had found at Surfas Restaurant Supply the morning after we watched “Battle Blue-Foot Chicken” on The Food Network’s Iron Chef America. A bit of a coincidence. The thighs were not fat and plump like regular store chicken, and the flavor was good. Altogether a nice dinner.

I missed all the satellites from Heavens-Above, but that is not much of a loss. I got out the C-8 (with now non-functioning drive motor) and we had a small observing run. It included M57, the Ring Nebula, Albireo, and two very nice globular clusters in Scorpius, M80 and M4. M80 is a small, tight ball of stars. M4 is much larger and is visible with binoculars, as we discovered last night.

Meteor watching was OK, with my daughter reporting 16 seen over 2 hours from 10pm to midnight. I stayed up until 1am, but did not see too many more. The Milky Way was quite beautiful, and Andromeda was visible to the naked eye. Very pretty.

I hope to see more meteors tonight, and have another visual-only, manual observing run.

Mike Salway’s Jupiter Data

Mike Salway of IceInSpace posted, on the Bad Astronomy / Universe Today forum, some raw images of Jupiter he took from Australia. Mike is, in my view, the premier planetary imager. He provided the exercise in two parts: TIFFs in the first, and TIFFs plus AVI sequences in the second.

My first attempt on the first image didn’t get the best results, but I did have fun using PixInsight to process the data.

Since Mike suggested Registax and Photoshop, I decided to be contrarian and use PixInsight, a currently free image processing tool. It has a pretty steep learning curve, but it provides great control over many image-enhancement tools.

So, complying with Mike’s direction to share processing steps, here is what I did.

  1. I ignored the direction to align each color channel. I know this is lazy, but there was no easy way to do it. In the final image, I don’t see alignment differences between the channels.
  2. I combined each color image into an RGB image, factoring each color evenly.
  3. I extracted a luminance channel using “Extract Channel” in PixInsight. I used this to find the best wavelet settings for the image, based on the tutorial on the PixInsight website.
  4. Even after finding the right parameters for the à trous wavelet transform, I was still not satisfied. I created a mask using a severe wavelet transform of the planet. Parameters were: levels 8 & 9 (scales 128 & 256), bias 1.0 and 0.1 respectively, all other layers disabled. I then stretched the result with curves, eliminating the low end, to create the final mask.
  5. Using the mask, I applied wavelets to the entire image (a sketch of the à trous idea follows this list). The parameters were, by level: 1: off, 2: +12, 3: 0, 4: +0.8, 5: +0.6, 6: 0, 7: 0, 8: +0.3; Dynamic Range Adjust High: 0.4
  6. Still using the mask, but inverted, I darkened the limb of the planet. This countered the “flattened” look created by the sharpening process.
  7. I applied an overall curves adjustment (no mask) that increased contrast and boosted saturation. The ability to manipulate hue and saturation in the curves dialog is a great strength of PixInsight.
  8. I used the GREYCstoration noise reduction algorithm to smooth out the noise in the image. I increased the scale of the noise to 1.2 pixels; all other parameters were default.
  9. I saved the file as a FIT (PixInsight works in 32-bit floating point, so saving in FIT preserves all the data) and as a TIFF for Photoshop (16-bit integer).
  10. I saved the file as a PNG from Photoshop. No manipulation with that tool. I tried a high-pass filter, but it didn’t really help.
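Since I leaned on wavelets so heavily, here is a minimal Python sketch of the à trous decomposition itself, for anyone curious what “biasing a wavelet layer” actually does. This is my own numpy/scipy illustration, not PixInsight’s implementation; the function names and bias values are hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve

# B3-spline kernel used by the classic "a trous" decomposition
B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def atrous_layers(image, n_levels):
    """Split an image into detail layers of increasing scale plus a residual."""
    layers, current = [], image.astype(float)
    for level in range(n_levels):
        step = 2 ** level                       # holes between kernel taps
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = B3
        smoothed = convolve(current, kernel[None, :], mode='nearest')
        smoothed = convolve(smoothed, kernel[:, None], mode='nearest')
        layers.append(current - smoothed)       # detail at scale 2**level
        current = smoothed
    return layers, current                      # residual = large-scale light

def wavelet_sharpen(image, bias):
    """Recombine layers, scaling layer i by (1 + bias.get(i, 0)).
    A bias of 0 leaves a layer unchanged; negative bias suppresses it."""
    layers, residual = atrous_layers(image, max(bias) + 1)
    return residual + sum((1.0 + bias.get(i, 0.0)) * d
                          for i, d in enumerate(layers))
```

Something like `wavelet_sharpen(lum, {1: 0.12, 3: 0.08})` would boost the second and fourth layers, which is the spirit of the per-level biases in step 5.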

Here is the final result:

Mike Salway Jupiter try #1

Unfortunately, while I felt the contrast enhancement brought out the clouds well, the image lacks color and has an over-processed look.

For version #2, I took an approach that led to a much more subtle result.

  1. Convert the tiffs to grayscale, save as FITS
  2. Open in CCDStack, align the central region
  3. Open aligned images in PixInsight, color combine with equally weighted colors
  4. Apply a curves transform to increase the contrast, also use the unique ability in PixInsight to enhance saturation with a curve
  5. Apply a Wiener deconvolution using the standard settings (a sketch of the Wiener filter follows this list)
  6. Adjust contrast with curves
  7. Color balance with the histogram tool
  8. Another Wiener deconvolution, using a Std Dev of 1.5
  9. Yet another curves to add a “roundness” to the planet
  10. In Photoshop, save as a JPEG at 150% of original size
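PixInsight provides Wiener deconvolution as a built-in process; the sketch below is my own rough numpy illustration of the underlying math, assuming a Gaussian PSF whose sigma plays the role of the “Std Dev” setting above. The noise constant `k` is an illustrative knob of mine, not a PixInsight parameter.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered 2-D Gaussian PSF, normalized to unit sum."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(image, sigma=1.5, k=0.01):
    """Frequency-domain Wiener filter: divide out the PSF's transfer
    function, regularized by k (a noise-to-signal power estimate) so
    noise is not amplified where the PSF response is weak."""
    H = np.fft.fft2(np.fft.ifftshift(gaussian_psf(image.shape, sigma)))
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

Raising `k` tames the ringing and noise amplification that aggressive deconvolution produces, at the cost of less sharpening.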

Version #2 (resized in HTML for the blog)

Mike Salway Jupiter try #2

For this third and final version, as in my other attempts, I relied almost exclusively on PixInsight. Most of the processing below was done in a 64-bit space. While the original data doesn’t have that granularity, processing with this precision eliminates any rounding errors. Hey, it might not make any difference in this case, but it is cool knowing it can be done.
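As a toy illustration of the rounding-error point (nothing PixInsight-specific): sequentially accumulating a small value drifts in 32-bit floats while 64-bit floats hold steady. The numbers are contrived just to make the effect visible.

```python
import numpy as np

def accumulate(dtype, n=1_000_000, step=1e-6):
    """Sequentially add `step` n times at the given precision."""
    total = dtype(0.0)
    for _ in range(n):
        total = dtype(total + dtype(step))
    return total

print(accumulate(np.float32))  # drifts visibly away from 1.0
print(accumulate(np.float64))  # very close to 1.0
```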

Here was my process:

  1. Open each image in PixInsight, change to grayscale, save as FITS files
  2. Open in CCDStack, align based on central region, save again
  3. Color combine in PixInsight, using LRGB combine (w/o L) and an even 1/1/1 R/G/B balance
  4. Apply a curves transform, generally darkening the image
  5. Perform a Wiener deconvolution, 2.75 st. dev, 1.8 shape. This brings out the features in the planet’s clouds.
  6. Another modest curves to adjust contrast and a curve on color saturation (a PixInsight feature) to boost the color.
  7. Another modest Wiener deconvolution, 1.5 st. dev and 1.75 shape
  8. A very modest GREYCstoration noise reduction (0.2 magnitude); a rough sketch of this kind of edge-preserving smoothing follows this list
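GREYCstoration itself is a sophisticated anisotropic-diffusion algorithm; as a rough illustration of the edge-preserving idea behind this kind of noise reduction, here is a classic Perona–Malik diffusion sketch in numpy. It is a stand-in, not GREYCstoration’s actual method, and the parameters are illustrative.

```python
import numpy as np

def edge_preserving_smooth(image, n_iter=20, kappa=0.05, dt=0.2):
    """Perona-Malik style diffusion: smooth strongly in flat areas and
    weakly across edges. kappa sets the edge threshold (image assumed
    scaled to 0..1); dt <= 0.25 keeps the iteration stable."""
    def g(d):
        return np.exp(-(d / kappa) ** 2)   # conduction falls off at edges
    u = image.astype(float)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u    # differences to the 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```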

The third and final image looks like this:

Mike Salway Jupiter try #3

It was great fun working with data of the quality Mike produces. I think it may be time to upgrade my planetary camera. The ToUcam just isn’t delivering the image quality, and now I am spoiled.

New Baby!

On June 2, 2007 at 12:23pm PDT, our daughter was born at St. John’s Hospital in Santa Monica. She was a bit early: 3 lbs 10 oz, 18 inches. Mother and daughter have done well over the last three weeks.

I’ve been short on sleep as you might imagine. I’ve had some ideas for posts, but no energy. Perhaps over the next week…

Notes on Imaging M63

Back on May 12th, the Mar Vista Clear Sky Clock was predicting a nice evening, so I set out to do a night of imaging. After a bit of searching using The Sky and CCD Navigator, I settled on M63, the Sunflower Galaxy in Canes Venatici.

One of the major struggles was finding an object with an adequate guide star. The brighter the guide star, the more frequent the guiding corrections from the AO-7, and the sharper the image. The brightest star in a usable location for M63 is magnitude 9.83 — really quite dim. It required 1-second guide exposures, longer than I have ever used with the AO-7 and the C-11.

The day was quite hazy: poor transparency, in astro-speak. But poor transparency is often accompanied by good seeing, so I went ahead anyway. I got the camera set up, the scope aligned and focused, found the guide star, etc., and started imaging around 9:40pm.

I took 5-minute clear-filter shots and 3-minute R/G/B shots binned 2×2. The poor transparency made the effects of the severe local light pollution worse. The shorter RGB exposures reduce the impact of sky glow on the image.

The clear filter does not block the near-infrared (NIR) light that a luminance filter would. I have been using the clear filter based on comments I read to the effect that one can get good data in the NIR. That said, in this case it is a moot point, because I am also using a Hutech light pollution filter, which blocks the NIR anyway. In the process of writing this post, I found an article by Don Goldman concluding that, for galaxy imaging, one should use an L filter. But I digress.

The imaging went fairly easily. The temperature was stable, staying at about 50 degrees. If the temperature falls too fast, one needs to refocus frequently. I use Astrodon parfocal filters, so I do not need to refocus between filters. The galaxy transited at 10:51pm. I stopped imaging and took a set of flats. When I was rotating the camera to get the guide star back, I felt the clutches slip a little bit, so I needed to shut things down and restart the scope with a “last alignment.” The pointing was OK, but I forgot to turn PEC back on, so the guiding for the two clear images taken after the meridian flip was not that good. That was the one GRRRR moment of the session. I finished up late, after 2am, took the final set of flats, and went to sleep.

It took a while to get around to processing the data. I left the following Monday for a week in Dublin for a business meeting, so there was no time to work with the data until I got back. Here are my processing steps:

  1. Apply darks, flats, and biases to all frames in Maxim DL (there were 51 frames: 18 clear and 11 each R/G/B, though I had to throw out one blue image because of a satellite pass)
  2. Remove blooms from the images using Ron Wodaski’s bloom remover plug-in in Maxim DL. I tried to use the bloom removal tool in CCDStack, but it did not work for me.
  3. Aligned, sigma-rejected, and combined (summed) each set of clear and RGB images, and then the final LRGB image, using CCDStack (a sketch of the sigma-reject-and-sum idea follows this list).
  4. In PixInsight, I stretched the image to bring out the galaxy (a raw FITS file fresh from the camera needs contrast stretching to make the object, stars, etc. visible). I used dynamic background extraction to model the gradient from light pollution in each image and then used PixelMath to subtract it from the base image. Note: if you are not careful and use the default 2x downsample in background extraction, you can end up with half-sized final images. I tried the automatic background extraction, but the dynamic process worked much better.
  5. I then color combined the object in PixInsight. This involves creating a blank RGB image of the appropriate resolution, and using the LRGB combine process to create the combined image.
  6. Using the PixInsight histogram stretch, I balanced the colors in the image. This tool is very powerful: you can manipulate the histogram precisely, with detailed feedback on what your changes do. It even tells you how many pixels you have clipped, so you can manage that finely as well.
  7. With the new HDRWaveletTransform process in PixInsight, I brought out the details of the galaxy, and then adjusted contrast with a curves adjustment. To do an effective transform, you need to mask the stars so they don’t get bloated. I created two masks: a star mask and a galaxy mask. I subtracted the galaxy mask from the star mask (the original star mask included some details from the galaxy) and ended up with a decent mask.
  8. From here it was into Photoshop, where I applied a minor high-pass filter (8 pixels, blending mode Soft Light, opacity 41%), and adjusted saturation (+13) and color balance.
  9. Back to PixInsight for noise reduction. The newer GREYCstoration did not do a good job on the noise in the image. The ACDNR process, however, allowed me to focus the noise reduction on the color part of the image and did a very nice job without losing any details in the image.
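Regarding step 3: CCDStack’s rejection algorithm is its own, but the general sigma-reject-then-sum idea can be sketched in numpy along these lines, assuming the frames are already calibrated and aligned. The function and parameter names are my own.

```python
import numpy as np

def sigma_reject_sum(frames, kappa=2.5, iters=2):
    """Per pixel, iteratively reject values more than kappa sigma from
    the stack mean (satellite trails, cosmic ray hits), then sum the
    survivors, rescaling so partially rejected pixels match full depth."""
    stack = np.asarray(frames, dtype=float)          # shape (n, H, W)
    keep = np.ones(stack.shape, dtype=bool)
    for _ in range(iters):
        data = np.where(keep, stack, np.nan)
        mu, sd = np.nanmean(data, axis=0), np.nanstd(data, axis=0)
        keep &= np.abs(stack - mu) <= kappa * sd
    total = np.where(keep, stack, 0.0).sum(axis=0)
    n_used = keep.sum(axis=0)
    return total * stack.shape[0] / np.maximum(n_used, 1)
```

This is why the satellite pass only cost me one blue frame instead of ruining the combined blue channel: the rejection step drops the trail pixels before the sum.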

Here is the final image:

M63 -- The Sunflower Galaxy

10fps vs. 30fps on a ToUcam Pro

In an excellent post on a discussion thread in the Bad Astronomy / Universe Today forum, Mike (of IceInSpace) noted that the ToUcam Pro compresses data to achieve more than 10 frames per second over its USB 1.1 connection. This compression degrades the quality of anything captured above that frame rate.

I happened to have discovered the fps control on my ToUcam over the weekend and took two AVIs of the Plato crater, one at 30fps and the other at 10fps. This provides an excellent test case for this finding.

I have attached two JPGs, one from processing each of the AVIs in Registax. Both were processed in the same way:

  1. Aligned with a single 256-pixel box centered on the middle of the crater
  2. A reference shot of 50 frames was created and sharpened in wavelets
  3. The stack was limited to 60% and optimized
  4. The top 200 frames were selected and stacked (a sketch of this select-and-stack step follows the list)
  5. The image was sharpened with wavelets 9.2/26.0/13.2
  6. Saved as TIFFs from Registax, JPGs and PNGs from Photoshop, quality=80
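Registax’s quality estimator and optimizer are far more sophisticated, but the select-the-best-frames-and-stack idea of steps 3 and 4 can be sketched with OpenCV and numpy. The Laplacian-variance score below is my stand-in for Registax’s quality metric, and frame alignment is omitted for brevity.

```python
import cv2
import numpy as np

def stack_best_frames(avi_path, n_best=200):
    """Read an AVI, score each frame's sharpness by Laplacian variance,
    and average the n_best sharpest frames. Real stacking (as in
    Registax) also aligns the frames before combining."""
    cap = cv2.VideoCapture(avi_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float))
    cap.release()
    # Sharper frames have more local contrast, hence higher Laplacian variance
    scores = [cv2.Laplacian(f, cv2.CV_64F).var() for f in frames]
    best = np.argsort(scores)[-n_best:]
    return np.mean([frames[i] for i in best], axis=0)
```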

No other adjustments were made to the images. The PNG images are below.

At 10 frames per second:

Plato Crater at 10fps

At 30 frames per second:

Plato Crater at 30fps

My first take is that the 30fps image has less noise. Seeing was not good, and that could be a major factor: at 30fps there were three times as many frames to select from and stack, and that improvement may have outweighed the noise introduced by the in-camera compression.

TIFFs are available at the Observatorio de la Ballona FTP site.

Plato Craterlets

On the plane returning from the SAP Sapphire conference in Atlanta, I was reading the “Exploring the Moon” section of Sky & Telescope. This is a regular feature and I’d link to it, but they don’t carry the story on-line, or at least not yet. It seems the on-line magazine is only available through April, and I was reading the May edition.

It was an interesting article about the discovery of small craters on the floor of the crater Plato. Plato is a large, flat-bottomed crater near the top of the Moon, roughly centered left to right. This discovery, the article said, started a great series of discoveries of other craterlets and even transient phenomena. At the time, people believed that the craters on the Moon were volcanic; today we know they are impact craters, as the Moon has been volcanically dormant for several billion years. Altogether fun to read about the craterlets, and something to look forward to seeing.

I had that opportunity last night. Having arrived in late afternoon, I was home for dinner. After dinner, I was outside playing with my daughter when I noticed the Moon. Here was my opportunity. I opened the observatory and started up the scope. There were some high clouds, so I didn’t expect great seeing, and therefore I did not worry too much about scope cool-down.

The Moon was one day past first quarter, with the terminator just past the mid-line. Observing with a 22mm eyepiece (127x magnification), Plato was nicely visible, just on the light side of the terminator. Plato looked flat at first, but at select moments of still air, a craterlet would appear. It was quite exciting to see something I had read about just hours earlier. I really only saw one craterlet. At higher magnification, using a 3x Barlow, the effects of the poor seeing were magnified as well, and the craterlet winked in and out of visibility. I can understand why people took these objects for transient phenomena.

I had not planned to image that night, but the web cam was handy and easily set up. I took one AVI of about 90 seconds. After processing in Registax, PixInsight, and Photoshop, I got the following result.

Plato Crater

There are six craterlets that I could find, four easily. These larger craterlets are from 1.7 to 2.4 kilometers in diameter.
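For a sense of why the craterlets wink in and out with the seeing, a quick small-angle calculation (assuming the mean Earth-Moon distance) puts them right around the one-arcsecond level where typical seeing lives:

```python
import math

MOON_DISTANCE_KM = 384_400                    # mean Earth-Moon distance

def angular_size_arcsec(diameter_km):
    """Small-angle approximation: angle (radians) = size / distance."""
    return math.degrees(diameter_km / MOON_DISTANCE_KM) * 3600

for d_km in (1.7, 2.4):
    print(f"{d_km} km craterlet subtends {angular_size_arcsec(d_km):.2f} arcsec")
# -> about 0.91 and 1.29 arcsec, comparable to a night of average seeing
```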

The right side of the image is sharper because Registax’s tracking and aligning gave me real trouble except when I used a single, very large alignment box. That box selected frames that were clear on the right, without regard to the clarity on the left; hence, the right is clearer. In addition, I used sigma reject in Registax to reduce the effect of a bad pixel on my web camera.

Astrophotos vs. Reality

On the question of "does the photo look like the real thing," I don’t think there is an easy answer. With long-exposure astrophotos such as the Eskimo Nebula from Kitt Peak, this beautiful shot of NGC 1365 on APOD (credit SSRO), or even my shot of NGC 891, there is always an element of judgement in how the astrophotographer made it look. While you can set standards on star color so your color balance reflects the true spectral colors, judgement in processing still plays a key role, I think.

We can leave aside narrowband or non-visible electromagnetic radiation images. Reality vs. perception for those is like guessing what things look like to a bee or a cat.

At the Advanced Imaging Conference in San Jose last November, I listened to a discussion between David Malin (of the Anglo-Australian Observatory) and Jerry Bonnell (of NASA/USRA and co-editor of APOD). The conversation had started from the question "Does the photo look like the real thing?" in reference to astrophotos.

Malin said that, if processed correctly, the image would be the same as if we were able to turn up the magnification and sensitivity of our eyes. If we were out in space in front of the Crab Nebula (M1), we would see what the image shows us.

Bonnell, on the other hand, said that we don’t really know what things would look like if we were there. The density of the light is such that, were we close to a nebula, it could be so faint as to be invisible. Even "turning up our eyes" would not necessarily yield the same colors, regardless of tuning based on spectra.

My apologies to Malin and Bonnell if I have misstated their positions — this is as I recollect it.

I tend to favor Bonnell’s opinion, although I am certainly not one to pick an argument with Malin! I know that when I put together an astronomical image, I do just that — put it together: separate images for R/G/B and luminance; darks and flats applied; sigma reject used to reduce noise; multiple images summed; and finally everything combined into a full-color image. Then comes sharpening with high-pass filters, digital development, and wavelets, all of which affect the relative contrast of objects in the image.

While no data is added, with all these steps I cannot assert a connection to "what it really looks like." I try to make it look pleasing to the eye, but I don’t know if it is accurate. And unless we actually go out there, with probe or spaceship, I don’t think we will know what it looks like.

Certainly good science can be done and we can know many facts about astronomical objects. Visual perception, on the other hand, is so subjective I don’t think we can say what it looks like.