Traditional astrophotography image processing software is fundamentally broken

In this example, brightness has been deliberately modified locally (top right) to enhance contrast. Unfortunately, traditional software cannot know that this area now has a much higher signal-to-noise ratio. As a result, noise reduction treats all areas the same, destroying much detail.

Chances are you have used a noise reduction routine at some stage. In astrophotography, the problem with most noise reduction routines is that they have no idea how much coarser the noise grain has become in the darker parts. They have no idea how you stretched and processed your image earlier. And they certainly have no idea how you squashed and stretched the noise component locally with wavelet sharpening or local contrast optimisation.
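To see the scale of the problem, here is a minimal sketch in Python with NumPy (an illustration only, not code from any particular package) of what a typical non-linear screen stretch does to noise grain. The stretch multiplies each pixel's noise by the local slope of the transfer curve, so faint areas come out with far coarser grain than bright ones, yet a "dumb" filter applied afterwards treats both identically.

```python
# Toy demonstration: the same read noise everywhere in the linear image
# ends up very different after a non-linear (asinh) stretch.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear image: a faint patch (low signal) and a bright patch.
signal = np.concatenate([np.full(10_000, 0.01), np.full(10_000, 0.50)])
noisy = signal + rng.normal(0.0, 0.005, signal.size)  # identical noise everywhere

# Typical non-linear "screen stretch"; its slope is steepest near zero.
stretched = np.arcsinh(noisy * 500.0) / np.arcsinh(500.0)

# Measure the noise grain each patch ends up with after the stretch.
print(f"noise std in faint area : {stretched[:10_000].std():.4f}")
print(f"noise std in bright area: {stretched[10_000:].std():.4f}")
# The faint area's grain comes out roughly an order of magnitude coarser,
# but a uniform noise filter has no way of knowing that.
```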

In short, the big problem is that separate image processing routines and filters have no idea what came before, nor what will come after, when you invoke them. All pixels are treated the same, regardless of their history. Trying to squash noise this way in your final image is like chasing your tail; you'll never get there. What's too much in one area is too little in another, all because of the way prior filters have modified the noise component beforehand.

The separation of image processing into dumb filters and objects is one of the biggest problems for signal fidelity in astrophotography image processing software today. It is the sole reason for poorer final images and steeper learning curves than are necessary.

But what if a noise reduction routine could work backwards from the finished image and trace noise propagation, per pixel, all the way back to the source signal? The things it could do! Indeed. Welcome to Tracking!
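As a rough illustration only (an assumed sketch, not StarTools' actual implementation), the idea can be pictured as carrying a per-pixel noise-scale map alongside the image and updating it with each operation's local slope, so that a final noise-reduction pass knows how every pixel's noise was squashed and stretched along the way.

```python
# Minimal sketch of per-pixel noise tracking (illustrative assumption, not
# the real Tracking engine): each operation updates a per-pixel noise map
# via first-order error propagation, and the final denoise step reads it.
import numpy as np

class TrackedImage:
    def __init__(self, data, noise_sigma):
        self.data = np.asarray(data, dtype=float)
        # Per-pixel estimate of the noise standard deviation.
        self.noise = np.full_like(self.data, float(noise_sigma))

    def apply(self, func, eps=1e-4):
        """Apply a per-pixel transfer function and propagate the noise map
        using the function's local slope."""
        slope = (func(self.data + eps) - func(self.data - eps)) / (2 * eps)
        self.data = func(self.data)
        self.noise = self.noise * np.abs(slope)
        return self

    def denoise_strength(self):
        """Per-pixel strength a noise filter could use: aggressive where the
        propagated noise is large, gentle where it is small."""
        return self.noise / self.noise.max()


# Usage: stretch a toy image, then ask how hard to denoise each pixel.
rng = np.random.default_rng(1)
img = TrackedImage(rng.uniform(0.0, 1.0, (4, 4)), noise_sigma=0.005)
img.apply(lambda x: np.arcsinh(x * 500.0) / np.arcsinh(500.0))  # non-linear stretch
print(img.denoise_strength().round(2))  # darker pixels end up near 1.0
```

In this toy version the map only tracks per-pixel scaling, but the point stands: once every filter reads from and writes to such a record, noise reduction at the end no longer has to guess what happened to each pixel.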

