The best-kept secret amongst signal processing purists

StarTools monitors your signal and its noise component, per-pixel, throughout your processing (time). It sports image quality and unique functionality that far surpasses other software. Big claim? Let us back it up.

Your signal and its noise component

An overstretched, noisy image
When you stretch your dataset, you will notice noise grain quickly becoming visible in the darker areas.

If you have ever processed an astrophotographical image, you will have had to non-linearly stretch the image at some point, to make the darker parts with faint signal visible. Whether you used levels & curves, digital development, or some other tool, you will have noticed noise grain becoming visible quickly.

You may have also noticed that the noise grain always seems to be worse in the darker areas than in the brighter areas. The reason is simple: when you stretch the image to bring out the darker signal, you are also stretching the noise component of the signal along with it.
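The effect is easy to demonstrate numerically. The sketch below (hypothetical values, not StarTools code) applies a simple power-law stretch to two patches with identical absolute noise; because the stretch is far steeper near black, the noise in the dark patch occupies much more output range after stretching.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical linear patches with the same read noise (sigma = 0.01):
# one dark (mean 0.05), one bright (mean 0.60).
dark = 0.05 + rng.normal(0.0, 0.01, 100_000)
bright = 0.60 + rng.normal(0.0, 0.01, 100_000)

gamma = 0.25  # a simple power-law stretch to lift the shadows

# After stretching, measure how much output range the noise occupies.
# The stretch's slope is much steeper near black, so the same absolute
# noise is amplified far more in the dark patch.
noise_dark = np.std(np.clip(dark, 1e-6, 1.0) ** gamma)
noise_bright = np.std(np.clip(bright, 1e-6, 1.0) ** gamma)

print(noise_dark > noise_bright)  # shadow grain is amplified more
```

The exact means and sigma here are arbitrary; any monotonic stretch that lifts the shadows shows the same behaviour.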

And that is just a simple global stretch. Now consider that every pixel's noise component goes through many other transformations and changes as you process your image. Once you get into the more esoteric and advanced operations such as local contrast enhancements or wavelet sharpening, noise levels get distorted in all sorts of different ways in all sorts of different places.

The result? In your final image, noise is worse in some areas and milder in others. A "one-noise-reduction-pass-fits-all" approach no longer applies. Yet that is all other software packages - even the big names - offer.

Traditional image processing software for astrophotography is fundamentally broken

In this example, brightness has been deliberately modified locally (top right), enhancing contrast. Unfortunately, traditional software cannot know that area has a much higher signal-to-noise ratio. As a result, noise reduction treats all areas the same, destroying much detail.

Chances are you have used noise reduction at some stage. In astrophotography, the problem with most noise reduction routines is that they have no idea how much worse the noise grain has become (or will become) in your image as you process it. These routines have no idea how you stretched and processed your image earlier, or how you will later. And they certainly have no idea how you squashed and stretched the noise component locally with wavelet sharpening or local contrast optimization.

In short, the big problem is that separate image processing routines and filters have no idea what came before, nor what will come after, when you invoke them. All pixels are treated the same, regardless of their history. Current image processing routines and filters are still as 'dumb' as they were in the early 90s. It's still "input, output, next". They pick a point in time, look at the signal and its estimated noise component, and do their thing.

Without knowing how the signal and its noise component evolved to become your final image, trying to, for example, squash noise accurately is fundamentally impossible. What's too much in one area is too little in another, all because of the way prior filters have modified the noise component beforehand. The same is true for applying noise reduction before stretching: noise grain only becomes a problem once it is made visible, and before stretching that hasn't happened yet.
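The point can be illustrated with a tracked per-pixel noise map (a hypothetical sketch, not the StarTools implementation): after a local brightness boost, a single global noise estimate is wrong in the boosted region, while a noise map that is updated alongside the image stays accurate everywhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical image: flat signal plus noise with sigma = 0.01 everywhere.
image = 0.3 + rng.normal(0.0, 0.01, (64, 64))
noise_map = np.full((64, 64), 0.01)  # tracked per-pixel noise estimate

# A local contrast boost: triple the values in the top-right quadrant.
gain = np.ones((64, 64))
gain[:32, 32:] = 3.0
image *= gain
noise_map *= gain  # tracking: the noise component scales with the signal

# A "dumb" global denoiser assumes a single sigma for the whole frame;
# it is now wrong by a factor of three in the boosted quadrant, while
# the tracked map still reports the true local noise level.
global_sigma = np.median(noise_map)
print(float(noise_map[:32, 32:].mean()), float(global_sigma))
```

Any denoiser driven by `noise_map` instead of `global_sigma` can apply exactly the right strength in each region; the boost factor and sigma are of course arbitrary here.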

The separation of image processing into dumb filters and objects is one of the biggest problems for signal fidelity in astrophotographical image processing software today. It is the sole reason for poorer final images and steeper learning curves than necessary. Without addressing this fundamental problem, "having more control with more filters and tools" is an illusion. The IKEA effect aside, long workflows with endless tweaking and corrections do not make for better images. On the contrary, they make for much poorer images.

Now imagine every tool, every filter, every algorithm could work backwards from the finished image, tracing signal evolution, per-pixel, all the way back to the source signal. That's Tracking!

Tracking is the solution

In this example, brightness has been deliberately modified locally (top right), enhancing contrast. Yet StarTools still knows exactly where the noise grain is, even in images whose brightness has been modified locally like this. It does not touch the dark detail in the galaxy, yet effectively noise-reduces other dark parts that show visible noise grain.

Tracking is time travel

Tracking in StarTools makes sure that every module and algorithm can trace back how a pixel was modified at any point in time. It is the Tracking engine's job to allow modules and algorithms to "travel in time" to consult data and even change data (changing the past), and then forward-propagate the changes to the present.

The latter sees the Tracking engine re-apply every operation made since that point in time, but with the changed data as the starting point; changing the past for a better future. This is effectively signal processing in three dimensions: X, Y and time (X, Y, t).
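The rewind-and-replay mechanism can be sketched in a few lines (a toy illustration with hypothetical names, not StarTools' actual implementation): keep a log of operations and snapshots, so an edit can be applied at an earlier point in time and the subsequent operations replayed on top of it.

```python
import numpy as np

class TrackedImage:
    """Toy sketch of per-operation tracking: snapshots are kept so an
    edit can be applied 'in the past' and later ops replayed on top."""

    def __init__(self, data):
        self.snapshots = [np.asarray(data, dtype=float)]
        self.ops = []  # the operation applied after each snapshot

    def apply(self, op):
        self.ops.append(op)
        self.snapshots.append(op(self.snapshots[-1]))

    @property
    def current(self):
        return self.snapshots[-1]

    def edit_past(self, t, op):
        """Insert `op` at time t, then forward-propagate the later ops."""
        data = op(self.snapshots[t])
        for later in self.ops[t:]:
            data = later(data)
        return data

# Usage: stretch the data, then retroactively edit the still-linear past.
img = TrackedImage([1.0, 4.0])
img.apply(np.sqrt)                           # a non-linear stretch
result = img.edit_past(0, lambda x: x * 4)   # change the past...
print(result)                                # ...and replay the stretch
```

A real engine would store deltas or parameters rather than full snapshots, but the time axis works the same way: edit at time t, replay t..now.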

The power of 3D (X, Y, t) signal processing

Correct deconvolution of extremely noisy, stretched, locally contrast-enhanced (top left) data. No further masks (other than a simple auto-generated star mask), local supports or selective processing were used. Noise grain is correctly identified and ignored. Only areas with sufficient SNR are enhanced.

Deconvolution; an example

This remarkable feature enables never-before-seen functionality that allows you to, for example, apply correct deconvolution to heavily processed data. The deconvolution module "simply" travels back in time to a point where the data was still linear (deconvolution can only correctly be applied to linear data!). Once travelled back in time, deconvolution is applied and Tracking forward-propagates the changes. The result is exactly what your processed data would have looked like if you had applied deconvolution earlier and then processed it further.

A side-by-side image of M42's core
Signal evolution Tracking allows for many more enhancements over traditional software. For example, color constancy (right), effortlessly visualizing features with similar chemical/physical properties, regardless of brightness.

Sequence doesn't matter any more, allowing you to process and evaluate your image as you see fit. But wait, there's more!

Deconvolution; an example that gets even better

Time travelling like this is amazing and useful in its own right, but there is another major difference in StarTools' deconvolution module:

Because you initiated deconvolution at a later stage than is normally possible, the deconvolution module can take into account how you further processed the image after the point where it would normally have been invoked. The deconvolution module now has knowledge about a future it is not normally privy to in any other software. Specifically, that knowledge of the future tells it exactly how you stretched and modified every pixel - including its noise component - after the time its job should have been done.

An example of deconvolution in StarTools
200% zoomed crop of input and output. Left: original, right: Decon result. Thanks to signal evolution Tracking, and despite stretching, local dynamic range optimization and noise presence, Decon is able to recover the finest details without introducing artifacts, or the need for support masks or manual intervention.

You know what really loves per-pixel noise component statistics like these? Deconvolution regularization algorithms! A regularization algorithm suppresses the creation of artifacts caused by the deconvolution of - you guessed it - noise grain. Now that the deconvolution algorithm knows how noise grain will propagate in the "future", it can take that into account when applying deconvolution at the time when your data is still linear, avoiding a grainy "future" while letting you recover more detail. It is like going back in time and telling yourself the winning lottery numbers for today's draw.
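A toy sketch of what SNR-aware regularization can look like (illustrative only, not StarTools' actual algorithm): damp the Richardson-Lucy deconvolution update wherever the tracked noise level is high, so only high-SNR areas are sharpened and noisy areas are left alone.

```python
import numpy as np

def rl_step(estimate, observed, psf):
    """One Richardson-Lucy update on a 1-D signal."""
    conv = np.convolve(estimate, psf, mode="same")
    ratio = observed / np.maximum(conv, 1e-12)
    return estimate * np.convolve(ratio, psf[::-1], mode="same")

# Hypothetical blurred signal: two point sources, one in each half.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(32)
truth[8] = truth[24] = 1.0
observed = np.convolve(truth, psf, mode="same")

# Tracked per-pixel noise, as propagated from the 'future' processing:
# high on the left half, low on the right (purely illustrative values).
noise = np.concatenate([np.full(16, 0.2), np.full(16, 0.01)])
snr_weight = 1.0 / (1.0 + (noise / 0.05) ** 2)  # damping factor in (0, 1]

estimate = observed.copy()
for _ in range(20):
    update = rl_step(estimate, observed, psf)
    # Regularization: move toward the RL update only where SNR is high.
    estimate = (1 - snr_weight) * estimate + snr_weight * update
```

After the loop, the low-noise peak (index 24) has been sharpened back toward a point source, while the high-noise peak (index 8) is barely touched - the update was suppressed exactly where deconvolving would have amplified grain.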

What does this look like in practice? It looks like a deconvolution routine that just "magically" brings into focus what it can. No sub-optimal local supports needed, no subjective luminance masks needed, no selective blending needed. There is no exaggerated noise grain, just enhanced detail; objectively better results, in less time.

And all this is just what Tracking does for the deconvolution module. There are many more modules that rely on Tracking in a similar manner, achieving objectively better results than any other software, simply by being smarter with your hard-won signal. This is what StarTools is all about.


In StarTools, your signal is processed (read and written) in a time-fluid way. Being able to change the past for a better future not only gives you amazing new functionality, changing the past with knowledge of the future also means a much cleaner signal. Tracking always knows how to accurately estimate the noise component in your signal, no matter how heavily modified. Unnecessary subjectivity is - literally - taken out of the equation, yielding vastly better results in less time.