Traditional image processing software for astrophotography is fundamentally broken

In this example, brightness has been deliberately modified locally (top right), enhancing contrast. Unfortunately, traditional software cannot know that this area now has a much higher signal-to-noise ratio. As a result, noise reduction treats all areas the same, destroying much detail.

Chances are you have used noise reduction at some stage. In astrophotography, the problem with most noise reduction routines is that they have no idea how much worse the noise grain has become (or will become) in your image as you process(ed) it. These routines have no idea how you stretched and processed your image earlier, or how you will later. And they certainly have no idea how you squashed and stretched the noise component locally with wavelet sharpening or local contrast optimisation.
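To make this concrete, here is a minimal sketch in Python/NumPy (not taken from any particular product) of first-order noise propagation through a simple gamma stretch: the same noise level in the linear data turns into very different grain after stretching, depending on the local slope of the stretch at each pixel's value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear data: a faint area and a bright area that start out
# with identical additive noise (sigma = 0.01 in linear units).
faint = 0.02 + rng.normal(0.0, 0.01, 100_000)
bright = 0.60 + rng.normal(0.0, 0.01, 100_000)

def stretch(x, gamma=0.25):
    """A simple gamma stretch, standing in for any non-linear stretch."""
    return np.clip(x, 0.0, 1.0) ** gamma

# Measured grain after the stretch: no longer identical, because the
# slope of the stretch differs wildly between faint and bright values.
print("grain in faint area after stretch :", stretch(faint).std())
print("grain in bright area after stretch:", stretch(bright).std())

# First-order error propagation predicts the same effect analytically:
# sigma_out ~= |f'(x)| * sigma_in, with f'(x) = gamma * x**(gamma - 1)
for name, level in [("faint", 0.02), ("bright", 0.60)]:
    slope = 0.25 * level ** (0.25 - 1.0)
    print(f"predicted grain in {name} area:", slope * 0.01)
```

In this toy example the faint area ends up with roughly an order of magnitude more grain than the bright one, even though both started with identical noise; a conventional noise reduction routine invoked afterwards has no record of why.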

In short, the big problem is that separate image processing routines and filters have no idea what came before, nor what will come after, when you invoke them. All pixels are treated the same, regardless of their history (is this pixel from a high-SNR area or a low-SNR area? Who knows?). Current image processing routines and filters are still as 'dumb' as they were in the early 90s. It's still "input, output, next". They pick a point in time, look at the signal and the estimated noise component, and do their thing. This is still true for black-box AI-based algorithms; they cannot predict the future.
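As an illustration of the difference (all names and the variance-weighted blend below are hypothetical, not any product's API), compare a stateless "input, output, next" filter with one that is also handed a per-pixel noise estimate maintained by the steps that came before it:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def denoise_stateless(img, strength=0.5):
    """Conventional filter: blends every pixel towards its local mean by
    the same amount, with no knowledge of that pixel's history."""
    return (1.0 - strength) * img + strength * uniform_filter(img, size=5)

def denoise_history_aware(img, noise_sigma, strength=0.5):
    """Hypothetical history-aware filter: the per-pixel blend weight is
    scaled by the tracked local noise level, so noisy (low-SNR) areas are
    smoothed harder while clean (high-SNR) areas are left mostly alone."""
    weight = strength * noise_sigma / (noise_sigma.max() + 1e-12)
    return (1.0 - weight) * img + weight * uniform_filter(img, size=5)
```

The second version is only possible if something upstream has been keeping noise_sigma up to date for every pixel, which is exactly what conventional pipelines never do.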


Without knowing how the signal and its noise component evolved to become your final image, trying to, for example, squash visual noise accurately is fundamentally impossible. What's too much in one area is too little in another, all because of the way prior filters have modified the noise component beforehand. The same is true for applying noise reduction before stretching (e.g. at the linear stage); noise grain is ultimately only a problem once it becomes visible, and at the linear stage that hasn't happened yet. The only reason to apply any noise reduction at the linear stage is if your software's algorithms cannot cope with noise effectively; and that is a poor reason for destroying (or blatantly inventing) signal so early on.
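A small numerical sketch of why the grain is not yet a visible problem at the linear stage (again illustrative, with an arcsinh stretch standing in for whatever stretch you actually apply):

```python
import numpy as np

rng = np.random.default_rng(1)

# Faint linear signal (e.g. dim nebulosity) plus noise, in [0, 1] units.
linear = 0.02 + rng.normal(0.0, 0.005, (256, 256))

def asinh_stretch(x, soft=0.001):
    """A typical arcsinh stretch, standing in for any non-linear stretch."""
    return np.arcsinh(x / soft) / np.arcsinh(1.0 / soft)

# At the linear stage the grain spans about one 8-bit display level;
# it is not yet visible, so its final impact cannot be judged here.
print("grain before stretching (8-bit levels):", 255 * linear.std())

# After stretching, the very same grain spans several display levels
# and only now becomes a visible problem worth reducing.
print("grain after stretching (8-bit levels):", 255 * asinh_stretch(linear).std())
```

How much noise reduction this faint area will eventually need is unanswerable at the first print statement; it only becomes answerable once the stretch that will be applied is known.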

The separation of image processing into dumb filters and objects is one of the biggest problems for signal fidelity in astrophotography image processing software today. It is the sole reason for poorer final images, and for steeper learning curves than necessary. Without addressing this fundamental problem, "having more control with more filters and tools" is an illusion. The IKEA effect aside, long workflows with endless tweaking and corrections do not make for better images. On the contrary, they make for much poorer images, or for images that no longer reflect a photographic reality.

Now imagine if every tool, every filter, every algorithm could work backwards from the finished image, tracing signal evolution, per pixel, all the way back to the source signal. That's Tracking!
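One way to picture it (a toy sketch under my own assumptions, not how any particular implementation of Tracking works): carry a per-pixel noise estimate alongside the pixel data and have every operation update it, so that by the time the image is finished, each pixel's signal-and-noise history is available to any tool that asks.

```python
import numpy as np

class TrackedImage:
    """Toy 'tracking' sketch: every operation updates a per-pixel noise
    estimate alongside the pixel data, so later steps know each pixel's
    signal-to-noise history instead of guessing."""

    def __init__(self, data, noise_sigma):
        self.data = np.asarray(data, dtype=float)
        self.sigma = np.full_like(self.data, float(noise_sigma))

    def stretch(self, gamma=0.25):
        # First-order error propagation: the noise estimate is scaled by
        # the local slope of the stretch at each pixel's value.
        slope = gamma * np.clip(self.data, 1e-6, 1.0) ** (gamma - 1.0)
        self.sigma = self.sigma * slope
        self.data = np.clip(self.data, 0.0, 1.0) ** gamma
        return self

    def snr_map(self):
        # Per-pixel signal-to-noise, available to any subsequent tool.
        return self.data / np.maximum(self.sigma, 1e-12)

# Usage: after an arbitrary chain of tracked operations, a noise reducer
# could weight its strength by img.snr_map() instead of treating every
# pixel the same.
img = TrackedImage(np.random.default_rng(2).uniform(0.0, 1.0, (64, 64)),
                   noise_sigma=0.01)
img.stretch(gamma=0.25)
print("SNR range after stretch:", img.snr_map().min(), "to", img.snr_map().max())
```

With that bookkeeping in place, a noise reduction step run at the very end could remove grain exactly where the processing history created it, rather than applying one global strength everywhere.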