It is important to understand two things about deconvolution:

  • Deconvolution is "an ill-posed problem", due to the presence of noise in every dataset. This means that there is no one perfect solution, but rather a range of approximations to the "perfect" solution.
  • Deconvolution should not be confused with sharpening; deconvolution is a means to restore a dataset compromised by atmospheric turbulence and diffraction in the optics. It is not meant as an acuity-enhancing process.

Understanding these two points will make clear why the various parameters in this module exist.

Defect and singularity mask

The first order of business when using the Decon module is to generate an inverted defect and singularity mask. This mask should include all pixels we wish to deconvolve (green in the mask editor) and exclude all pixels that are not suitable (not green in the mask editor). Unsuitable pixels are areas that contain aberrant data, no data, or non-linear data. Examples are hot pixels, dead pixels, defective sensor columns, over-exposing star cores or (more rarely) highlights that have been non-linearly compressed by the sensor to fit into the dynamic range and prevent over-exposure. For your convenience, an AutoMask feature is available by means of the 'AutoMask' button (also launched upon opening the Decon module).

The AutoMask feature is able to generate a suitable mask in most cases by selecting 'Auto-generate mask'. A more conservative 'Auto-generate conservative mask' option is also available, which refrains as much as possible from masking out detail in the highlights. The latter may be useful if your dataset is quite clean and your acquisition instrument has a good linear response throughout the dynamic range, including into the highlights. Alternatively, you may also launch the Mask editor to create (or touch up) a mask yourself.

Deconvolution is extremely sensitive to aberrant data, as it relies on all data being "real" and (originally) linear in order to undo the specified blur in that area of the image. Letting Decon deconvolve aberrant data greatly impacts the immediate vicinity being deconvolved and virtually always generates significant artefacts.

The Point Spread Function (PSF)

A 3-panel image shows the same spiral galaxy core: the left image not deconvolved, the middle deconvolved with more detail visible, and the right deconvolved with ringing artefacts visible.
Left: original, middle: deconvolved image with appropriate settings, right: deconvolved image with ringing artefacts due to an inappropriate (too high) choice for the Radius parameter.

The Deconvolution algorithm's task is to reverse the blur caused by the atmosphere and optics. Stars, for example, are so far away that they should really render as single-pixel point lights. However, in most images, the stellar profiles of non-overexposing stars show the point light "smeared" out, yielding a core surrounded by light tapering off. Further diffraction may be caused by spider vanes and/or other obstructions in the Optical Tube Assembly, for example yielding diffraction spikes.

The point light's energy is scattered around its actual location, yielding the blur. The way a point light is blurred like this is also called a Point Spread Function (PSF). Deconvolution is all about modelling this PSF, then finding and applying its reverse to the best of our abilities.
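The blur model can be sketched in a few lines of Python (numpy/scipy; the kernel size and sigma value here are arbitrary illustrations, not StarTools settings): convolving a single-pixel point light with a normalised PSF redistributes its energy without creating or destroying any.

```python
import numpy as np
from scipy.signal import fftconvolve

# A point source: a single bright pixel on a dark background.
image = np.zeros((64, 64))
image[32, 32] = 1.0

# A simple Gaussian PSF kernel (sigma chosen purely for illustration).
y, x = np.mgrid[-8:9, -8:9]
sigma = 2.0
psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
psf /= psf.sum()  # normalise so total energy is preserved

# Convolving the point source with the PSF "smears" its energy out,
# just as the atmosphere and optics do to a star: the peak drops,
# but the total energy is merely redistributed, not lost.
blurred = fftconvolve(image, psf, mode="same")
```

Deconvolution attempts to run this process in reverse: gather the scattered energy back into the point it came from.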

Atmospheric or lens-related blur is more easily modelled, as its behaviour and effects on long-exposure photography have been well studied over the decades. Five subtly different models are available for selection via the 'Primary Point Spread Function' parameter:

  • 'Gaussian' uses a Gaussian distribution to model atmospheric blurring.
  • 'Circle of Confusion' models the way light rays from a lens are unable to come to a perfect focus when imaging a point source (aka the 'Circle of Confusion'). This distribution is suitable for images taken outside of Earth's atmosphere or images where Earth's atmosphere did otherwise not distort the image. It may also be used successfully on marginally oversampled datasets.
  • 'Moffat Beta=4.765 (Trujillo)' uses a Moffat distribution with a Beta factor of 4.765. Trujillo et al. (2001) propose in their paper that this value (and its resulting PSF) is the best fit for prevailing atmospheric turbulence theory.
  • 'Moffat Beta=3.0 (Saglia, FALT)' uses a Moffat distribution with a Beta factor of 3.0, which is a rough average of the values tested by Saglia et al. (1993). The value of ~3.0 also corresponds with the findings of Bendinelli et al. (1988) and was implemented as the default in the FALT software at ESO, as a result of studying the Mayall II cluster.
  • 'Moffat Beta=2.5 (IRAF)' uses a Moffat distribution with a Beta factor of 2.5, as implemented in the IRAF software suite by the United States National Optical Astronomy Observatory.
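For illustration, the Gaussian and Moffat radial profiles underlying these options can be sketched as follows (the widths chosen are arbitrary; the Beta values are those listed above). The key practical difference is that the Moffat profile has much heavier wings than a Gaussian, which is why it tends to fit seeing-limited star profiles better.

```python
import numpy as np

def gaussian_profile(r, sigma):
    """Gaussian profile: the classic model of atmospheric blurring."""
    return np.exp(-r**2 / (2 * sigma**2))

def moffat_profile(r, alpha, beta):
    """Moffat profile; a lower Beta means heavier wings.
    Beta = 4.765 (Trujillo), 3.0 (Saglia/FALT) or 2.5 (IRAF)."""
    return (1.0 + (r / alpha) ** 2) ** -beta

# Radial samples out to 10 pixels (widths chosen for illustration only).
r = np.linspace(0.0, 10.0, 101)
gauss = gaussian_profile(r, sigma=2.0)
moffat = moffat_profile(r, alpha=2.0, beta=2.5)

# Far from the core, the Moffat wings retain orders of magnitude more
# energy than the Gaussian - the hallmark of atmospheric seeing.
print(moffat[-1] > gauss[-1])
```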

A three-panel image showing an excerpt of a Hubble Space Telescope dataset.
Even this noisy and heavily drizzled Hubble dataset can be corrected by StarTools' Decon module at its native, drizzled resolution. Left: not deconvolved, middle: deconvolved, right: deconvolved and noise grain equalized.

The size (aka 'kernel size') of the chosen 'Primary Point Spread Function' is controlled by the 'Primary Radius' parameter. A good rule of thumb is to increase this value until ringing artefacts become noticeable, and then back off a little until they disappear again. An 'Enhanced Deringing' parameter is available that can further ameliorate ringing artefacts.

Deconvolution module interface with a guide star selected.
When choosing a star as PSF guide star and wishing to use a Dynamic Star Sample setting, make sure that the star is not masked out; masked out pixels are displayed in red.

Converging on an optimal solution is an iterative process in the Deconvolution module. In general, more iterations, controlled by the 'Iterations' parameter, yield a better result but take longer to compute; more iterations also tend to yield diminishing returns. Different datasets may benefit from more or fewer iterations. You may wish to experiment on a smaller preview section to evaluate improvements before computing deconvolution of the entire image. Deconvolution in StarTools always converges on an optimal solution and does not destabilise as seen in other software, except when 'Error Diffusion' is set to a non-zero value.

A 'Secondary Point Spread Function' may be specified by clicking on a guide star. The Deconvolution module will then use the star as a guide to construct a suitable total PSF. Good star samples are stars that do not overexpose, yet are not too dim, lie closer to the center of the image, and have a flat background. When a 'Secondary Point Spread Function' is provided, the total/final PSF used is a combination of that PSF modulated by the 'Primary Point Spread Function'. This allows you to create a final PSF that is tightly controlled by the ideal atmospheric profile (and its radius) as specified by the 'Primary Point Spread Function', while exhibiting a custom measure of deformity as seen in the selected star's PSF.

For example, to make Decon use the 'Secondary Point Spread Function' only, set the 'Primary Point Spread Function' to 'Circle of Confusion (No Atmosphere)' and specify a very large 'Primary PSF Radius'. As expected, smaller radii will start cutting off the 'Secondary Point Spread Function' in a circular fashion. For a gentler tapering off of the 'Secondary Point Spread Function', you can use, for example, a 'Gaussian (Fast)' profile for the 'Primary Point Spread Function'.
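One plausible way to picture this "modulation" is the primary PSF acting as a window over the sampled star stamp. The sketch below is purely hypothetical and is not StarTools' internal implementation; it only illustrates the described behaviour, where a very large primary radius leaves the star sample untouched and a small radius tapers it off toward the edges.

```python
import numpy as np

# Hypothetical sketch: the exact combination StarTools performs
# internally is not documented here.
y, x = np.mgrid[-8:9, -8:9]
r2 = x**2 + y**2

# Stand-in for a sampled star stamp with some deformity (slight offset
# mimicking, say, coma or guiding error).
star_stamp = np.exp(-((x - 1) ** 2 + y**2) / 8.0)

# Primary PSF acting as a window; a very large radius leaves the stamp
# essentially untouched, a small radius cuts it off circularly.
primary_window = np.exp(-r2 / (2 * 4.0**2))

total_psf = star_stamp * primary_window
total_psf /= total_psf.sum()  # keep total energy at 1.0
```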

Uniquely, any star chosen as a 'Secondary Point Spread Function' can be made to iteratively deconvolve along with the image (by choosing one of the 'Dynamic' Star Sample settings). This effectively means that the deconvolution process deconvolves with an ever-changing total PSF. This mode can yield very good, even superior results, depending on the fidelity of the initial star sample. If this mode is selected and you are using a preview, make sure that the chosen star is included in the preview and falls well within the preview area - a message will be shown if this is not the case.

When choosing a star as PSF guide star and wishing to use a Dynamic Star Sample feature, make sure that the star is not masked out; masked out pixels are displayed in red if a guide star is set. Masked out stars (and thus their derivative PSF) will not iteratively deconvolve along with the image and are hence not suitable for this mode.

Understanding regularization

5-panel image of a crop with various deconvolution settings applied.
Left: no deconvolution. 2nd left: "ideal" stable solution. Middle: moderate error diffusion. 2nd right: aggressive error diffusion. Right: unstable (too high 'Error Diffusion'). Notice that at 100% zoom, the intelligent error diffusion is barely noticeable, yet introduces subtly more detail and tighter stars (middle).

Deconvolution is exceptionally sensitive to noise; without something to discern between newly recovered detail and artefacts, the compounding effect of multiple iterations of deconvolving noise quickly ends up in a noisy, artefact-ridden mess. The process that discerns between artefact and detail is regularization.

Unlike in other software, regularization (and deconvolution as a whole) in StarTools is extremely adept at detecting and mitigating noise and artefact propagation, thanks to signal evolution Tracking. Regularization in StarTools is wholly driven by per-pixel SNR statistics gathered as you processed the image, thereby avoiding artefact development in low-SNR areas while guaranteeing maximum detail in higher-SNR areas. In fact, this ability makes applying deconvolution later in your processing a good idea, as Decon will have more up-to-date SNR statistics to work with. The closer your image is to completion, the more settled the per-pixel SNR measurements will be. These settled SNR measurements can then be taken into account by the regularization algorithm to yield the most appropriate results for your image.
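Conceptually, SNR-driven regularization can be pictured as a per-pixel blend between the deconvolved estimate and the original, weighted by confidence in the signal. The sketch below is purely illustrative (and the threshold parameter is a hypothetical name), not StarTools' actual algorithm:

```python
import numpy as np

def blend_by_snr(original, deconvolved, snr, snr_full_trust=5.0):
    """Illustrative only - not StarTools' regularization. Trust the
    deconvolved value where per-pixel SNR is high; fall back to the
    original where SNR is low, suppressing artefacts in noisy areas.
    snr_full_trust is a hypothetical parameter, not a StarTools one."""
    weight = np.clip(snr / snr_full_trust, 0.0, 1.0)
    return weight * deconvolved + (1.0 - weight) * original
```

In this toy model, a pixel with zero SNR keeps its original value entirely, while a high-SNR pixel takes the fully deconvolved value.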

5-panel at 200% zoom.
At 200% zoom, the error diffusion is revealed as a subtle noise/dithering pattern around the enhanced detail (middle), breaking the psychovisual effect.

Throughout all this, Deconvolution still operates on the linear data, even though the end result is calculated for your stretched and (possibly) heavily processed image; everything you did to the image is taken into account and compensated for. The mechanism responsible for this mathematical tour de force is signal evolution Tracking: decisions based on your stretched image are back-propagated to the dataset as it was when linear, re-calculated, then forward-propagated to the heavily processed state your dataset is now in.

You can think of this procedure as undoing all changes you made since you started with linear data until the dataset is linear again, making a modification to the dataset in its linear state, then redoing all those changes again, this time starting from the modified linear data. It is a little like time travel: changing the past using knowledge about the future. This unique approach to regularization means that deconvolution in StarTools converges to an optimal solution that fits the detected noise levels; it will not, by default, destabilise with more iterations, as is often seen in other software.

Psychovisual trickery

A further innovation in StarTools' deconvolution algorithm is its ability to tightly control destabilisation. It is possible to artificially limit StarTools' default advanced regularization behaviour by increasing the 'Error Diffusion' parameter from 0%. This causes the deconvolution algorithm to cleverly exploit a quirk of the human visual system: noise in areas of high detail is harder to discern. By allowing the solution to destabilise only in those areas, more perceptual detail can be eked out without the destabilisation becoming noticeable. It should be noted that at zoom levels higher than 100%, the illusion falls apart, and the human eye will start detecting the diffused grain for what it is: destabilisation artefacts.

Lunar, planetary and solar

Deconvolution of Jupiter before and after.
The Deconvolution module in StarTools is also exceptionally well suited to planetary, lunar and solar datasets.

Deconvolution of planetary, solar and lunar images can be achieved as well by switching 'Image Type' to 'Lunar/Planetary'. The difference between the 'Deep Space' and 'Lunar/Planetary' modes is the way reconstructed highlights are treated. With the 'Deep Space' setting, reconstructed highlights are allowed to overexpose (like any over-exposing stars in your image); in other words, the dynamic range of the entire image is not adjusted to accommodate the reconstructed detail. With 'Lunar/Planetary', however, reconstructed highlights are allocated additional dynamic range, so as not to make them overexpose. Note that this assumes there are no prior over-exposing areas (such as bright stars) in the source image.
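The difference between the two modes can be pictured with a toy example (purely illustrative, not StarTools' internals): one mode lets reconstructed highlights clip at white, the other rescales so the recovered highlight detail is given dynamic range.

```python
import numpy as np

def treat_highlights(image, image_type):
    """Illustrative sketch (not StarTools internals): 'deep_space'
    lets reconstructed highlights clip at white, 'lunar_planetary'
    rescales the image so the highlights fit within [0, 1]."""
    if image_type == "deep_space":
        return np.clip(image, 0.0, 1.0)
    return image / max(image.max(), 1.0)

# A deconvolved row in which reconstruction pushed a pixel past 1.0:
recon = np.array([0.2, 0.8, 1.4])
deep = treat_highlights(recon, "deep_space")        # highlight clips
lunar = treat_highlights(recon, "lunar_planetary")  # whole row rescaled
```

Note that the rescaling in the second branch only makes sense when nothing in the source image was already overexposed, matching the caveat above.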

Planetary, solar and lunar images will require a much less aggressive de-ringing strategy, so the 'Enhanced Deringing' parameter can usually be safely set to 0%.