A deconvolution algorithm's task is to reverse the blur caused by the atmosphere and optics. Stars, for example, are so far away that they should really render as single-pixel point lights. However, in most images, the stellar profiles of non-overexposing stars show the point light spread out across neighbouring pixels, yielding a brighter core surrounded by light tapering off. Further diffraction may be caused by spider vanes and/or other obstructions in the Optical Tube Assembly, for example yielding diffraction spikes. Even the mere act of imaging through a circular opening (which is, of course, unavoidable) causes diffraction and thus "blurring" of the incoming light.
The point light's energy is scattered and spread around its actual location, yielding the blur. The way a point light is blurred like this is described by a Point Spread Function (PSF). Of course, all light in your image is spread according to a Point Spread Function, not just the stars. Deconvolution is all about modelling this PSF, then finding and applying its reverse to the best of our abilities.
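To make the idea concrete, here is a minimal sketch (not StarTools code) of how a single-pixel point light gets spread out by a hypothetical 1-D PSF via discrete convolution — the process deconvolution tries to reverse:

```python
# A single-pixel "star" blurred by a hypothetical normalised PSF kernel.

def convolve(signal, psf):
    """Spread each input value across its neighbours according to the PSF."""
    half = len(psf) // 2
    out = [0.0] * len(signal)
    for i, value in enumerate(signal):
        for j, weight in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(signal):
                out[k] += value * weight
    return out

point = [0, 0, 0, 0, 1.0, 0, 0, 0, 0]   # a single-pixel point light
psf = [0.05, 0.25, 0.4, 0.25, 0.05]     # example blur kernel (sums to 1)
blurred = convolve(point, psf)
# The point's energy is now spread around its location: a bright core
# at index 4, tapering off towards its neighbours.
```

Note that the total energy is preserved; it is merely redistributed, which is exactly why a brighter core surrounded by a taper is what we observe in real stellar profiles.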
Traditional deconvolution, as found in other applications, assumes the Point Spread Function is the same across the entire image, in order to reduce computational and analytical complexity. However, in real-world applications the Point Spread Function varies with each (X, Y) location in a dataset. These differences may be large or small, but they are always present; no real-world optical system is perfect. Ergo, in a real-world scenario, a Point Spread Function that perfectly describes the distortion in one area of the dataset is typically incorrect for another area of that same dataset.
Traditionally, the "solution" to this problem has been to find a single, best-compromise PSF that works "well enough" for the entire image. This is necessarily coupled with reducing the amount of deconvolution possible before artifacts start to appear (due to the PSF not being accurate for all areas in the dataset).
Being able to use a unique PSF for every (X, Y) location in the image solves the aforementioned problems, allowing for superior recovery of detail without being limited by artifacts as quickly.
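One simple way to picture a per-location PSF is interpolation between PSFs measured at known positions. The sketch below is purely illustrative (the names and the bilinear scheme are assumptions, not the SVDecon implementation): it blends four corner PSFs into a local PSF for any pixel coordinate.

```python
# Hypothetical illustration: derive a local PSF for pixel (x, y) by
# bilinear interpolation between PSFs sampled at the image's four corners.

def interpolate_psf(psf_corners, x, y, width, height):
    """Blend four corner PSFs ('tl', 'tr', 'bl', 'br') into one local PSF."""
    fx = x / (width - 1)   # 0.0 at the left edge, 1.0 at the right edge
    fy = y / (height - 1)  # 0.0 at the top edge, 1.0 at the bottom edge
    n = len(psf_corners['tl'])
    local = []
    for i in range(n):
        top = psf_corners['tl'][i] * (1 - fx) + psf_corners['tr'][i] * fx
        bottom = psf_corners['bl'][i] * (1 - fx) + psf_corners['br'][i] * fx
        local.append(top * (1 - fy) + bottom * fy)
    return local

corners = {
    'tl': [0.2, 0.6, 0.2],   # tight PSF on the left side
    'tr': [0.3, 0.4, 0.3],   # wider PSF on the right side
    'bl': [0.2, 0.6, 0.2],
    'br': [0.3, 0.4, 0.3],
}
# Halfway across a 101-pixel-wide image, the local PSF is the average
# of its left and right neighbours: approximately [0.25, 0.5, 0.25].
local = interpolate_psf(corners, 50, 0, 101, 101)
```

A real spatially variant scheme models the change from pixel to pixel in a more sophisticated way, but the principle is the same: every location gets its own, locally correct PSF.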
The SVDecon module makes a distinction between two types of Point Spread Functions: synthetic and sampled Point Spread Functions. Depending on the implicit mode the module operates in, synthetic, sampled, or both synthetic and sampled PSFs are used.
When no samples are provided (for example on first launch of the SVDecon module), the module will fall back on a purely synthetic model for the PSF. As mentioned before, this mode uses a single PSF for the entire image. As such, the module is not operating in its spatially variant mode, but rather behaves like a traditional, single-PSF deconvolution algorithm as found in other software. Even in this mode, its results should be superior to most other implementations, thanks to signal evolution Tracking directing artefact suppression.
A number of parameters can be controlled separately for the synthetic and sampled Point Spread Function deconvolution stages.
Atmospheric and lens-related blur is easily modelled, as its behaviour and effects on long exposure photography have been well studied over the decades. Five subtly different models are available for selection via the 'Synthetic PSF Model' parameter.
Only the 'Circle of Confusion' model is available for further refinement when samples are available. This allows the user to further refine the sample-corrected dataset if desired, on the assumption that any remaining error is the result of optics-related 'Circle of Confusion' issues, with all other issues already corrected for as much as possible.
The PSF radius input for the chosen synthetic model is controlled by the 'Synthetic PSF Radius' parameter. This parameter corresponds to the approximate area over which the light was spread; reversing a larger 'blur' (for example in a narrow field dataset) will require a larger radius than a smaller 'blur' (for example in a wide field dataset).
The 'Synthetic Iterations' parameter specifies the number of iterations the deconvolution algorithm will go through, reversing the type of synthetic 'blur' specified by the 'Synthetic PSF Model'. Increasing this parameter will make the effect more pronounced, yielding better results up until a point where noise gradually starts to increase. Find the best trade-off in terms of noise increase (if any) and recovered detail, bearing in mind that StarTools signal evolution Tracking will meticulously track noise propagation and can snuff out a large portion of it during the Denoise stage when you switch Tracking off. A higher number of iterations will increase rendering times - you may wish to use a smaller preview in this case.
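The role of the iteration count can be illustrated with the classic Richardson-Lucy scheme, a well-known iterative deconvolution algorithm. This is shown purely as an illustration - StarTools' actual implementation is not public and is coupled to its Tracking engine - but it demonstrates why more iterations sharpen further while also progressively amplifying any noise:

```python
# Illustration only: Richardson-Lucy deconvolution in 1-D, pure Python.
# Each iteration compares the observed data with a re-blurred estimate
# and nudges the estimate towards whatever would explain the observation.

def convolve(signal, psf):
    half = len(psf) // 2
    out = [0.0] * len(signal)
    for i, v in enumerate(signal):
        for j, w in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(signal):
                out[k] += v * w
    return out

def richardson_lucy(observed, psf, iterations):
    psf_flipped = psf[::-1]
    estimate = [1.0] * len(observed)          # flat initial estimate
    for _ in range(iterations):
        blurred = convolve(estimate, psf)     # re-blur current estimate
        ratio = [o / b if b > 0 else 0.0
                 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]
observed = convolve([0, 0, 0, 4.0, 0, 0, 0], psf)  # a blurred point light
restored = richardson_lucy(observed, psf, 50)
# After enough iterations the estimate re-concentrates the spread-out
# energy back towards the original single-pixel point.
```

With noiseless data like this, more iterations simply converge towards the original point; with real, noisy data, the same correction step starts fitting the noise as well, which is why a trade-off point exists.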
Ideally, rather than relying on a single synthetic PSF, multiple Point Spread Functions are provided instead, by means of carefully selected samples. These samples should take the form of isolated stars on an even background that neither overexpose nor are too dim. Ideally, these samples are provided for all areas across the image, so that the module can analyse and model how the PSF changes from pixel to pixel for all areas of the image.
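The selection criteria above can be expressed as simple checks. The helper below is hypothetical (not part of StarTools, and its thresholds are made-up assumptions); it merely shows the kind of tests a candidate star patch should pass: not overexposed, not too dim, and sitting on an even background.

```python
# Hypothetical sample-suitability check for a small cut-out around a star.

def is_usable_sample(star_patch, white_point=1.0, min_peak=0.2,
                     max_background_spread=0.05):
    """Return True if the patch looks like a usable PSF sample."""
    peak = max(max(row) for row in star_patch)
    # Border pixels approximate the local background around the star.
    border = star_patch[0] + star_patch[-1] + \
             [row[0] for row in star_patch[1:-1]] + \
             [row[-1] for row in star_patch[1:-1]]
    even_background = (max(border) - min(border)) <= max_background_spread
    return min_peak <= peak < white_point and even_background

good = [[0.01, 0.02, 0.01],
        [0.02, 0.80, 0.02],
        [0.01, 0.02, 0.01]]
saturated = [[0.01, 0.02, 0.01],
             [0.02, 1.00, 0.02],   # clipped core: profile shape is lost
             [0.01, 0.02, 0.01]]
print(is_usable_sample(good))       # True
print(is_usable_sample(saturated))  # False
```

An overexposed core clips the very profile the module needs to measure, while a star that is too dim drowns the profile in noise - in both cases the sample would mislead the local PSF model rather than inform it.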
Even the best optical systems will suffer from minute differences in Point Spread Functions (aka "blur functions") across the image.
Any samples you set are stored in the StarTools.log file and can be restored using the 'LoadPSFs' button.
Pixels that fall inside the masked areas (designated "green" in the mask editor) are used during local PSF model construction.
Multiple samples provide a distortion model that varies per location in the image.