Braun & Walterbos (1985) proposed a way to address the problem of incomplete short-spacing information in the absence of other shortcomings in the visibility sampling. A least-squares fit to a matched functional form is used to analytically continue the background beneath the locations of extended sources. The technique is efficient and successful for this restricted problem, where the confinement constraint can be applied effectively.
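The fitting step can be sketched as follows. This is a minimal illustration only, not the Braun & Walterbos method itself: a low-order polynomial surface stands in for their matched functional form, and the names `fit_background`, `source_mask`, and `order` are hypothetical. The background is fit by least squares to the pixels outside the source mask and then evaluated everywhere, so that the fit continues the background beneath the masked sources.

```python
import numpy as np

def fit_background(image, source_mask, order=2):
    """Least-squares fit of a low-order 2-D polynomial background to the
    pixels outside `source_mask`; evaluating the fitted surface over the
    whole grid continues the background beneath the masked sources.
    (Illustrative stand-in for a matched functional form.)"""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Design matrix of polynomial terms x**i * y**j with i + j <= order.
    terms = [x.ravel()**i * y.ravel()**j
             for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.stack(terms, axis=1).astype(float)
    outside = ~source_mask.ravel()
    coeffs, *_ = np.linalg.lstsq(A[outside], image.ravel()[outside],
                                 rcond=None)
    return (A @ coeffs).reshape(ny, nx)
```

With an exactly polynomial background, the fit recovers the surface under the mask to numerical precision; for real data the choice of functional form carries the confinement constraint.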
Hybrid techniques try to exploit the virtues of several algorithms simultaneously while avoiding their pitfalls. For example, the awkward but common problem of deconvolving compact structure on an extended background can be tackled by `CLEAN'ing the compact structure down to the level of the extended emission, followed by a MEM deconvolution of what remains. The component models of each method are then combined, restored, and added to the residuals.
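The two-stage hybrid described above can be sketched in one dimension as follows. Everything here is a toy illustration, not the text's actual implementation: `hogbom_clean`, `hybrid_deconvolve`, the `extended_level` threshold, and especially the simple multiplicative positivity-preserving update standing in for a production MEM solver are all assumptions made for the sketch.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=0.0, max_iter=1000):
    """Hogbom CLEAN: repeatedly subtract a scaled, shifted copy of the PSF
    at the current peak until the peak falls below `threshold`."""
    res = dirty.astype(float).copy()
    model = np.zeros_like(res)
    center = len(psf) // 2
    for _ in range(max_iter):
        p = int(np.argmax(np.abs(res)))
        if abs(res[p]) <= threshold:
            break
        amp = gain * res[p]
        model[p] += amp
        lo, hi = max(0, p - center), min(len(res), p - center + len(psf))
        res[lo:hi] -= amp * psf[lo - (p - center):hi - (p - center)]
    return model, res

def hybrid_deconvolve(dirty, psf, extended_level, mem_iters=100, rate=0.02):
    """Stage 1: CLEAN the compact structure down to `extended_level`.
    Stage 2: fit the remaining extended emission with a schematic
    multiplicative maximum-entropy-style update (a toy stand-in for a
    real MEM solver). Finally combine the component models, restore,
    and add the residuals."""
    compact, res = hogbom_clean(dirty, psf, threshold=extended_level)
    mem = np.full_like(res, max(res.mean(), 1e-6))   # flat positive start
    for _ in range(mem_iters):
        pred = np.convolve(mem, psf, mode='same')
        # Multiplicative update keeps the model positive, as MEM requires.
        mem *= np.exp(rate * np.correlate(res - pred, psf, mode='same'))
    model = compact + mem
    beam = np.exp(-0.5 * np.arange(-4, 5) ** 2)      # illustrative clean beam
    restored = np.convolve(model, beam, mode='same') + \
               (res - np.convolve(mem, psf, mode='same'))
    return restored, compact, mem
```

The key design point is the hand-off: CLEAN stops at the level of the extended emission, so the MEM stage sees only the smooth residual it handles well, and the final image adds both component models back with the remaining residuals.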
A variant of this approach, which is also effective for multi-pointing deconvolution problems, consists of `CLEAN'ing the individual pointings at the full available resolution and forming their linear combination with appropriate weighting, while using MEM to simultaneously deconvolve the data at low resolution. The results are merged by extracting the inner Fourier-transform plane of the MEM result, combining it (with appropriate normalization) with the outer Fourier-transform plane of the `CLEAN' result, and back-transforming. Such techniques hold considerable promise for general application, especially if their use can be streamlined.
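The Fourier-plane merging step can be sketched as follows. The function name, the cutoff, and the raised-cosine taper are illustrative choices, and the "appropriate normalization" mentioned above is assumed to have been applied already (i.e., the two images are on matching flux scales).

```python
import numpy as np

def merge_fourier(mem_image, clean_image, cutoff=0.25):
    """Merge two deconvolved images in the Fourier plane: keep the inner
    (low spatial frequency) transform of the MEM result and the outer
    transform of the CLEAN result, with a raised-cosine taper between
    `cutoff` and 1.5 * cutoff (in units of the Nyquist frequency), then
    back-transform."""
    F_mem = np.fft.fft2(mem_image)
    F_clean = np.fft.fft2(clean_image)
    ny, nx = mem_image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.hypot(fy, fx) / 0.5           # radial frequency / Nyquist
    t = np.clip((r - cutoff) / (0.5 * cutoff), 0.0, 1.0)
    w = 0.5 * (1.0 + np.cos(np.pi * t))  # 1 inside cutoff, 0 beyond taper
    return np.fft.ifft2(w * F_mem + (1.0 - w) * F_clean).real
```

A smooth taper rather than a hard cutoff avoids ringing in the merged image; when the two inputs agree, the merge leaves them unchanged.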
It is ironic that, formally, more is known about the type of images generated by MEM than by `CLEAN' (see, e.g., Narayan & Nityananda 1986), since `CLEAN' is rather more widely used. Indeed, many criticisms of MEM arise precisely because certain of its properties, such as the bias, can be analyzed. Schwarz's analysis of `CLEAN' is incomplete in that it does not address the interesting under-determined case in which there are fewer data than pixels. We hope that this problem will someday be investigated satisfactorily.
Although deconvolution algorithms are now as important as the receivers, correlators, and other equipment in determining the quality of images produced by a radio telescope, they are far less well understood. An apt description is that they are poorly engineered. Only further research and development of new and existing algorithms can redress this imbalance.
1996 November 4