`CLEAN' has dominated deconvolution in radio astronomy since its invention, but has not been widely used in other disciplines. Its decomposition of the image into point sources is often not appropriate for other types of image. In contrast, MEM has spread to many disciplines, probably because most of its justification is independent of the type of data to which it is applied.
The philosophy behind MEM is intriguing and may convince some of you of the objectivity of MEM (see Jaynes 1982 for an exposition of MEM from its inventor). For those of you who do not become acolytes, the practical differences between `CLEAN' and MEM may be more interesting.
`CLEAN' is nearly always faster than MEM for small and simple images, for which its approach of optimizing a small number of pixels is more efficient. For typical VLA images, the break-even point comes at around one million pixels. For large, complicated images such as those of supernova remnants at high resolution (up to 100 million pixels), `CLEAN' is impossibly slow, so an MEM-type deconvolution is mandatory.
`CLEAN' images are nearly always rougher than MEM images. This can be traced to the basic iterative scheme. In `CLEAN', what happens to one pixel is not directly coupled to what happens to its neighbors, except through the data constraints, so there is no mechanism to introduce smoothness. MEM couples pixels by minimizing the spread in their values, so the resulting images are smooth even though the entropy term does not explicitly embody spatial information.
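The decoupling of pixels in `CLEAN' can be seen directly in the structure of the basic (Högbom-style) loop. The sketch below is a minimal illustration in Python, not a production implementation; the function name and parameters are ours. Each iteration updates a single model pixel, so neighboring model pixels interact only through the shared residual image:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=100, threshold=0.0):
    """Minimal Hogbom-style CLEAN sketch: repeatedly locate the peak of
    the residual image and subtract a scaled, shifted copy of the PSF.
    Only one model pixel is updated per iteration; neighboring model
    pixels are never coupled except through the residuals."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    half = psf.shape[0] // 2   # assume a square, odd-sized PSF peaked at its center
    for _ in range(niter):
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if abs(peak) <= threshold:
            break
        model[y, x] += gain * peak
        # Subtract the PSF centered on (y, x), clipped to the image edges.
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < residual.shape[0] and 0 <= xx < residual.shape[1]:
                    residual[yy, xx] -= gain * peak * psf[half + dy, half + dx]
    return model, residual
```

For an isolated point source the accumulated component flux converges geometrically to the true flux; note that nothing in the loop encourages adjacent model pixels to agree, which is the origin of the roughness discussed above.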
Both MEM and `CLEAN' fail on some types of structure. `CLEAN' usually makes extended emission blotchy, and may introduce coherent errors such as stripes. MEM copes poorly with point sources in extended emission. Both work quite well on isolated sources with simple structure, and can produce meaningful enhancement of resolution, though MEM does slightly better in most cases. Both do poorly on mildly resolved objects, a surprising result that was first demonstrated by Briggs (1995), and that was the motivation for investigating algebraic deconvolution.
Both MEM and `CLEAN' can behave problematically when interpolating at the inner edge of the sampled u,v plane. MEM tends to over-estimate the intensity of the broadest-scale emission (the positivity bias), whereas `CLEAN' tends to underestimate it.
Since MEM tries to separate signal and noise, it is necessary to know the noise level reasonably well. Also, as mentioned above, knowledge of the total flux density in the image helps considerably. Apart from these, MEM has no other important control parameters, although it can be helped enormously by specifying a default image. `CLEAN' makes no attempt to separate out the noise, so specification of the noise level is not required. The main control parameters are the loop gain and the number of iterations, both of which are important in determining the final deconvolution.
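The role of the noise level in MEM can be made concrete with a toy objective. The sketch below is an illustrative one-dimensional Python toy, not any production MEM code; all names, the entropy convention, and the parameter values are ours. It maximizes J = H - alpha * chi^2 by gradient ascent; in a real MEM the multiplier alpha would be adjusted until chi^2 matches the expected noise power, which is precisely why the noise level sigma must be known. The flat default image is given the correct total flux, reflecting the point above that knowledge of the total flux density helps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D deconvolution problem (illustration only).
n = 64
true_sky = np.full(n, 0.1)
true_sky[30:34] = 2.0
x = np.arange(n)
beam = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)
beam /= beam.sum()

def convolve(img):
    # Circular convolution with the (symmetric) beam via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(img) * np.fft.fft(np.fft.ifftshift(beam))))

sigma = 0.01
dirty = convolve(true_sky) + rng.normal(0.0, sigma, n)

# Maximize J = H - alpha * chi^2 by gradient ascent, with the common
# entropy convention H = -sum I ln(I/(M e)), whose gradient is -ln(I/M).
# In a real MEM, alpha is iteratively adjusted until chi^2 matches the
# expected noise level n * sigma^2; here it is held fixed for brevity.
default = np.full(n, true_sky.sum() / n)   # flat default with the correct total flux
image = default.copy()
alpha = 50.0
for _ in range(2000):
    resid = dirty - convolve(image)
    grad = -np.log(image / default) + 2.0 * alpha * convolve(resid)
    image = np.clip(image + 0.01 * grad, 1e-8, None)   # enforce positivity
```

In practice alpha is re-tuned each iteration (and a total-flux constraint added) rather than held fixed; the toy keeps only enough structure to show why an estimate of sigma enters the algorithm at all.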
MEM's default image is a powerful way to introduce a priori information. Its effect can easily be mimicked in `CLEAN': the default image is simply used as the starting point for the collection of `CLEAN' components. The use of a disk model for a planet is an example of the use of a default in `CLEAN'.
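How the default acts as a prior follows directly from the entropy form. With no data constraints at all, the entropy gradient vanishes exactly at the default image, so the MEM image relaxes to the default wherever the data do not say otherwise. A minimal Python illustration (the entropy convention H = -sum I ln(I/(M e)) is one common choice; the values here are arbitrary):

```python
import numpy as np

# Unconstrained gradient ascent on H = -sum I*ln(I/(M*e)), whose
# gradient -ln(I/M) is zero exactly at I = M: with no data to fit,
# the MEM image relaxes to the default image M.
default = np.array([1.0, 2.0, 0.5, 4.0])
image = np.ones_like(default)
for _ in range(500):
    image = image - 0.1 * np.log(image / default)
```

In `CLEAN' the analogous prior is expressed operationally, by seeding the component collection with the default model, as described above.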
Both `CLEAN' and MEM perform better if bright point sources are either removed beforehand or registered exactly on pixels when the dirty image is constructed. Without such registration, `CLEAN' attempts to construct a multi-component (i.e. extended) model of each such source to represent its positional offset. It is possible for the algorithm to correct itself by the use of negative `CLEAN' components, but its attempts to do so complicate the assessment of how well `CLEAN' is progressing. As point-source misregistration also creates difficult problems for positivity-constrained algorithms such as MEM, it is much better to choose image centers and pixel sizes that avoid it for the brightest compact features in any image.
1996 November 4