
Number of iterations and the `CLEAN' loop gain

 

The number of `CLEAN' subtractions $N$ and the loop gain $\gamma$ determine how deep the `CLEAN' goes. In particular, for a point source the residual left on the dirty image is $(1-\gamma)^N$. Hence, to minimize the number of `CLEAN' subtractions (and so to minimize the CPU time) $\gamma$ should be unity; one then finds, however, that extended structure is not well represented in the corresponding `CLEAN' image. In typical VLA applications a reasonable compromise lies in the range $0.1 \lesssim \gamma \lesssim 0.25$. (Note that this dependence of the `CLEAN' image on the loop gain demonstrates the multiplicity of solutions to the convolution equation.) Lower loop gains may be required if the u,v coverage is poor, but the improvements in deconvolution for $\gamma \lesssim 0.01$ are generally minimal. If in any doubt, it is wise to experiment (e.g., by decreasing $\gamma$ and increasing $N$). One exception to the use of low loop gain is in the removal of confusing sources; it is preferable to remove them with high loop gain, as their structure is usually not of interest.
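As a minimal numerical sketch of the relation quoted above (the function names below are illustrative, not taken from any imaging package), the residual $(1-\gamma)^N$ can be evaluated directly to see how the loop gain trades against the number of subtractions needed to reach a given depth on a point source:

    import math

    def point_source_residual(gamma, n_iter):
        # Fraction of the original point-source flux left after n_iter subtractions
        return (1.0 - gamma) ** n_iter

    def iterations_to_reach(gamma, depth):
        # Number of subtractions needed to reduce a point source to 'depth' of its peak
        return math.ceil(math.log(depth) / math.log(1.0 - gamma))

    # Example: subtractions needed to clean a point source down to 0.1% of its peak
    for gamma in (0.01, 0.1, 0.25):
        print(gamma, iterations_to_reach(gamma, 1e-3))
    # gamma = 1 removes a point source exactly, in a single subtraction
    print(1.0, 1)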

The choice of the number of iterations depends upon the amount of real emission in the dirty image. One should aim at transferring all brightness greater than the noise level to `CLEAN' components (some implementations of `CLEAN' allow one to specify a lower intensity limit for the components instead of $N$). `CLEAN'ing deep into the noise is usually a waste of time unless you specifically wish to analyze the extended, low surface-brightness emission (but see the section on the `CLEAN' beam).
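To make these stopping criteria concrete, here is a minimal sketch (assuming NumPy; the parameter names and the dirty-beam layout are assumptions for illustration, not the interface of any particular package) of a Högbom-style loop that stops either after a maximum number of subtractions or once the peak residual falls below a specified flux cutoff:

    import numpy as np

    def hogbom_clean(dirty, beam, gamma=0.1, n_max=1000, flux_cutoff=0.0):
        # 'dirty' is the (ny, nx) dirty image; 'beam' is the dirty beam on a
        # (2*ny, 2*nx) grid with its peak at pixel (ny, nx), so that a shifted
        # beam patch can be cut out for any component position.
        residual = dirty.copy()
        components = np.zeros_like(dirty)
        ny, nx = dirty.shape
        for _ in range(n_max):
            iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            peak = residual[iy, ix]
            if abs(peak) <= flux_cutoff:           # stop once at the noise level
                break
            components[iy, ix] += gamma * peak     # accumulate the CLEAN component
            # subtract the scaled, shifted dirty beam from the residual image
            residual -= gamma * peak * beam[ny - iy:2 * ny - iy, nx - ix:2 * nx - ix]
        return components, residual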

Examination of the list of `CLEAN' components, and, in particular, of the behavior of the accumulated intensity in the model, is useful in detecting divergence, which shows up as an accumulated intensity that fails to level off or oscillates wildly. As discussed above, divergence of the Högbom `CLEAN' is always due to a computational problem; possible culprits are the gridding process, aliasing, and finite precision arithmetic. In the Clark and Cotton-Schwab algorithms, the truncated dirty beam patch used in the minor cycles violates Schwarz's conditions, so both may be subject to instability or divergence if the minor cycle is prolonged unduly.
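A simple way to automate the check described above, sketched here with an ad hoc window and threshold (both are illustrative choices, not prescriptions from the text), is to track the cumulative flux density of the component list and flag a run whose total keeps changing long after it should have levelled off:

    import numpy as np

    def diverging(component_fluxes, window=100, growth_tol=0.05):
        # 'component_fluxes' lists the flux of each subtracted component in
        # iteration order; the run is flagged if the accumulated total changed
        # by more than growth_tol (fractionally) over the last 'window' steps.
        total = np.cumsum(component_fluxes)
        if total.size <= window:
            return False
        recent_change = total[-1] - total[-1 - window]
        return abs(recent_change) > growth_tol * abs(total[-1])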




