Minutes for meeting Tuesday, 17 Oct 2000 at 4:00pm EDT.
Date: 17 Oct 2000
Time: 4:00 pm EDT (2:00 pm Socorro, 1:00 pm Tucson)
Phone: (804)296-7082 (CV SoundStation Premier Conference phone).
Past ImCal minutes, etc. are on the MMA Imaging and Calibration Division Page.
"NSF is also pleased to be playing a leadership role in developing the international partnership for ALMA--the Atacama Large Millimeter Array." --remarks of Dr. Rita Colwell, Director, NSF, at the Dedication of the Green Bank Telescope, August 25, 2000.
Jaap Baars has produced version two of his 'Aspects of the Antennas for the Atacama Compact Array'. Darrel had some interesting comments. He notes that 'the baseline spec for the ALMA interferometer local oscillator coherence (i.e., fast phase jitter, too fast to be corrected by calibration) is set such that 90% coherence is obtained between an interferometer pair at 950 GHz. Then if the same local oscillator system were to be used at 1.5 THz, the coherence would become 77%, while at 2.5 THz it becomes 48%. So, with the current specs, a system using even perfect antennas would be degraded to less than half the theoretical sensitivity when used as an interferometer with this local oscillator system at that frequency.'
Darrel further notes that the 'sensitivity calibration in single dish mode, such as made with some hot/hotter load system or whatever, has to be tied somehow to the sensitivity in interferometer mode, which will always be degraded by finite LO coherence compared to single dish sensitivity. The baseline LO coherence spec implies a 1% degradation at 293 GHz, which needs to be taken into account if an overall absolute calibration accuracy of 1% is to be obtained.'
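Darrel's numbers all follow from the usual Gaussian phase-noise relation, coherence = exp(-&#963;&#178;/2), with the RMS phase jitter &#963; scaling linearly with observing frequency. A quick sketch (the exponential form and the linear frequency scaling are the standard assumptions here, not taken from the memo itself; the function name is mine):

```python
import math

def coherence(freq_ghz, ref_freq_ghz=950.0, ref_coherence=0.90):
    """Interferometer coherence at freq_ghz, assuming Gaussian LO phase
    jitter whose RMS (radians) scales linearly with frequency, anchored
    to a specified coherence at the reference frequency."""
    # exp(-sigma^2 / 2) = ref_coherence  =>  sigma^2 at the reference
    sigma2_ref = -2.0 * math.log(ref_coherence)
    sigma2 = sigma2_ref * (freq_ghz / ref_freq_ghz) ** 2
    return math.exp(-sigma2 / 2.0)

for f in (293.0, 950.0, 1500.0, 2500.0):
    print(f"{f:6.0f} GHz: coherence = {coherence(f):.2f}")
    # prints 0.99, 0.90, 0.77, 0.48 -- the figures quoted above
```

This reproduces both the 77%/48% degradation at 1.5/2.5 THz and the 1% loss at 293 GHz.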
Bryan and Simon are working on memos on a time-synchronized comparison of the transparency and interferometer phase measurements between Chajnantor and Pampa La Bola.
Please see Bryan's draft.
Time for another monthly report. The submitted version is here.
In response to Anneila's query, I've discussed this with you all and tried to understand her concern.
Jim forwarded to me Larry's reasoning on the question: Should the capacity for blanking subintegrations exist in the correlator computer?
>There are two situations in which blanking makes sense: (a)
>*Predictable* bad data, such as when the telescope is moving between
>sources or the subreflector is between positions. But this is always
>between integrations, not in the middle of one, so blanking of
>sub-integrations does not help. (b) *Unpredictable* events, like a
>hardware problem or interference. Only in case (b) does latency
>matter, and only then do you want to blank a sub-integration. Even
>so, it is only helpful if a sub-integration is very much shorter than
>a typical full integration. This is not likely for either the TI or
>for the array, since short integrations are usually desired.
>Furthermore, it is worth the trouble only if the unpredictable events
>are rather common, which I do not expect to be the case.
After discussing this with you, my understanding is this:
To effect blanking of a subintegration, the correlator would have to be told that blanking was necessary, perhaps through a bit on a bus. Alternatively, the correlator is not informed of the antenna location, but the online system, knowing that something was amiss and blanking called for, flags the data later. The correlator knows about the subintegrations but the online system only knows about the integrations handed along to it. In the limit of a much longer integration period than subintegration period, the latter system is less efficient.
For example, one antenna will always get to the source last. In a system worried about the data rate, we might prefer to flag the data at the correlator, which wouldn't let the data exit into the data stream. The astronomer would lose only the subintegration here. In the second sort of system, a flag would be applied afterward and would affect the entire integration, not just the subintegration. As I understand things, OVRO works the first way, in which flagging can occur on a subintegration in the correlator. The VLA works the second way, in which the system flags the integration.

As an example, suppose we are fast switching between source and calibrator. We need to be fast enough to freeze the atmosphere, but we need an integration long enough to see the source. For cross-correlation, the minimum subintegration is 16 ms, so there will be at most 62 subintegrations in the second on source. We would probably want several integrations in that second; let's suppose ten, so that six subintegrations go into one 96 ms integration. One of these might be lost waiting for the errant antenna. I can't see that flagging the subintegration makes much difference to the astronomer in this case. However, will this be true if he loses an entire Walsh cycle in a 650 GHz integration when the source is out of the tiny antenna beam for a few dozen ms? We need to think more about this, I think.
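The arithmetic of the two flagging granularities in the example above can be made concrete with a toy sketch (the constants follow the 16 ms / 96 ms numbers in the example; the function name is illustrative, not from any ALMA design document):

```python
SUBINT_MS = 16        # minimum cross-correlation subintegration
SUBINTS_PER_INT = 6   # six subintegrations -> one 96 ms integration

def data_lost_ms(bad_subints, flag_whole_integration):
    """Milliseconds of data discarded when `bad_subints` subintegrations
    within a single integration are bad, under the two schemes."""
    if flag_whole_integration:
        # online system flags after the fact: the whole integration goes
        return SUBINT_MS * SUBINTS_PER_INT
    # correlator blanks only the affected subintegrations
    return SUBINT_MS * bad_subints

# One antenna arrives late, spoiling a single subintegration:
print(data_lost_ms(1, flag_whole_integration=False))  # 16 ms lost
print(data_lost_ms(1, flag_whole_integration=True))   # 96 ms lost
```

At this ratio the astronomer loses 16 ms versus 96 ms per event, which is why the difference only matters if such events are common or the integrations are long.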
A system in which data rate is a driver will clearly prefer the first scheme, which doesn't let bad data into the datastream in the first place. Sensitivity plays a part also: if one must integrate longer, then one will probably have many subintegrations per integration. Bryan noted that sensitivity to data rate may have played a role in OVRO's decision to blank subintegrations. Larry points out that this is not the case for ALMA, which will normally have short integrations; including it adds a level of complexity, in that the correlator must know on very short timescales the status of antennas, LOs, and anything else which might cause a bad subintegration. At the VLA, which uses the second scheme, at least to my understanding, bad data does sometimes get through, so we have routines like QUACK, an annoying but not very pricey penalty.
In a system which is already very complex, and for which subintegrations are likely to be a hefty fraction of the integration period, a situation in which only full integrations may be flagged by the online system seems reasonable to me. But I worry about details, particularly for high frequency operation of the array.
a. Is my understanding of the system correct?
b. Does this make sense?
Steven Heddle reported:
14th September 2000: The first batch of results for the A array images CLEANed to 10,000 iterations have been posted. To aid the organisation of this page, and also reflecting the fact that the CLEAN results are hosted on another site, a single link to a new index page for the CLEAN results is provided below. The old CLEAN results are left here for the time being. The CLEAN parameters used are those of our best consensus, and will be posted soon. The B, C and E arrays are taxing me as to the registration of the convolved downsampled model and the deconvolved subimage for differencing purposes. However, the scripts have basically been written, so once this is sorted out the results should follow pronto.
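The differencing step Steven describes, comparing a convolved, downsampled model against a deconvolved subimage, can be sketched as follows. This is a minimal illustration with made-up array sizes and the trivial already-aligned case (the registration Steven mentions is exactly the hard part this glosses over); the function name and block-averaging scheme are my assumptions, not his scripts:

```python
import numpy as np

def difference_image(model, deconvolved, beam, factor):
    """Convolve the truth model with the restoring beam, downsample it
    to the deconvolved image's grid by block averaging, and subtract."""
    # FFT-based convolution; `beam` is a same-sized kernel with its
    # peak at the origin pixel
    conv = np.real(np.fft.ifft2(np.fft.fft2(model) * np.fft.fft2(beam)))
    # block-average downsampling by `factor`
    n = conv.shape[0] // factor
    trimmed = conv[:n * factor, :n * factor]
    down = trimmed.reshape(n, factor, n, factor).mean(axis=(1, 3))
    return down - deconvolved
```

With a delta-function beam and a zero "deconvolved" image, the result is just the block-averaged model, which makes the bookkeeping easy to check.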
Time for another phone meeting??
Action Items, 1 Aug 2000
DECISION: Configurations--where are we? Next phone meeting plans...ACA/THzA?
DECISION: Implementation of 183 GHz WVR?
DECISION: Are the specs for a nutating secondary correct?
DECISION: What is the total power specification on the ALMA?
DECISION: What is the effect of 1/f noise in the HEMT amplifiers of SIS receivers upon our ability to combine total power and interferometric images into a faithful representation of the sky?
May 1-2   | Site Development Discussions                     | Garching, Germany
May 15-17 | Test Interferometer Planning                     | Tucson, Arizona
Jun 12-16 | 2nd IRAM Millimeter Interferometry Summer School | IRAM Grenoble, France
Jun 19-20 | Vertex Antenna PDR                               | Duisburg, Germany
Jun 20-27 | 7th Synthesis Imaging Summer School              | Socorro, USA
Jun 21-22 | EIE Antenna PDR                                  | Venice, Italy
Oct 13    | ACC Meeting                                      | Paris, France
Oct 22-29 | Pasadena (annual DPS meeting)
Nov 3-7   | CV (Readhead meeting)
Nov 12-19 | Marrakech, Morocco (IAU site testing meeting)