------------------

Charge 3: "Help the science IPT plan their study of the impact of calibration on a handful of the most challenging major science goals, in particular by providing the ASAC's views on the types of projects you feel are the most challenging from a calibration point of view. Review and comment on the science IPT report when finished."

Documents considered:

I. Project Book (1998): the main driver for 1% calibration is high-fidelity imaging/mosaics. Note that this is more a requirement on stability than on absolute calibration, although it could be construed as an absolute requirement when combining data from different arrays, or from the ACA, unless some sort of cross-calibration scheme can be devised using the data itself (e.g. self-cal using common models/baselines).

II. Project Plan (2003): science requirements and specs.

Section 3.3.9 of the project plan gives these specs:
- visibility amplitude fluctuations <1% at <300 GHz; <3% at >300 GHz
- 0.1% polarimetric measurements require L-R gain fluctuations < 5e-4 in 5 min (i.e. between calibrations); a numerical sketch of this relation follows at the end of this section

Section 3.5 gives a further spec:
- the flux scale should be accurate to <1% at <300 GHz; <3% at >300 GHz

Section 2 gives the level 1 science goals:
1. Detect CO or C+ in a Milky Way-type galaxy at z=3 in 24 hrs. The calibration requirements are not very strict here; 3-5% is fine for things like excitation analysis or SED constraints.
2. Gas kinematic imaging of proto-planetary disks at 150 pc: searches for gaps, etc. Again, 3-5% is probably OK.
3. 0.1" resolution imaging with high fidelity (1000:1). This could drive the calibration requirement to 1%, although one can often self-cal in this situation.

Overall, the three level 1 goals do not fundamentally require 1% absolute calibration, although 1% stability may be a requirement. Note that the three memos by Carilli, Dutrey, and Bachmann essentially cover the physics behind the three level 1 goals in some detail, and in all three cases the conclusion is that 3-5% calibration is probably OK. I do not think more detail is required here.

III. DRSP: the DRSP gives us a broader view of potential projects, although they are not necessarily the high-profile programs that should be driving the telescope design. The Hogerheijde report concludes:
- just a few programs (planetary) have a 1-3% absolute requirement
- 30% need repeatability to 1-3%
- 50% need band-to-band relative calibration to 1-3%

IV. Planetary science: there are a number of programs in planetary science that really are enabled by 1% calibration. These deal mostly with studying the properties of planetary or smaller-body (moons, asteroids) regoliths, surface maps, and polar regions. Some examples that need 1% absolute calibration (from Brian Butler -- see details below):
- Titan's surface: liquid hydrocarbons vs. water ice -- 3% calibration will be insufficient to constrain the index of refraction
- SO2 in the Venusian atmosphere as a probe of volcanic activity
- the thermal state of Mercury's interior: a test for a molten core via absolute energetics

Overall, according to Brian, the 3% absolute spec precludes some interesting science. But again, he doesn't see this science as being so compelling as to dictate unrealizable specs on the telescope.

V. There is one potentially interesting issue, which is cross-calibration between the Planck 350 to 850 um bands and ALMA. The Blain DRSP program talks about 10000 point sources in Planck. I reckon Planck should have much better than 1% internal calibration, and the CMB sets a pretty accurate absolute scale.
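On the Section 3.3.9 polarimetric spec above: a toy Monte Carlo (my own sketch, not from the project plan) checking the connection between L-R gain stability and spurious polarization. It assumes circular feeds, an unpolarized point source, and net amplitude gain errors collapsed into a single factor per polarization; the variable names and the independent-Gaussian-drift model are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials = 100_000
    sigma = 5e-4                     # assumed rms L/R amplitude gain drift
                                     # between calibrations (the spec level)
    dg_r = rng.normal(0.0, sigma, n_trials)
    dg_l = rng.normal(0.0, sigma, n_trials)

    I_true = 1.0                     # unpolarized source: RR = LL = I
    rr = (1.0 + dg_r) * I_true
    ll = (1.0 + dg_l) * I_true
    p_spur = (rr - ll) / (rr + ll)   # apparent fractional circular polarization

    print(f"rms spurious V/I: {p_spur.std():.1e}")
    # prints ~3.5e-4: independent gain drifts at the 5e-4 level keep the
    # spurious polarization under the 1e-3 (0.1%) goal, with some margin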
In terms of what the IPT can do to help, I can think of two things:
1. Generate a report on the effect of 3% errors on high-fidelity imaging (this may already be in progress by the Japanese?).
2. Write up, in a memo or some such, the planetary science programs that are lost in going to 3%. Or we could just include these in the ASAC report, although it might be good to have some figures?

--------------------------

FROM BRIAN BUTLER, Sept 20, 2004

Chris,

Attached is a stroll down memory lane for solar system calibration requirements. There are no details in any of these older writeups - I think they were simply based on the experience of the folks in the groups. To work up a case for mm-specific things would be a bit of work. Let me give two examples from cm wavelengths, though, that illustrate the problem.

1 - Surface composition of Titan

It is now clear from IR imaging of the surface that there are regions of bright and dark terrain. Are these variously exposures of clean water ice ("continents") and dark liquid hydrocarbon? We don't know. Radar observations indicate extensive liquid hydrocarbon deposits in equatorial regions, but little is known beyond about 5 deg latitude. Radio observations could potentially discriminate between these two end-member surface types by making accurate measurements of the brightness temperature and then deducing the bulk dielectric from the inferred emissivity - which would be 1.8 or so for liquid hydrocarbon, 3.2 or so for solid water ice. But, unfortunately, an overall flux density scale uncertainty of 3% (currently the case at X-band) implies an uncertainty on the dielectric of about +-0.8, making it very difficult to determine where there is potentially water ice and where liquid hydrocarbon. While at mm wavelengths the surface of Titan is mostly obscured, the same holds true for all icy (and rocky, for that matter) surfaces - a 3% uncertainty in the flux density scale will mean large uncertainties in the derived bulk dielectric. This means that you have to be able to do extremely accurate polarization work to get this quantity instead (better than 0.1% polarization, sometimes in the presence of a nasty confusing source [Saturn, Jupiter, etc.]).
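To make the +-0.8 figure concrete, here is a short sketch (my own arithmetic, not Brian's) propagating a 3% flux scale error into the inferred dielectric. It assumes a smooth surface at normal incidence, a known surface temperature, and a disk-averaged brightness temperature proportional to emissivity; the midpoint dielectric of 2.5 is an illustrative choice.

    import numpy as np

    def emissivity(eps):
        """Normal-incidence Fresnel emissivity of a smooth dielectric."""
        n = np.sqrt(eps)
        return 1.0 - ((n - 1.0) / (n + 1.0)) ** 2

    def dielectric(e):
        """Exact inversion of emissivity() for 0 < e < 1."""
        r = np.sqrt(1.0 - e)             # amplitude reflection coefficient
        return ((1.0 + r) / (1.0 - r)) ** 2

    eps_true = 2.5                       # midway between hydrocarbon (~1.8) and ice (~3.2)
    e_true = emissivity(eps_true)
    for scale in (0.97, 1.00, 1.03):     # flux scale off by -3%, 0, +3%
        print(f"flux scale {scale:.2f} -> eps = {dielectric(scale * e_true):.2f}")

This prints eps = 3.18, 2.50, and 1.82: a +-3% flux scale error spans essentially the whole hydrocarbon-to-ice range, which is exactly the point of the example.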
2 - Abundance of SO2 in the atmosphere of Venus

The abundance of SO2 in the lower atmosphere of Venus seems to have been changing since the Pioneer measurements (unless there are measurement errors, of course). SO2 in the lower atmosphere could be an indicator of current volcanic activity. Since we know the atmospheric temperature structure fairly well, along with the bulk abundances (CO2, SO2 - at least that above the cloud deck - H2SO4 vapor, and HCl), we could use accurate radio wavelength observations to measure the below-cloud SO2 abundance. This is, however, completely limited by our ability to calibrate the flux density scale. With current levels (3% from C- to U-band, 5% at K-band, 7% at Q-band), only crude estimates can be made - errors of something like 50 ppm. With 1% calibration this would come down to 10 ppm or so - a much more interesting level.

From MMA Memo 2 (de Pater, Berge, Muhleman, Schloerb - I don't have a date on this, and it's not even listed in the ALMA Memo list [curiously], but it is likely from 1983, given the discussion about Titan therein):

  We think the most valuable observations the instrument will offer us, which cannot be done at any of the existing or proposed telescopes as of now, are accurate (<1% accuracy) center-to-limb observations of planets...

  Note: For planetary observations we do not believe in either self-calibration or mosaicing - we need, e.g., accuracies better than 1% in center-to-limb observations, which might be obtained in single maps, but likely not in mosaics.

From the original writeup from the MMA Solar System Working Group at the Tucson science meeting in October 1995 (Schloerb, Butler, Gurwell, Lovell, Muhleman, Palmer, de Pater):

  What capabilities are required in order to achieve these objectives:

  High Quality Imaging

  The MMA must be designed and built to achieve accurate and reliable images within a short period of time. A specification of quality, sometimes called fidelity in imaging, may be stated quantitatively by requiring that the difference between the image formed by the instrument and a true image of the source be less than 1 per cent. We note that this differs significantly from the more typically quoted dynamic range of the image, which represents the weakest detectable feature in the presence of a strong feature but makes no statement about the reliability of the map of a complex extended source. The requirement for high quality imaging is very important for allowing accurate comparison of the brightness in different regions of maps, and exceedingly important for allowing accurate comparison of maps made at different times in order to measure and quantify temporal changes in temperature and/or species abundances.

  It is important to remember that this high quality imaging requirement pertains to observing situations that will be uncommon for the MMA. Planetary brightness temperatures are very high (100 K - 300 K), and the variations that are interesting to observe are at the brightness level of one or two Kelvin. Thus detection at this brightness level is not the issue. Rather, the concern is developing the ability to make accurate maps in which 1% features may be distinguished and believed.

The revised writeup (same folks, July 1998) had the same wording.
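The 1995 fidelity definition is easy to state operationally. Below is a minimal sketch of the acceptance test it implies, as distinct from a dynamic range figure; this is my own reading of the wording, and normalizing by the model peak is an assumption, since the writeup does not say 1 per cent of what.

    import numpy as np

    def meets_fidelity(image, model, frac=0.01):
        """True if the instrument image differs from the true (model) image
        by less than `frac` of the model peak everywhere in the map."""
        return np.max(np.abs(image - model)) < frac * np.max(np.abs(model))

    # Scale check: 1% of a 100 K - 300 K planetary disk is 1-3 K, i.e. the
    # level of the one-or-two Kelvin features one wants to believe.

------------------------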