Next: Experience with a Software Quality Process
Previous: The Grid Signal Processing System
Up: Software Systems
Table of Contents - Index - PS reprint


Astronomical Data Analysis Software and Systems VI
ASP Conference Series, Vol. 125, 1997
Editors: Gareth Hunt and H. E. Payne

The VLT Data Flow Concept

P. Grosbøl and M. Peron
European Southern Observatory,
Karl-Schwarzschild-Str. 2, D-85748 Garching, Germany

 

Abstract:

The size and complexity of the VLT require an integrated approach to the handling of science data associated with the operation of the observatory. In this paper, we present the VLT Data Flow concept: a strategy to fulfill the scientific requirements outlined in the VLT Science Operation Plan, namely to ensure optimal efficiency and a high, constant, and predictable data quality. The VLT Data Flow model has been designed following an object-oriented methodology. The analysis phase gave us a clear understanding of the problem domain, and the design phase allowed us to partition the system into smaller parts with well-identified responsibilities. Prototypes are being implemented, and will be tested on the New Technology Telescope (NTT). They will allow us to clarify the astronomical requirements, verify the system architecture, and determine whether new technologies, such as distributed object environments, are appropriate.

       

1. Introduction

Space experiments normally include software components for performing full calibration and reduction of the acquired data. This is less common for ground-based observatories, due to the higher cost involved. However, when size and complexity become too great, even ground-based observatories cannot be operated reliably without a global, consistent concept.

In this paper, we argue that the VLT exceeds the limit for facilities which can be managed in an ad hoc manner. An important part of the operational concept of an observatory is the flow of science data from the initial proposal to the reduced observations. The concept of a VLT Data Flow System and its architecture are discussed, including first considerations for their realization.

2. Why is a Data Flow Concept Needed?

The VLT is a very large, complex facility which consists not only of four 8m telescopes, but also of several smaller auxiliary telescopes for interferometry. Two additional facts complicate the operation further, namely: a) that the four telescopes have to be considered both as individual units and in any combination, since a combined focus is foreseen, and b) that several sites (e.g., Paranal, Chile and ESO Headquarters, Germany) will share operational duties.

Major ground-based astronomical observatories have an operational lifetime which exceeds 25 years. A facility like the VLT is likely to have at least this lifetime to fully amortize the investment. This corresponds to several generations of electronics, computer hardware, and software. Thus, it must be expected that most of the control hardware and software components will be exchanged several times during the operational period.

The large investment made in the VLT by the ESO community places a great responsibility on ESO to operate it in the most efficient way. This can only be achieved by analyzing the system dependencies, and establishing a well-defined work-flow which takes them into account. The distributed nature of the operation makes this more difficult, and requires a global view based on basic astronomical requirements.

Finally, the observatory should not only work efficiently in a technical sense, but astronomers must also be able to use it in an optimal way. It will be difficult for normal users to develop a comprehensive overview of the operation, due to the relatively short observing time allocated to them. It is therefore important that a clear concept for the astronomical use is presented, so that observations can be specified in a simple but exact way, with predictable results.

3. What Will a Data Flow System Give?

The Data Flow System provides a top-down view of the flow of science data in the VLT environment, with emphasis on astronomical usage (Baade 1995). Several simple, but general concepts were derived by focusing on the scientific requirements.

The architecture, based on the astronomical tasks to be performed by the individual subsystems, makes the design more robust to changes over time (Peron et al. 1996). Clear responsibilities for the different components, together with small interfaces between subsystems, improve the maintainability. Upgrading or replacement of components is also facilitated by the global view and its modularity.
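The idea of subsystems with clear responsibilities and small interfaces can be sketched in a few lines of code. The following is only an illustration of the design principle; the Subsystem, Scheduler, and QualityControl names and the dictionary payload are our assumptions, not the actual VLT components or interfaces:

```python
from abc import ABC, abstractmethod

class Subsystem(ABC):
    """Each component has one well-defined responsibility and exposes
    only this small interface to its neighbours."""
    @abstractmethod
    def process(self, data: dict) -> dict: ...

class Scheduler(Subsystem):
    def process(self, data: dict) -> dict:
        # Internal scheduling state stays encapsulated; callers only
        # see the process() interface.
        return {**data, "scheduled": True}

class QualityControl(Subsystem):
    def process(self, data: dict) -> dict:
        return {**data, "qc_passed": data.get("scheduled", False)}

# Because every subsystem implements the same narrow interface, any
# one of them can be upgraded or replaced without touching the others.
pipeline = [Scheduler(), QualityControl()]
result = {"target": "NGC 1365"}
for stage in pipeline:
    result = stage.process(result)
print(result["qc_passed"])  # True
```

Keeping the coupling confined to one method per subsystem is what makes component replacement over the observatory's multi-decade lifetime tractable.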

4. General Concepts

An important issue for the Data Flow is to present a simple framework of concepts which are both easy for the standard user to grasp and able to hold the system together. The astronomer's view is as follows:

The main Data Flow subsystems are shown in Figure 1, where the general flow of science data is indicated. All along the flow, information is added to the ObservingBlocks so that a full record of events, from the original specification to the reduced data, is kept.
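This accumulation of information along the flow can be sketched as follows. This is a minimal illustration under our own assumptions: the ObservingBlock fields, stage names, and record() method are hypothetical, not the actual VLT data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, List

@dataclass
class ObservingBlock:
    """Sketch of an ObservingBlock: the unit of science data that
    accumulates a record of events as it moves through the flow."""
    target: str
    instrument: str
    history: List[Dict[str, Any]] = field(default_factory=list)

    def record(self, stage: str, **details: Any) -> None:
        # Append a time-stamped entry; earlier entries are never
        # modified, so the block keeps a full record of events from
        # the original specification to the reduced data.
        self.history.append({
            "stage": stage,
            "time": datetime.now(timezone.utc).isoformat(),
            **details,
        })

# Each subsystem the block passes through adds its own information.
ob = ObservingBlock(target="NGC 1365", instrument="imaging camera")
ob.record("proposal", programme="example-programme")
ob.record("scheduling", night="1997-03-01")
ob.record("observation", seeing_arcsec=0.8)
ob.record("reduction", pipeline="example-recipe")
print([entry["stage"] for entry in ob.history])
```

The append-only history is the key property: no stage overwrites what an earlier stage recorded, which is what makes the full provenance of a reduced observation traceable.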

5. How Will it be Realized?

The analysis of the astronomical requirements was done by a mixed group of astronomers and software engineers to ensure a common view. An object-oriented methodology was applied since it provides a good mapping between the real world and software entities. The specific methodology, OMT, also gives a simple graphical representation of designs, which proved very useful in the discussions. An object-oriented approach also gives better modularity through strong encapsulation. The responsibilities of each component (i.e., class/object) are well defined.

The design concepts will be verified by prototyping on the NTT at La Silla. These implementations will test the general flow of information between subsystems and be used to analyze the merits of different choices. A baseline version will then be prepared for the VLT. At the same time, new technologies (e.g., OMG/CORBA and Java) will be evaluated.

6. Conclusions

A Data Flow concept is mandatory for the VLT due to its size and complexity. It provides the end user with a simple view based on general concepts, like ObservingBlocks and ReductionBlocks, which will yield a more homogeneous user interface and thereby help to increase efficiency. The tight Quality Control feedback loop will ensure that problems are detected early and corrected rapidly. The use of an object-oriented methodology has given a cleaner and more traceable system design. Finally, testing prototypes on the NTT is an important step in understanding the system behavior in a live, operational environment and in establishing a stable baseline for the VLT.

Acknowledgments:

The VLT Data Flow concept presented here is based on work and discussions in the VLT On-line Data Flow Working Group and the Data Flow Project Team in ESO. We would like to thank all people involved in this process.

References:

Baade, D. 1995, VLT-PLA-ESO-10000-0441, ESO

Ballester, P., Banse, K., & Grosbøl, P. 1997, this volume

Chavan, A. M. & Albrecht, M. A. 1997, this volume

Peron, M., Grosbøl, P., & van den Berg, R. J. G. 1996, in ASP Conf. Ser., Vol. 87, New Observing Modes for the Next Century, ed. T. Boroson, J. Davies, & I. Robson (San Francisco: ASP), 183


© Copyright 1997 Astronomical Society of the Pacific, 390 Ashton Avenue, San Francisco, California 94112, USA


