From grisebac@isoit109.BBN.HP.COM Fri Jul  2 11:04:22 1993
Newsgroups: sci.data.formats
From: grisebac@isoit109.BBN.HP.COM (Eberhard Grisebach)
Organization: Hewlett-Packard GmbH
Subject: Re: CGM format
Date: Thu, 1 Jul 1993 10:23:11 GMT

D.B. Arnold / P.R. Bono: CGM and CGI, Metafiles and Interface Standards
for Computer Graphics.

Computer Graphics Metafile for the Storage and Transfer of Picture
Description Information.  ISO 8632, Parts 1-4.

Eberhard Grisebach
HEWLETT-PACKARD GmbH             Internet: eha@hpbbn.bbn.hp.com
Herrenberger Strasse 130
71034 Boeblingen, Germany
Phone (Germany) 07031/14-3497    telnet: 778-3497
FAX   (Germany) 07031/14-3924

From grisebac@isoit109.BBN.HP.COM Fri Jul  2 11:04:40 1993
Newsgroups: sci.data.formats
From: grisebac@isoit109.BBN.HP.COM (Eberhard Grisebach)
Organization: Hewlett-Packard GmbH
Subject: Re: Rich Text Format
Date: Thu, 1 Jul 1993 10:35:42 GMT

Microsoft Word Technical Reference for Windows and OS/2
Microsoft Press, One Microsoft Way, Redmond, Washington 98052-6399
ISBN 1-55615-290-6

Eberhard Grisebach
HEWLETT-PACKARD GmbH             Internet: eha@hpbbn.bbn.hp.com
Herrenberger Strasse 130
71034 Boeblingen, Germany
Phone (Germany) 07031/14-3497    telnet: 778-3497
FAX   (Germany) 07031/14-3924

From ph@physiology.oxford.ac.uk Fri Jul  2 16:48:09 1993
Newsgroups: comp.soft-sys.khoros,sci.data.formats
From: ph@physiology.oxford.ac.uk (Patrick Haggard)
Organization: Physiology Department, Oxford University, Oxford, UK.
Subject: VIFF -> HDF & xdataslicer
Date: Fri, 2 Jul 1993 17:39:51 GMT

Sorry for the cross-posting, folks, but I need your help.  I have
multiple MRI images, which I process in Khoros, and want to try
reformatting in an arbitrary plane.  I'm trying to use NCSA's
xdataslicer, which I happen to think is suboptimal.  If anyone knows of
any alternative for planar reformatting, I'd be interested to hear.
Otherwise, I need a reliable way to get Khoros multiband VIFFs (1 band
per MRI slice) into NCSA's HDF/SDS format.  I have tried going through
raster8, but NCSA's r8tohdf utility makes incorrect assumptions about
header size, and the images are not correctly aligned.  What size does
r8tohdf expect the headers to be?  Alternatively, can anyone suggest
reliable ways to get multiband image data into xdataslicer?

Thanks

Patrick Haggard                       Email (WORLD): ph@physiol.ox.ac.uk
University Laboratory of Physiology   Email (JANET): ph@uk.ac.ox.physiol
Parks Road,                           Tel. (0865) 272116
Oxford, OX1 3PT                       Fax.
(0865) 272469
England

From dilg@xongmao.ncsa.uiuc.edu Tue Jul  6 22:06:58 1993
Newsgroups: comp.soft-sys.khoros,sci.data.formats
From: dilg@xongmao.ncsa.uiuc.edu (Doug Ilg)
Organization: University of Illinois at Urbana
Subject: Re: VIFF -> HDF & xdataslicer
Date: Tue, 6 Jul 1993 16:39:53 GMT

ph@physiology.oxford.ac.uk (Patrick Haggard) writes:

>Sorry for the cross-posting, folks, but I need your help.  I have
>multiple MRI images, which I process in Khoros, and want to try
>reformatting in an arbitrary plane.  I'm trying to use NCSA's
>xdataslicer, which I happen to think is suboptimal.  If anyone knows
>of any alternative for planar reformatting, I'd be interested to hear.
>Otherwise, I need a reliable way to get Khoros multiband VIFFs (1 band
>per MRI slice) into NCSA's HDF/SDS format.  I have tried going through
>raster8, but NCSA's r8tohdf utility makes incorrect assumptions about
>header size, and the images are not correctly aligned.  What size does
>r8tohdf expect the headers to be?

r8tohdf expects there to be no header on the image.  It just wants to
see raw 8-bit data.  If you strip the headers off your image data, it
should work okay, but I don't think that's really what you want to do
if you want to use XDataslice.  You really need to use the SDS
interface, as you said above.

>Alternatively, can anyone suggest reliable ways to get multiband image
>data into xdataslicer?

I don't know if anyone has written a converter for exactly that
purpose, but it shouldn't be too terribly difficult.

If you know the overall dimensions of your complete data cube, you can
use the latest full release of HDF (3.2r4): call any setup routines you
need (DFSDsetdims, DFSDsetNT, DFSDstartslice, etc.), then read in each
MRI slice and use DFSDputslice to write the data.  A final call to
DFSDendslice finishes the job.

The main drawback of this method is that you must set your dimensions
and write out the slices in such a way as to simulate contiguous serial
writes.  In other words, 40 slices of 50 rows by 30 columns would
require a dimension array (for DFSDsetdims) like this:

    int32 dims[3] = {40, 50, 30};

and the slices should be written out in order, from 0 to 39.

If it turns out that you can't live with those restrictions, you'll
have to go with the new beta release of HDF (3.3b1) and use either the
hyperslab routines that have been added or a new-style SDS with an
"unlimited" dimension (a la netCDF).

It's really not as difficult as my description probably makes it
sound :-)

-Doug Ilg
 Hughes STX
 dilg@ulabsgi.gsfc.nasa.gov
 Voice: (301) 794-5362    FAX: (301) 306-1010

From hartmann@rulcvx.LeidenUniv.nl Wed Jul  7 11:22:54 1993
Newsgroups: sci.data.formats
From: hartmann@rulcvx.LeidenUniv.nl ()
Organization: CRI, institute for telecommunication and computerservices.
Subject: Re: HDF to GIF conversion
Date: Wed, 7 Jul 93 07:25:48 GMT

I have no idea what the HDF format is.  I convert my own data files to
GIF all the time, using a utility called RAW2GIF from a whole
collection of GIF utilities, available from many FTP sites as GIFLIB.
The sources (C) are included too.
-- dap hartmann

From dilg@xongmao.ncsa.uiuc.edu Wed Jul  7 11:23:19 1993
Newsgroups: sci.data.formats
From: dilg@xongmao.ncsa.uiuc.edu (Doug Ilg)
Organization: University of Illinois at Urbana
Subject: Re: HDF to GIF conversion
Date: Wed, 7 Jul 1993 14:35:32 GMT

hartmann@rulcvx.LeidenUniv.nl () writes:

>I have no idea what the HDF format is.  I convert my own data files to
>GIF all the time, using a utility called RAW2GIF from a whole
>collection of GIF utilities, available from many FTP sites as GIFLIB.
>The sources (C) are included too.

I don't know of any HDF to GIF converters (although you might want to
check out NCSA's XReformat - ftp.ncsa.uiuc.edu).  Of course, you can't
directly use RAW2GIF like Dap Hartmann does, but a combination of
hdftor8 (supplied with the HDF library) and RAW2GIF seems like it might
do the trick.

-Doug Ilg
 Hughes STX
 dilg@ulabsgi.gsfc.nasa.gov

From fox@vulcan.nrlssc.navy.mil Wed Jul  7 17:09:47 1993
Newsgroups: sci.data.formats
From: fox@vulcan.nrlssc.navy.mil (Dan Fox)
Organization: UTexas Mail-to-News Gateway
Subject: Image type conversion
Date: 7 Jul 1993 13:26:31 -0500

On the question of converting to and from HDF, GIF, and so on: the best
set of image conversion tools I've ever used is available from the San
Diego Supercomputer Center (anonymous ftp to ftp.sdsc.edu).  The only
problem is that (unless they've changed their policy) the actual
_libraries_ of all the routines are not distributed in source code
form.  If the package wasn't so damned _good_, I wouldn't recommend it
for that reason alone...

The image types supported in the version we have here are (taken from
the "imconv" man page):

  imconv supports the following image file formats:

       Format Names
  Primary   Others               Description
  ---------------------------------------------------------------------
  bmp       -                    Microsoft Windows BitMaP file
  cur       -                    Microsoft Windows CURsor file
  eps       epi, epsf, epsi      Adobe Encapsulated PostScript file
  gif       giff                 CompuServe Graphics Image Format file
  hdf       df, ncsa             Hierarchical Data Format file
  ico       -                    Microsoft Windows ICOn file
  icon      cursor, pr           Sun Icon and Cursor file
  iff       vff, suniff, taac    Sun TAAC Image File Format
  mpnt      macp, pntg           Apple Macintosh MacPaint file
  pbm       -                    Portable Bit Map file
  pcx       pcc                  ZSoft PC Paintbrush file
  pgm       -                    Portable Grayscale Map file
  pic       picio, pixar         PIXAR PICture file
  pict      pict2                Apple Macintosh QuickDraw/PICT file
  pix       alias                Alias PIXel image file
  pnm       -                    Portable aNy Map file
  ppm       -                    Portable Pixel Map file
  ps        postscript           PostScript image file
  ras       sun, sr, scr         Sun RASterfile
  rgb       iris, sgi            Silicon Graphics RGB image file
  rla       rlb                  Wavefront raster image file
  rle       -                    Utah Run-Length-Encoded image file
  rpbm      -                    Raw Portable Bit Map file
  rpgm      -                    Raw Portable Grayscale Map file
  rpnm      -                    Raw Portable aNy Map file
  rppm      -                    Raw Portable Pixel Map file
  synu      -                    Synu image file
  tga       vda, ivb             Truevision Targa image file
  tiff      tif                  Tagged Image File
  viff      xv                   Khoros Virtual Image File Format
  x         avs                  AVS X image file
  xbm       bm                   X11 Bit Map file
  xwd       x11                  X11 Window Dump image file

From wilhelms@news.dlr.de Thu Jul  8 10:16:44 1993
Newsgroups: sci.data.formats
From: wilhelms@news.dlr.de (Hartmut Wilhelms)
Organization: DLR
Subject: Re: HDF to GIF conversion
Date: Thu, 8 Jul 1993
  08:12:45 GMT

Thanks to all who answered my question about an HDF to GIF converter.
Actually, NCSA's XReformat only does the opposite conversion, so the
image conversion tools from the San Diego Supercomputer Center seem to
be the right choice.

Hartmut Wilhelms
German Remote Sensing Data Center, DLR
wilhelms@dlrtcs.da.op.dlr.de

From bschlesinger@nssdca.gsfc.nasa.gov Wed Jul 14 10:54:01 1993
Newsgroups: sci.astro.fits,sci.data.formats,sci.astro
From: bschlesinger@nssdca.gsfc.nasa.gov (BARRY M. SCHLESINGER)
Organization: NASA - Goddard Space Flight Center
Subject: NOST FITS Definition Accredited as NOST Standard
Date: 14 Jul 1993 08:53 EDT

The NSSDC's NASA/Science Office of Standards and Technology (NOST) has
completed development of a formal definition of the Flexible Image
Transport System (FITS) for the transfer of information, in support of
the astronomical community.  The standard was developed to provide a
document that removes the contradictions and resolves the ambiguities
present in the four papers and the Floating Point Agreement endorsed by
the International Astronomical Union (IAU) as the basis for FITS, one
that NASA projects and other researchers can use to design data sets in
conformance with FITS.

Working under NOST procedures, a FITS Technical Panel composed of
astronomers and chaired by R. J. Hanisch (STScI), Chair of the American
Astronomical Society Working Group on Astronomical Software, developed
a draft standard.

In order to ensure that this standard accurately represented FITS as
accepted by the astronomical community, there were three review cycles.
In each cycle, the opportunity to review the standard was widely
publicized throughout the astronomical community, with particular care
taken to notify the international and regional FITS committees.  When
the two-month review period was completed, the Panel reviewed the
comments, revised the draft standard, and provided detailed replies to
all reviewers explaining its action on each point.

When the review process was complete, the Technical Panel proposed the
standard to the NOST FITS Accreditation Panel for acceptance as a NOST
standard.  The Accreditation Panel, composed of the NOST executive
board and an outside member from the astrophysics community, reviewed
the process followed by the Technical Panel and its handling of
reviewer comments.  Approval of the NOST "Definition of the Flexible
Image Transport System (FITS)" was unanimous, and the Technical Panel's
outstanding effort was noted.  The standard will now be submitted to
the IAU FITS Working Group for endorsement as the international
standard for FITS.  Approval is believed likely.

The NOST Standard, like the drafts before it, is available by anonymous
ftp from nssdca.gsfc.nasa.gov, or by DECnet copy from NSSDCA, in the
directory FITS, in LaTeX, PostScript, and flat ASCII forms.  Get the
AAREADME.DOC file for details.  Printed copies can be obtained from the
NOST Librarian, who can be reached as follows:

    (Postal)    NASA/Science Office of Standards and Technology
                Code 633.2
                Goddard Space Flight Center
                Greenbelt MD 20771 USA
    (Internet)  nost@nssdca.gsfc.nasa.gov
    (DECnet)    NCF::NOST
    Telephone:  +1-301-286-3575, 8 a.m. - 5 p.m., U.S. Eastern Time

If the Librarian is unavailable, a phone mail system takes the call
after four rings.  If you have additional questions, the FITS office
can be reached by electronic mail at the address below.
  for Donald Sawyer, NOST Secretary

Barry M. Schlesinger
Coordinator, FITS Support Office
Secretary, Technical Panel
+1-301-513-1634
fits@nssdca.gsfc.nasa.gov
NCF::FITS

From A428ENDE@HASARA11.SARA.NL Thu Jul 15 22:48:35 1993
Newsgroups: sci.data.formats
From: A428ENDE@HASARA11.SARA.NL
Organization: S.A.R.A. Academic Computing Services Amsterdam
Subject: Re: ILBM/LBM (subset of IFF) file format for pictures (Request)
Date: Thu, 15 Jul 93 22:41:40 CET

In article <1993Jul14.222949.27139@cc.umontreal.ca>
pigeons@JSP.UMontreal.CA (Pigeon Steven) writes:

> I'm looking for the file format (and compression scheme) used by some
>(most?) of the AMIGA art programs, including Deluxe Paint.  Amiga
>extensions are usually IFF and PC/others are LBM.  Could someone give
>me a pointer to where I could find that information?

Buy the book "Graphics File Formats" by D.C. Kay and J.R. Levine
(Windcrest/McGraw-Hill).  It costs about $30 or so and has a chapter
about IFF/ILBM.  Besides that, it also contains information on more
than 20 other formats!

Good luck,
Henk van de Kamer

From mbernar@erenj.com Mon Jul 19 14:01:21 1993
Newsgroups: sci.data.formats
From: mbernar@erenj.com (Marcelino Bernardo)
Organization: Exxon Research & Engineering, Co.
Subject: Re: DEC single precision format
Date: Mon, 19 Jul 1993 16:12:51 GMT

In article <39531@oasys.dt.navy.mil>, intolabb@oasys.dt.navy.mil
(Steven Intolubbe) wrote:

>I am trying to make some DEC formatted single and double precision
>floating point numbers usable on the Macintosh.  Can anyone tell me
>what the structure of DEC floating point numbers is?

Here's code I wrote to convert single precision DEC (real*4) to Mac
IEEE 754 (float).

    /*
     * afloat = FixVaxFloat(arealnum, dec2Mac)
     *
     * Fixes DEC's float format to IEEE float by swapping bytes
     * 0 and 1, and 2 and 3, of a float and dividing by 4.
     */
    void FixVaxFloat(float *avaxnum, Boolean dec2Mac)
    {
        char tmpchr;
        char *charPtr;

        charPtr = (char *) avaxnum;
        if (!dec2Mac)
            *avaxnum *= 4;    /* this is a conversion from Mac to DEC */
        tmpchr = *charPtr;
        *charPtr = *(charPtr+1);
        *(charPtr+1) = tmpchr;
        tmpchr = *(charPtr+2);
        *(charPtr+2) = *(charPtr+3);
        *(charPtr+3) = tmpchr;
        if (dec2Mac)
            *avaxnum /= 4;    /* this is a conversion from DEC to Mac */
    }

Marcelino Bernardo    /* Views expressed are mine, not Exxon's. */
mbernar@erenj.com

From bschlesinger@nssdca.gsfc.nasa.gov Mon Jul 19 19:53:57 1993
Newsgroups: comp.arch.storage,sci.data.formats
From: bschlesinger@nssdca.gsfc.nasa.gov (BARRY M. SCHLESINGER)
Organization: NASA - Goddard Space Flight Center
Subject: Re: Requested use of this group for standards efforts
Date: 19 Jul 1993 15:38 EDT

In article, mwiseman@novell.com (Monty Wiseman) writes...

>A group of backup/archive vendors has formed a committee to create an
>industry-standard media format for backup/archival data.  Its name is
>the SIDF Committee, and the name of the standard is SIDF (System
>Independent Data Format).  This is a non-profit organization and it is
>intended to be an open standard.  We will submit our final product to
>existing standards organizations (e.g., ANSI, ECMA, ISO, etc.)
>when we feel it is ready.  I am the chairman of the Technical
>Sub-Committee of this organization.
>
>The committee has been meeting for a little under one year and has
>found that we need an electronic forum in which to exchange ideas.  We
>are looking for two things from a newsgroup: 1) a home, and 2) a place
>to seek more industry/academic participation.  Membership and
>participation is open to ANY organization!
>
>After some research, I have found that this newsgroup seems to be the
>best suited to the needs of this effort.  I will propose that all
>postings regarding this effort contain the string "SIDF" in the
>subject field, so those monitoring this group can "kill" the threads
>if desired.  Also, those interested only in this effort can read just
>the SIDF threads.
>
>I will wait a few days to get responses from this group before
>starting postings.  Please contact me if you have any questions.
>
>Monty Wiseman
>SIDF Technical Sub-Committee Chairman
>Novell, Inc.
>122 East 1700 South
>Provo, UT 84606
>801/429-3517
>monty_wiseman@novell.com

(Entire post included for the benefit of sci.data.formats.)

Since this discussion is about data formats, I suggest that
sci.data.formats is the appropriate group.

Barry Schlesinger
NSSDC/NOST FITS Support Office
(affiliation for identification purposes only)

From briand@xongmao.ncsa.uiuc.edu Mon Jul 19 22:56:04 1993
Newsgroups: sci.data.formats
From: briand@xongmao.ncsa.uiuc.edu (Briand T. Sanderson)
Organization: University of Illinois at Urbana
Subject: C++ interface for HDF
Date: Thu, 15 Jul 1993 20:29:01 GMT

I am seeking a few volunteers to help alpha-test an object-oriented C++
interface for HDF which includes a persistent object store.  Currently
the interface supports 8-bit raster image objects and Scientific
Datasets from HDF 3.2r5 and before.  Although the interface is still in
the design stages, we feel it would be of great benefit to get feedback
from some users at this point in its development.

We currently have some limitations on the platform used for
development: development is on a Sun-4, so applicants should be using a
Sun-4 computer.  The C++ compiler being used is CC, but we may be able
to work with g++ users.

Anyone interested is encouraged to contact me at briand@ncsa.uiuc.edu.

Briand Sanderson
NCSA HDF Research Programmer

From koziol@void.ncsa.uiuc.edu Mon Jul 19 22:56:17 1993
Newsgroups: sci.data.formats
From: koziol@void.ncsa.uiuc.edu (Quincey Koziol)
Organization: Nat'l Center for Supercomputing Applications
Subject: Re: DEC single precision format
Date: Sat, 17 Jul 1993 21:56:05 GMT

In article <39531@oasys.dt.navy.mil> intolabb@oasys.dt.navy.mil
(Steven Intolubbe) writes:

>I am trying to make some DEC formatted single and double precision
>floating point numbers usable on the Macintosh.  Can anyone tell me
>what the structure of DEC floating point numbers is?
>
>        Thanks,
>        Steve

Take a look at the number conversion routines in NCSA's HDF library.
They can convert from any of six different machine architectures to
little- and big-endian IEEE floating point numbers (the Mac is a
big-endian IEEE floating point machine).  HDF is located on
ftp.ncsa.uiuc.edu, in the /HDF directory.
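[Editor's note: the conversion itself is small enough to sketch without any library.  Below is a portable decoder for the VAX F-format byte layout described elsewhere in this thread (sign bit and excess-128 exponent split across the first two external bytes, 23-bit hidden-bit fraction).  `vaxf_decode` is an illustrative name, not an HDF routine, and this is a sketch rather than a substitute for the library's tested converters.]

```c
#include <math.h>
#include <stdint.h>

/*
 * Decode a 4-byte VAX F-format float from its external (on-disk) byte
 * stream s[0..3] into a native double.  Layout, per the thread:
 *   s[1]: sign bit (bit 7), upper 7 exponent bits
 *   s[0]: lowest exponent bit (bit 7), top 7 fraction bits
 *   s[3], s[2]: remaining 16 fraction bits, s[3] more significant
 * Value = (-1)^S * 2^(E-129) * (1 + M), with M = frac / 2^23.
 */
double vaxf_decode(const unsigned char s[4])
{
    int sign = s[1] >> 7;                           /* sign bit S        */
    int exp  = ((s[1] & 0x7F) << 1) | (s[0] >> 7);  /* excess-128 exp E  */
    uint32_t frac = ((uint32_t)(s[0] & 0x7F) << 16) /* 23 fraction bits  */
                  | ((uint32_t)s[3] << 8)
                  |  (uint32_t)s[2];

    if (exp == 0)
        return 0.0;  /* true zero when S=0; S=1 is a reserved operand */

    double val = ldexp(1.0 + frac / 8388608.0, exp - 129);
    return sign ? -val : val;
}
```

For example, the external bytes 0x80, 0x40, 0x00, 0x00 (the encoding of +1.0 given later in this thread) decode to exponent 129 and a zero fraction, i.e. exactly 1.0.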
Quincey Koziol
HDF Developer
koziol@ncsa.uiuc.edu

From jms@tardis.Tymnet.COM Tue Jul 20 09:55:49 1993
Newsgroups: sci.data.formats
From: jms@tardis.Tymnet.COM (Joe Smith)
Organization: BT Tymnet, San Jose, CA
Summary: 36-bits?
Subject: Re: DEC single precision format
Date: 20 Jul 93 01:13:31 GMT

In article <39531@oasys.dt.navy.mil> intolabb@oasys.dt.navy.mil
(Steven Intolubbe) writes:

>I am trying to make some DEC formatted single and double precision
>floating point numbers usable on the Macintosh.  Can anyone tell me
>what the structure of DEC floating point numbers is?

"Ambiguous question; insufficient data."

On the PDP-10, single precision numbers have the sign bit in bit 0, an
excess-128 exponent in bits 1-8, and a 27-bit mantissa in bits 9
through 35, for a range of 1.5E-39 to 1.7E+38.  (The high-order bit of
the mantissa is always on, except for unnormalized numbers or zero,
which means that the same instructions can be used to compare integer
and floating point numbers.)

Double precision numbers use a second 36-bit word, in which bit 0 is
always zero and bits 1-35 extend the mantissa to 62 bits (over 18
decimal digits).

For KL10 systems running microcode version 271 or greater, the
extended-range "G format" numbers use bits 1-11 for an excess-1024
exponent, bits 12-35 and 1-35 for a 59-bit mantissa, and have a range
of 2.8E-308 to 9.0E+309.

DEC's lesser-powered computers have formats that differ from the
PDP-10's.  You'll have to ask someone else about the PDP-11 "F" and "D"
formats, or the VAX "G" format.  Not to mention the IEEE format that
DEC workstations use.

--
Joe Smith  (408) 922-6220   BTNA GNS Major Programs, TYMNET Global Network
P.O. Box 49019, MS-C51, San Jose, CA 95161-9019
CA license plate: "POPJ P,"   Married to the LB, Quantum Leap's #1 net.fan
PDP-10, 36-bits forever!   Humorous disclaimer: "My Amiga 3000 speaks for me."

From pgf@space.mit.edu Tue Jul 20 09:56:21 1993
Newsgroups: sci.data.formats
From: pgf@space.mit.edu (Peter G. Ford)
Organization: MIT Center for Space Research
Subject: Re: DEC single precision format
Date: 20 Jul 1993 04:59:52 GMT

In article <39531@oasys.dt.navy.mil> intolabb@oasys.dt.navy.mil
(Steven Intolubbe) writes:

>I am trying to make some DEC formatted single and double precision
>floating point numbers usable on the Macintosh.  Can anyone tell me
>what the structure of DEC floating point numbers is?

The most usual DEC float formats are the VAX F- and D-type fields, but
there are several others.  Here's the information on F and D:

4-byte (float) F-type:

     0 1       8 9                      31    <-- bit number
    +-+-------+-+-------+---------+---------+
    | |  [1]  | |  [0]  |   [3]   |   [2]   | <-- byte number
    +-+-------+-+-------+---------+---------+

8-byte (double) D-type:

     0 1       8 9                      31    <-- bit number
    +-+-------+-+-------+---------+---------+
    | |  [1]  | |  [0]  |   [3]   |   [2]   | <-- byte number
    +-+-------+-+-------+---------+---------+
     32                                 63    <-- bit number (contd)
    +---------+---------+---------+---------+
    |   [5]   |   [4]   |   [7]   |   [6]   | <-- byte number (contd)
    +---------+---------+---------+---------+

VAX single- (double-) precision F-type (D-type) floating point numbers
are stored in four (eight) consecutive bytes.  Bit number 0 contains a
sign indicator, S.  Bits 1 through 8 contain a binary exponent, E.  The
significance increases from bit 8 through bit 1.
Bits 9 through 31 (63) contain a mantissa M, a 23-bit (55-bit) binary
fraction whose binary point lies immediately to the left of bit 9.  The
significance increases from bit 31 (63) through bit 9.  The value of
the field is given by

    value = (-1)^S * 2^(E-129) * (1 + M)

The numbers are stored externally in increasing byte-number order, i.e.
[0], [1], etc.  For example, the float value +1.0 is stored as four
bytes valued 0x80, 0x40, 0x00, 0x00.

NOTE: when comparing this definition with others posted in response to
your thread, be wary of some replies from DEC users: they often forget
that their manuals gloss over the distinction between bit and byte
order on big-endian machines.  For instance, the sign bit (bit 0) is
located in the SECOND byte of the 4- or 8-byte field.

Peter Ford

From pturner@amb4.ccalmr.ogi.edu Thu Jul 22 19:49:03 1993
Newsgroups: sci.data.formats
From: pturner@amb4.ccalmr.ogi.edu (Paul J Turner)
Organization: Center for Coastal and Land-Margin Research
Subject: netCDF problems and a proposed solution
Date: 22 Jul 93 21:56:50 GMT

For several months I've been trying to sell management on the use of
netCDF as a transportable file format for our applications (I've looked
at HDF also).  While my initial enthusiasm was strong, I'm now having
problems selling the notion to myself.  The problems I'm having are:

1. Can't read/write from/to stdin/stdout.  A real problem, as most of
   our visualization codes rely on opening the model generating the
   data as a pipe.  The amount of data is often too large, and too
   time-consuming to generate, to go through the hassle of writing the
   file completely before viewing it.  We want our data in
   semi-realtime.  This is a killer; it forces me to write I/O for both
   native binary and netCDF.

2. Little support for anything other than multidimensional arrays.  We
   use finite elements (1d, 2d, 3d) with a variety of element types.
   We also have polygonal data, scalar fields, flows, pathlines, etc.
   While these data may be described using netCDF, there are no
   standards (that I know of).

3. ncgen is a nice tool, and was one of my selling points to
   management, but it doesn't go far enough.  To be really useful, it
   should not only be able to generate source to define the netCDF
   file, but also to read and write the file, in a generic fashion
   suitable to plug into anyone's code.  Some of the data structures we
   work with are really big and contain lots of other data structures;
   writing I/O for these things would be a major task, and I see no
   reason why ncgen shouldn't be able to generate the code to do it.

4. CDL does not allow derived types to be easily defined.  What I'd
   like to have is something like a C typedef (or a C++ class) that
   allows previously defined CDL descriptions to be reused.

5. netCDF files seem bloated and slow to read/write.  A test I
   performed yesterday on a small file:

       ASCII, uncompressed  ....  18,397,806 bytes
       Compressed ASCII     ....   4,421,421 bytes
       Uncompressed binary  ....  11,933,712 bytes
       Compressed binary    ....   4,311,315 bytes
       netCDF               ....  20+ MB (got tired of waiting, so I
                                          killed it)

   The binary file above can be read in 26 secs (one 20+ byte record at
   a time); I didn't have time to wait for the netCDF test to conclude.

The basic problem is that netCDF is a flat file data format and we work
with objects.  I'm considering the following solution, and I solicit
your comments; if this has already been done, please let me know:

* Fundamental types char, int, float, double, etc.,
  maybe typedef'ed to things like Int16, Int32, Float32, Float64, etc.

* In addition to code generation to read/write binary, generate code to
  read/write ASCII if necessary (in case none of this flies).

* Add a flag for byte order.  In general, allow some method of
  attaching meta-data (which would also be an object).  All data to be
  written in the machine's native byte order, IEEE floating point.

* If random access is needed, generate an object index.

* For each object definition, allow for a version number.  If the
  object definition changes, increment the version number so support
  codes are aware of the difference.

* Use a C language typedef as the object description (toss it in with
  the data).  Write a parser that interprets the typedef and generates
  the read/write routines (did this, took about an hour).  As with a
  typedef, the parser understands derived types, so with

      typedef struct _Element Element;

      typedef struct _Grid {
          int nelements;        /* number of elements */
          int nnodes;           /* number of nodes */
          FloatArray x, y, z;   /* table of nodes */
          Element *elements;    /* table of elements */
          ...
      } Grid;

  I can derive a new class, GridT (adaptive FE grids with a structure
  that changes in time):

      typedef struct _GridT {
          int nsteps;           /* the number of grids */
          FloatArray time;      /* at these times */
          Grid *grids;          /* the grid at each time step */
          ...
      } GridT;

  Now I throw this at the parser and it generates C (or Fortran) to
  read and write these objects.  The data description is simply
  #include'ed in my code where needed.  Previously generated code to
  write Grids can be reused to write GridTs.

  Another example: a 3d hybrid grid, i.e. a 2d grid on the surface with
  1d grids dangling from the nodes.  Here is a definition:

      typedef struct _Grid3D {
          Grid grid2d;          /* the 2d grid */
          Grid *grid1d;         /* a dynamic array of 1d grids, each
                                   attached to a node in the 2d grid;
                                   already described by the Grid
                                   structure, the number of nodes is
                                   specified in the 2d grid */
      } Grid3D;

  etc.
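[Editor's note: to make the code-generation idea concrete, here is the sort of reader/writer pair such a parser might emit for a stripped-down Grid (node count plus x/y coordinate tables only; FloatArray flattened to a plain float pointer).  The simplified struct and the Grid_write/Grid_read names are hypothetical, a sketch of generated I/O rather than any existing tool's output.]

```c
#include <stdio.h>
#include <stdlib.h>

/* A trimmed-down, hypothetical Grid: just a node count and coordinate
 * tables.  A typedef-driven generator would emit one write/read pair
 * like this per structure, recursing into derived types. */
typedef struct _Grid {
    int    nnodes;   /* number of nodes  */
    float *x, *y;    /* node coordinates */
} Grid;

/* Write nnodes, then the x and y tables, in native byte order. */
int Grid_write(FILE *fp, const Grid *g)
{
    if (fwrite(&g->nnodes, sizeof g->nnodes, 1, fp) != 1) return -1;
    if (fwrite(g->x, sizeof *g->x, g->nnodes, fp) != (size_t)g->nnodes) return -1;
    if (fwrite(g->y, sizeof *g->y, g->nnodes, fp) != (size_t)g->nnodes) return -1;
    return 0;
}

/* Read nnodes, allocate the tables, then read them back. */
int Grid_read(FILE *fp, Grid *g)
{
    if (fread(&g->nnodes, sizeof g->nnodes, 1, fp) != 1) return -1;
    g->x = malloc((size_t)g->nnodes * sizeof *g->x);
    g->y = malloc((size_t)g->nnodes * sizeof *g->y);
    if (!g->x || !g->y) return -1;
    if (fread(g->x, sizeof *g->x, g->nnodes, fp) != (size_t)g->nnodes) return -1;
    if (fread(g->y, sizeof *g->y, g->nnodes, fp) != (size_t)g->nnodes) return -1;
    return 0;
}
```

A generated GridT writer would then simply loop over its Grid array calling Grid_write, which is exactly the reuse of previously generated code that the proposal describes.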
* A further enhancement would be to toss in a metafile-like description
  of how to display the object (or perform other operations) if not
  already known.

* An object registry: if someone has a new object, either derived or
  new, it is submitted for approval, then whatever support is needed is
  added to the necessary codes. Each object is assigned a magic number,
  with a set reserved for officially approved objects and a set for
  user-defined objects.

* Use of standard Fortran/C IO (Fortran open/read/write/close and C
  fopen/fread/fwrite/fclose).

* What am I leaving out?

I just received a trial version of IBM's Data Explorer, and while it
has better support for objects than netCDF, it is limited in the data
types it can handle (e.g. no 6-node subparametric triangles). There
seems to be no code generation capability (i.e. the ability to generate
IO routines using my names) for the data types it does support. No
support for other than 4-byte reals. An idiosyncratic data description
language. I've no doubt it does great graphics.

Is there anything out there that does what I've outlined above? I need
bindings for C and Fortran (C++ would be neat also). Automatic code
generation is essential.

--Paul

Paul J Turner
Center for Coastal and Land-Margin Research
Oregon Graduate Institute
20000 NW Walker Road
Portland, OR 97291-1000

From ethan@earl.scd.ucar.edu Sun Jul 25 23:38:09 1993
Newsgroups: sci.data.formats
Keywords: netCDF, data formats, object
Organization: Scientific Computing Division/NCAR Boulder, CO
From: ethan@earl.scd.ucar.edu (Ethan Alpert)
Subject: Re: netCDF problems and a proposed solution
Date: Fri, 23 Jul 1993 21:31:30 GMT

In article <56706@ogicse.ogi.edu> pturner@amb4.ccalmr.ogi.edu
(Paul J Turner) writes:

> 1.
>    Can't read/write from/to stdin/stdout. A real problem, as most of
>    our visualization codes rely on opening the model generating the
>    data as a pipe. The amount of data is often too large and too time
>    consuming to generate to go through the hassle of writing the file
>    completely before viewing it. We want our data in semi-realtime.
>    This is a killer, it forces me to write io for both native binary
>    and netCDF.

This is not a netCDF problem; it is a UNIX stream IO problem. If you
could figure out how to support random access to UNIX streams, you'd
be able to read and write from stdin/stdout with the netCDF API.

> 2. Little support for other than multidimensional arrays. We use
>    finite elements, 1d, 2d, 3d with a variety of element types. We
>    also have polygonal data, scalar fields, flows, pathlines, etc.
>    While these data may be described using netCDF, there are no
>    standards (that I know of).

The netCDF mailing list touched on this several months ago with respect
to standardizing data representations, but ran into problems agreeing
on what needs to be standardized. The application-programmer types,
like myself, were leaning towards the type of solutions you mention.
Basically, this structural information is the type of information a
netCDF file needs to communicate to a general visualization tool or
application for the application to be able to manipulate the data.
Take a look at the AVS field file format: it communicates the structure
and organization of the data to AVS so AVS can do something meaningful
with it. However, this is not the kind of information needed to
communicate the contents of the file to a scientist. In fact, several
scientists suggested that these types of conventions would make writing
data so esoteric that it would not be convenient to write data in
netCDF. The scientists in most cases just wanted to standardize the
attribute and variable names within each discipline.
The advantage of this with respect to interpreting data from a human
perspective is obvious; however, it does nothing to support interpreting
data from an analysis and visualization tool. In the end the thread
fizzled and nothing seems to have become of the suggestions. Basically
this topic could be a thread all to itself: what should the term
"self-describing format" mean? To whom should a self-describing file
format describe itself, and what types of conventions are needed?

> 3. Ncgen is a nice tool, and was one of my selling points to
>    management, but it doesn't go far enough. To be really useful, it
>    should not only be able to generate source to define the netCDF
>    file, but also to read and write the file, in a generic fashion
>    suitable to plug into any one's code.

Could you elaborate on exactly what you mean here? You can generate
source to read the entire file and write the entire file, but if this
is all you want out of the netCDF interface you're missing the point.
I think the main selling point beyond portability is the random access
that the netCDF interface provides to the individual variables and
metadata. You don't have to read the entire file into memory to get
access to a small portion of it.

>    Some of the data structures we work with are really big and
>    contain lots of other data structures - writing io for these
>    things would be a major task, and I see no reason why ncgen
>    shouldn't be able to generate the code to do it.

Once again, could you elaborate? Automatically generating the source to
read files that contain "lots of other data structures" doesn't, on the
surface, sound trivial. There is also the issue of what you are reading
them into. If you are reading into C data structures, how are these
defined and communicated to the ncgen-type tool you allude to?

> 4. CDL does not allow derived types to be easily defined.
>    What I'd like to have is something like a C typedef (or a C++
>    class), that allows previously defined CDL descriptions to be
>    reused.

This is intimately connected with number 2.

> 5. netCDF files seem bloated and slow to read/write. A test I
>    performed yesterday on a small file:
>
>        The ASCII, uncompressed version  ....  18,397,806 bytes
>        Compressed ASCII                 ....   4,421,421 bytes
>        Uncompressed binary              ....  11,933,712 bytes
>        Compressed binary                ....   4,311,315 bytes
>        netCDF                           ....  20+ MB (got tired of
>                                               waiting so I killed it)

What are you doing here, writing a netCDF file full of doubles? If
your ASCII representation is seven digits or less, then you only need
floats. I've done the same types of tests you mention and have never
found the netCDF file to be more than 5% larger than the **equivalent
binary representation**. If your binary representation uses floats and
you write it to a double variable, you should expect a 20+ MB file
regardless of what format you choose.

I'm sorry I don't have any solutions for you, but I would like to see
some of the issues in this thread discussed more in sci.data.formats.

-ethan
--
Ethan Alpert      internet: ethan@ncar.ucar.edu | Standard Disclaimer:
Scientific Visualization Group,                 | I represent myself only.
Scientific Computing Division                   |-------------------------
National Center for Atmospheric Research, PO BOX 3000, Boulder, CO 80307-3000

From russ@unidata.ucar.edu Sun Jul 25 23:41:39 1993
Newsgroups: sci.data.formats
Keywords: netCDF, data formats, object
Organization: University Corporation for Atmospheric Research (UCAR)
From: russ@unidata.ucar.edu (Russ Rew)
Subject: Re: netCDF problems and a proposed solution
Date: Fri, 23 Jul 1993 22:42:12 GMT

pturner@amb4.ccalmr.ogi.edu (Paul J Turner) writes:

> For several months, I've been trying to sell management on the use
> of netCDF as a transportable file format for our applications (I've
> looked at HDF also). While my initial enthusiasm was strong, I'm now
> having problems selling the notion to myself. The problems I'm
> having are:
>
> 1. Can't read/write from/to stdin/stdout. A real problem, as most of
>    our visualization codes rely on opening the model generating the
>    data as a pipe. The amount of data is often too large and too time
>    consuming to generate to go through the hassle of writing the file
>    completely before viewing it. We want our data in semi-realtime.
>    This is a killer, it forces me to write io for both native binary
>    and netCDF.

There are tradeoffs between sequential I/O and direct-access I/O. If
you want to be able to access small amounts of data efficiently from
large files without reading through all the preceding data first, or to
access data in a different order from the order in which it was written,
you must use direct-access I/O implemented with something like lseek(2).
You can't seek on a pipe, so this choice precludes reading/writing
from/to stdin/stdout.
On the other hand, if you usually want to read through all of the data
in a dataset in the order in which it was written and only rarely
extract or update small subsets of the data in larger collections, you
may not need direct-access I/O. In that case, you may use sequential
I/O and get the advantages of pipes. This also permits you to access
data directly from sequential devices like tapes without staging to or
from disk first, which may be an advantage for some kinds of data
processing. NetCDF uses the direct-access I/O model because it was
designed for visualization applications and data analysis applications
that typically require access to data in an unanticipated sequence or
to small subsets of large datasets.

> 2. Little support for other than multidimensional arrays. We use
>    finite elements, 1d, 2d, 3d with a variety of element types. We
>    also have polygonal data, scalar fields, flows, pathlines, etc.
>    While these data may be described using netCDF, there are no
>    standards (that I know of).

Some conventions are evolving for netCDF representations, but the
conventions differ from discipline to discipline, and it would be
premature to standardize on the conventions developed in any one area.
It takes time to discover the best ways to represent data for
particular patterns of access, and the patterns of access often change
in unanticipated ways as a result of making the data available.

> 3. Ncgen is a nice tool, and was one of my selling points to
>    management, but it doesn't go far enough. To be really useful, it
>    should not only be able to generate source to define the netCDF
>    file, but also to read and write the file, in a generic fashion
>    suitable to plug into any one's code. Some of the data structures
>    we work with are really big and contain lots of other data
>    structures - writing io for these things would be a major task,
>    and I see no reason why ncgen shouldn't be able to generate the
>    code to do it.
These are good ideas for enhancements to ncgen, some of which have been
on our list for some time. A lack of resources for netCDF development
has led to this list growing rather than shrinking over time.

> 4. CDL does not allow derived types to be easily defined. What I'd
>    like to have is something like a C typedef (or a C++ class), that
>    allows previously defined CDL descriptions to be reused.

There is a proposal for an inheritance hierarchy of netCDF file classes
that would use a single global attribute to do what you want. The idea
is that a global attribute, say "_Class", would be used to specify a
class hierarchy for netCDF files (and hence the CDL descriptions). For
example,

    :_Class = "Foo.Bar.Baz";

could be used to say that this netCDF file follows all the conventions
of Foo files (so Foo-specific applications may be used on it), and also
follows the conventions of the Bar subclass of Foo files, and the Baz
subclass of Foo.Bar files. This would be enough to specify a collection
of required dimensions, variables, and attributes in the file, but
additional components could also be added by specifying them. A place
to register or look up class hierarchy conventions would be required,
and different disciplines or organizations would need to maintain their
own data class specifications. Something like this is already happening
in the oceanographic community.

> 5. netCDF files seem bloated and slow to read/write. A test I
>    performed yesterday on a small file:
>
>        The ASCII, uncompressed version  ....  18,397,806 bytes
>        Compressed ASCII                 ....   4,421,421 bytes
>        Uncompressed binary              ....  11,933,712 bytes
>        Compressed binary                ....   4,311,315 bytes
>        netCDF                           ....  20+ MB (got tired of
>                                               waiting so I killed it)
>
>    The binary file above can be read in 26 secs (one 20+ byte record
>    at a time), I didn't have the time to wait for the netCDF test to
>    conclude.

Performance varies greatly depending on platform, data types, and how
you structure and access the data.
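As a purely hypothetical sketch of what such a class convention might
require, here is a CDL header for a file claiming class Foo.Bar.Baz.
The dimension and variable names, and the assignment of each component
to a class level, are invented for illustration:

```
netcdf example {
dimensions:
        time = UNLIMITED ;    // suppose the Foo convention requires this
        station = 10 ;        // suppose the Bar subclass adds this
variables:
        float temperature(time, station) ;  // suppose Baz adds this
                temperature:units = "degC" ;

// global attributes:
                :_Class = "Foo.Bar.Baz" ;
}
```

An application that understands only Foo could still open the file and
use the time dimension, ignoring the components added by the subclasses.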
In some applications the storage overhead over binary files is
insignificant, and the access overhead is small. The benefits of
machine-independent data will always require some overhead:
machine-dependent and application-specific data formats will always
outperform platform-independent and general-purpose data access
interfaces.

> The basic problem is that netCDF is a flat file data format and we
> work with objects. I'm considering the following solution and I
> solicit your comments - if this has already been done, please let me
> know:

... [long description of requirements omitted for brevity]

> Is there anything out there that does what I've outlined above? I
> need bindings for C and Fortran (C++ would be neat also). Automatic
> code generation is essential.

You should look into PDBlib (Portable Database Library) from Stewart
Brown of LLNL. It sounds more closely matched to your requirements than
netCDF. It can be anonymous-FTP'd from phoenix.ocf.llnl.gov, and is
part of the "PACT" distribution (Stewart Brown's environment for
writing simulation codes), which is contained in the file
pub/pact7_23_93.tar.Z.
--
________________________________________________________________________
Russ Rew                                        Unidata Program Center
russ@unidata.ucar.edu                           UCAR, PO Box 3000
                                                Boulder, CO 80307-3000

From grisebac@isoit109.BBN.HP.COM Wed Jul 28 20:05:27 1993
Organization: Hewlett-Packard GmbH
Newsgroups: sci.data.formats
From: grisebac@isoit109.BBN.HP.COM (#Eberhard Grisebach)
Subject: Re: TIFF File format
Date: Wed, 28 Jul 1993 06:01:12 GMT

The description of TIFF Revision 6.0 is about 120 pages!!! What exactly
do you want?
+---------------------------------------------------------------------------+
|Eberhard Grisebach                                                         |
+---------------------------------------------------------------------------+
|HEWLETT-PACKARD GmbH          | Internet: eha@hpbbn.bbn.hp.com             |
|Herrenberger Strasse 130      |                                            |
|71034 Boeblingen              |                                            |
|Germany                       |                                            |
|Phone (Germany) 07031/14-3497 | telnet : 778-3497                          |
|FAX (Germany) 07031/14-3924   |                                            |
+---------------------------------------------------------------------------+

From walsteyn@fys.ruu.nl Sun Aug 1 12:33:53 1993
Newsgroups: sci.data.formats
Organization: Physics Department, University of Utrecht, The Netherlands
From: walsteyn@fys.ruu.nl (Fred Walsteijn)
Subject: Re: HDF as an archive format
Date: Fri, 30 Jul 1993 09:42:24 GMT

In dilg@xongmao.ncsa.uiuc.edu (Doug Ilg) writes:

>mike@jpl-devvax.jpl.nasa.gov (Mike Tankenson) writes:

>>2) Is HDF software so volatile that one cannot depend on it for
>>something as critical as data archive?

>No. HDF continues to evolve, as does any good piece of software, but
>it maintains backward compatibility with all older versions. The worst
>case is that, in the future, there may be a conversion program needed
>to update old files to new libraries, although that hasn't been
>necessary to date.

>>3) Are HDF versions backward compatible? If not, will they be at some
>>point in the future (i.e., will HDF settle down)?

>See 2)

The files are certainly backward compatible, but the function calls
(required in your C or Fortran program) are not. Over the past few
years NCSA has changed their function prototypes significantly many
times. That might be a disadvantage for you (it certainly is for
me...).

Hope this helps,
Fred (PhD student)
_____________________________________________________________________________
Fred H. Walsteijn                             |
Institute for Marine and Atmospheric Research |
Utrecht University                            |
Princetonplein 5                              | Internet: walsteyn@fys.ruu.nl
3584 CC Utrecht                               | FAX:   31-30-543163
The Netherlands                               | Phone: 31-30-533169
_____________________________________________________________________________

From warnock@Hypatia.gsfc.nasa.gov Sun Aug 1 12:35:32 1993
Newsgroups: sci.data.formats
Organization: NASA Goddard Space Flight Center -- InterNetNews site
From: warnock@Hypatia.gsfc.nasa.gov (Archie Warnock)
Subject: Re: HDF as an archive format
Date: 30 Jul 93 14:30:30 GMT

mike@jpl-devvax.jpl.nasa.gov (Mike Tankenson) writes:

>1) Is there something inherent about HDF that would prevent its use as
>an archive format?

Depends on how serious you are about archiving. The fundamental
question to ask is how a user, some 20 or 30 years in the future, upon
finding a file in HDF format, will access the data. In my opinion,
formats which are defined _solely_ by software toolboxes and APIs are
extremely dangerous for archival storage. We have no way of assuring
future generations of users that the software will be usable.

The questions to ask about archival storage are:

1. Where is the byte-by-byte description of the format given? In an
   internationally recognized standards document? In the refereed
   literature? In a private publication by some individual institution?
   In source code? Which is most likely to be accessible 50 years from
   now?

2. Is the format self-documenting? Is the metadata in (at least) ASCII?
   Can the user determine anything about the contents of the data from
   a simple dump?

3. Is the format well-defined?
   Can a user assume that versions are backward compatible?

There are (at least) three different applications for data formats -
interchange, access, and archival storage. It is not at all obvious
that any single format can meet the needs of all three simultaneously,
and even less obvious that any single format currently _does_ meet the
needs of all three. In general, working formats need to emphasize
access (read/write) efficiency. Interchange formats need storage
efficiency and standardized numeric representations. Archival formats
need to be self-documenting, externally documented, and simple.

Things like HDF, netCDF and CDF are good working formats but are risky
as archival formats (at least, I've never seen references to format
descriptions in the refereed literature - I hope I'm wrong, cause
otherwise EOSDIS is gonna be in a lot of trouble years from now). FITS
and PDS are pretty good interchange and archival formats, but weak
working formats. It's an important area for discussion...
--
_______________________________________________________________________
-- Archie Warnock      Internet: warnock@hypatia.gsfc.nasa.gov
-- Hughes STX          "Unix --- JCL For The 90s"
-- NASA/GSFC           Project STELAR: WAIS to do science