From russ@unidata.ucar.edu Thu Jan  5 13:25:25 1995
Path: solitaire.cv.nrao.edu!hearst.acc.Virginia.EDU!caen!newsxfer.itd.umich.edu!ncar!ncar.ucar.edu!russ
From: russ@unidata.ucar.edu (Russ Rew)
Newsgroups: sci.data.formats
Subject: Re: two questions concerning NetCDF
Date: 05 Jan 1995 17:38:51 GMT
Organization: UCAR Unidata Program
Lines: 71
Message-ID: 
References: <1995Jan4.144111.13433@ircam.fr>
NNTP-Posting-Host: buddy.unidata.ucar.edu
In-reply-to: eckel@ircam.fr's message of Wed, 4 Jan 95 14:41:11 GMT

In article <1995Jan4.144111.13433@ircam.fr> eckel@ircam.fr (Gerhard Eckel) writes:

> We are in the process of evaluating netCDF. Our institute specializes
> in musical applications, and our main use of netCDF would be storage
> and retrieval of sound signal analysis data and sound synthesis
> control data. We are currently using home-brew formats. Compared to
> our own formats, we found two problems with netCDF:
>
> 1) We used to pass data over UNIX pipes from one program to another to
> avoid writing intermediate data files. As far as we can see, netCDF
> doesn't support this type of data exchange (stream-oriented rather
> than file-oriented). What are the reasons for this apparent
> limitation? Will there be extensions of netCDF that allow for this
> feature? Could we add this extension ourselves and make it available
> to UNIDATA so that it will be supported in the future?

The inability to use a "pipes and filters" approach directly with
netCDF was intentional: it reflects an engineering trade-off between
direct access and sequential access. Direct access permits efficient
access to a small subset of a large dataset without reading through all
the preceding data. Sequential access permits simple connection via
pipes, but is inefficient for accessing small amounts of data from a
large file.
A feature of the first release of the netCDF operators we developed is
that they can read an input netCDF dataset from standard input and
write an output netCDF dataset to standard output, so these operators
can be used in UNIX pipelines (although this is only recommended for
small datasets). This was implemented on input by copying standard
input to a temporary file and opening that as a netCDF file, and
similarly on output by copying a temporary output file to standard
output. The same approach can be used with any program that deals with
netCDF data, providing the advantages of direct access when file names
are given for input or output, but using standard input or output via
transparent copying otherwise.

> 2) We use variable record sizes for much of our analysis data. A
> maximum record size is hard to estimate before the analysis is
> started, and the record size cannot be changed efficiently during the
> analysis process. A maximum record size can also waste much disk
> space. Do other people have this problem with netCDF? Are there
> work-arounds?

NetCDF permits only one unlimited dimension, so if this is used as the
"record" dimension, you can't have variable record sizes. The reason
for this limitation is again an engineering trade-off, made to provide
efficient access to cross-sections of data. We know of no
implementation that permits multiple unlimited dimensions, efficient
access to orthogonal cross-sections of data, and the ability to later
append data along any of the unlimited dimensions.

We have investigated ways to remove the restriction to a single
unlimited dimension, but these appear to require adding other new
restrictions to netCDF data access. For example, by restricting the
writing of multidimensional arrays to occur in the same order as they
are stored, rather than in arbitrary order as is now allowed, multiple
unlimited dimensions might be (weakly) supported.
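For concreteness, the workaround the questioner alludes to (with its
attendant waste of disk space) is to pad each record out to a fixed
maximum along the single unlimited dimension and store the true length
separately. A minimal CDL sketch, where the dimension and variable
names and the size 512 are invented for illustration:

```
netcdf analysis {
dimensions:
        record = UNLIMITED ;    // the single record dimension
        maxlen = 512 ;          // fixed maximum record size, chosen in advance
variables:
        float coefs(record, maxlen) ;   // short records padded out to maxlen
        int   reclen(record) ;          // actual length of each record
}
```

The padding is what wastes the space; the auxiliary length variable at
least lets readers ignore the padded portion of each record.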
Another approach requires garbage collection, to recover unused
locations so that a file doesn't grow quadratically in size as a single
vector is extended linearly along an unlimited dimension.

A workaround when you need multiple unlimited dimensions is to use
multiple files, with a separate unlimited dimension in each, but this
only works if no single variable uses more than one of the desired
unlimited dimensions.

> Gerhard Eckel
> IRCAM, Centre Georges Pompidou
> 1, place Igor Stravinsky
> F-75004 Paris / France
> eckel@ircam.fr

From stern@amath.washington.edu (L.G.Stern) Mon Jan  9 16:02:23 1995
Path: solitaire.cv.nrao.edu!hearst.acc.Virginia.EDU!concert!gatech!howland.reston.ans.net!vixen.cso.uiuc.edu!uwm.edu!reuter.cse.ogi.edu!netnews.nwnet.net!news.u.washington.edu!news.u.washington.edu!stern
From: stern@amath.washington.edu (L.G. "Ted" Stern)
Newsgroups: sci.data.formats
Subject: [SUMMARY] FEM stuff
Date: 09 Jan 1995 19:47:07 GMT
Organization: University of Washington, Applied Math Dept.
Lines: 42
Distribution: world
Message-ID: 
NNTP-Posting-Host: omak.amath.washington.edu

I posted a message a few weeks ago asking for information about data
formats that would be suitable for FEM-type info and materials
databases. I looked around a bit more, got some replies, and here is a
summary:

1) IBM has a tool called Data Explorer (DX) with a published format
that supports a lot of what I was looking for. DX also has Lloyd
Treinish working on it (see the CDF, netCDF, and HDF bibliographies).
Unfortunately, there is not yet a complete implementation of Treinish's
and DX's formats in a PD package, though HDF supports some of the
features. Here are some DX URLs:

   http://www-i.almaden.ibm.com/dx/
   http://www.tc.cornell.edu/Visualization/vis.html

2) HDF supports some unstructured formats. The disadvantage (to me) is
that I haven't seen sufficient programming examples for C or Fortran.
I would very much appreciate seeing some HDF examples supporting data
on unstructured grids, in C, C++, or Fortran (hopefully all 3). I'd
also like to see how people get their data into AVS or PV-WAVE (the
local vis. tools I have to work with).

3) netCDF can theoretically be pushed into supporting unstructured
grids with home-brewed constructs. Lloyd Treinish sent me a document
about how DX handles this; I will forward it to anyone who's
interested. I would also be interested in seeing how other people do
this.

4) For materials databases I was referred to STEP. Here are some URLs:

   gopher://elib.cme.nist.gov/11/step
   gopher://elib.cme.nist.gov/11/step/part104
   gopher://elib.cme.nist.gov/11/step/part45
   http://www.igd.fhg.de/www/igd-a2/hyperstep/hyperstep-home.html

Hope this helps somebody,
--
 -- Ted
============================================================================
Ted Stern                        (206) 685-8068
Dept. of Applied Math, FS-20     stern@amath.washington.edu
University of Washington         http://www.amath.washington.edu/~stern/
Seattle, WA 98195
============================================================================

From dilg@fishery.hitc.com Tue Jan 10 17:53:47 1995
Path: solitaire.cv.nrao.edu!hearst.acc.Virginia.EDU!concert!gatech!swrinde!elroy.jpl.nasa.gov!ames!newsfeed.gsfc.nasa.gov!newsroom.gsfc.nasa.gov!fishery.hitc.com!dilg
From: dilg@fishery.hitc.com ()
Newsgroups: sci.data.formats
Subject: Re: What is the best neutral format for CFD data
Date: 10 Jan 1995 16:20:11 GMT
Organization: Applied Research Corporation
Lines: 23
Distribution: world
Message-ID: <3euc3r$l2c@newsroom.gsfc.nasa.gov>
References: <3eta89$4u0@agate.berkeley.edu>
Reply-To: dbuto@eos.hitc.com
NNTP-Posting-Host: fishery.hitc.com

In article <3eta89$4u0@agate.berkeley.edu>, faustus@remarque.berkeley.edu (Wayne A.
Christopher) writes:

|> I need to define an input data format for some CFD visualization
|> software, which should have all the usual good properties: flexible,
|> extensible, efficient, lots of conversion tools available, etc.
|> I need to be able to store both structured and unstructured result
|> data in this format, and it should be easy to write converters
|> from other formats whenever they don't already exist. Some people
|> in sci.physics.computational.fluid-dynamics suggested either HDF
|> or netCDF, neither of which I know anything about. Any opinions
|> or other ideas? Thanks,

There is a package from NCSA called UIFlow which, as I recall, helps
you set up CFD structures using HDF (through the BRICK library, also
from NCSA). I don't know a lot about these tools, but you should be
able to get more information from Kim Stephenson (kims@ncsa.uiuc.edu)
or Ping Fu (pfu@ncsa.uiuc.edu). The software should be available on
NCSA's anonymous FTP server, ftp.ncsa.uiuc.edu.

After you have your results, PolyView (again from NCSA) is an excellent
display tool for Silicon Graphics machines. I think it would do nicely
for CFD results.

-Doug Ilg

From yves@cih.hcuge.ch Thu Jan 19 10:31:43 1995
Newsgroups: sci.data.formats
Path: solitaire.cv.nrao.edu!hearst.acc.Virginia.EDU!caen!spool.mu.edu!howland.reston.ans.net!news.sprintlink.net!pipex!uunet!fdn.fr!jussieu.fr!univ-lyon1.fr!swidir.switch.ch!news.unige.ch!usenet
From: Yves LIGIER
Subject: PAPYRUS file format
Message-ID: <1995Jan19.083456.5764@news.unige.ch>
Sender: usenet@news.unige.ch
Organization: University of Geneva
Date: Thu, 19 Jan 1995 08:34:56 GMT
Lines: 73

The Digital Imaging Unit (UIN: Unite d'Imagerie Numerique) of the
University Hospital of Geneva is pleased to announce new releases of:

 - the OSIRIS software for medical image display & processing
 - the PAPYRUS file format based on DICOM (specification + toolkit)

and a new WWW Medical Imaging home page.
   http://expasy.hcuge.ch/www/UIN/UIN.html

OSIRIS DESCRIPTION

OSIRIS is a software package for the display and manipulation of
multimodality medical images. It was developed as part of a
hospital-wide Picture Archiving and Communication System (PACS) at the
University Hospital of Geneva (Switzerland).

OSIRIS provides (just a few characteristics):
 - Interactive graphical user interface
 - Customizable display modes for image sets
 - Zoom, rotation, flipping of image sets
 - Color adjustment over the full dynamic range
 - Magnifying glass
 - Annotations
 - Regions of interest (polygons, ...)
 - Measurements (distance, angle, surface, volume, ...)
 - Filters
 - Multiplanar sections of tomographic images
 - Region growing for automatic image segmentation
 - Histogram equalization

OSIRIS is available for different platforms:
 - Macintosh
 - Unix / X11 / OSF-Motif
 - PC / Windows 3.1

OSIRIS is non-commercial software distributed free of charge. Users
interested in developing their own specific tools can obtain the full
source code (developer license).

PAPYRUS DESCRIPTION

PAPYRUS is an image file format based on the ACR/NEMA standard. The
latest release (version 3.0) is based on the latest DICOM 3.0 standard.
The specification is in the public domain, as is the toolkit, which
makes it easy to read and write PAPYRUS image files.

ANONYMOUS FTP

OSIRIS is available on the following anonymous FTP server (just log in
as "anonymous" and give your email address as the password):

   expasy.hcuge.ch, in the directory /pub/Osiris
   ftp://expasy.hcuge.ch/pub/Osiris

WWW

More information is also available on the following WWW site:

   http://expasy.hcuge.ch/www/UIN/UIN.html

For more information, do not hesitate to contact us:

Dr. Yves LIGIER, Dr.
Osman RATIB
Digital Imaging Unit
University Hospital of Geneva
1211 GENEVA 14 - Switzerland
Fax: +41 22 372 61 98
email: yves@cih.hcuge.ch

From jhoward@solar.sky.net Sun Jan 22 01:49:11 1995
Path: solitaire.cv.nrao.edu!hearst.acc.Virginia.EDU!portal.gmu.edu!europa.eng.gtefsd.com!news.mathworks.com!udel!gatech!howland.reston.ans.net!news.sprintlink.net!solar.sky.net!solar.sky.net!not-for-mail
From: jhoward@solar.sky.net (John Howard)
Newsgroups: sci.data.formats
Subject: Re: Rich Text Format
Date: 21 Jan 1995 03:20:24 -0600
Organization: SkyNET Corporation
Lines: 7
Message-ID: <3fqjko$erg@solar.sky.net>
References: <790334401snz@denning.demon.co.uk>
NNTP-Posting-Host: solar.sky.net
X-Newsreader: TIN [version 1.2 PL2]

Paul Denning (Paul@denning.demon.co.uk) wrote:
: Can anybody point me in the direction of some information on the Rich
: Text Format. I'm particularly interested in the graphic formats that
: Rich Text supports.

   primate.wisc.edu:pub/RTF
   ftp://ftp.cray.com/src/WWWstuff/RTF

From dwells@nrao.edu Fri Jan 27 16:33:58 1995
Path: solitaire.cv.nrao.edu!news.cv.nrao.edu!dwells
From: dwells@nrao.edu (Don Wells)
Newsgroups: sci.data.formats
Subject: Re: Importing Formatted Data to C++ or OODB Classes?
Date: 27 Jan 1995 20:05:53 GMT
Organization: nrao
Lines: 17
Distribution: world
Message-ID: 
References: <3g1e9o$2dt@reuter.cse.ogi.edu>
NNTP-Posting-Host: fits.cv.nrao.edu
In-reply-to: phil@coot.geog.ubc.ca's message of 26 Jan 1995 19:13:30 GMT

"PA" == Phil Austin writes:

PA> In article <3g1e9o$2dt@reuter.cse.ogi.edu> benning@coho.cse.ogi.edu
PA> (Paul Benninghoff) writes:

>> Is anyone out there aware of any work that attempts to translate
>> Structured File Data (i.e. from CDF, HDF, ASN.1, or any of the
>> formats of interest to this news group) to C++, Smalltalk or Other
>> OOPL classes or to database schema? ...
A FITS-to-C++ package has been constructed by Allen Farris (Space
Telescope Science Institute) for the AIPS++ project; it is available
at:

   ftp://fits.cv.nrao.edu/src/c++fits-04.058.{news,tar.gz}

--
Donald C. Wells           Associate Scientist       dwells@nrao.edu
                   http://fits.cv.nrao.edu/~dwells
National Radio Astronomy Observatory                +1-804-296-0277
520 Edgemont Road, Charlottesville, Virginia 22903-2475 USA

From dilg@fishery.hitc.com Fri Jan 27 16:34:12 1995
Path: solitaire.cv.nrao.edu!hearst.acc.Virginia.EDU!caen!hookup!ames!newsfeed.gsfc.nasa.gov!newsroom.gsfc.nasa.gov!fishery.hitc.com!dilg
From: dilg@fishery.hitc.com ()
Newsgroups: sci.data.formats
Subject: Re: IEEE32bit format help
Date: 23 Jan 1995 20:06:28 GMT
Organization: Applied Research Corporation
Lines: 31
Distribution: world
Message-ID: <3g1284$dm6@newsroom.gsfc.nasa.gov>
References: <1995Jan21.140057.1@usthk.ust.hk>
Reply-To: dbuto@eos.hitc.com
NNTP-Posting-Host: fishery.hitc.com

In article <1995Jan21.140057.1@usthk.ust.hk>, mecks@usthk.ust.hk writes:
|> Dear friend,
|>
|> I am working in the mechanical field. I have some files in IEEE
|> 32-bit format. Can anyone tell me where I can find the standard
|> manuals? I mean in what CD file, journal, or manual.

I'm certain that you can get the official standard directly from the
IEEE, but I doubt you really need to go that far. Most workstations
that I know of, as well as lots of other machines, use IEEE as their
native format. Notable exceptions are IBM mainframes (and clones),
VAXen and older DEC machines, and some "supercomputers." If you use a
native IEEE machine, then your only concern is byte ordering (big-
versus little-endian). You can usually tell if you've got the wrong
byte order, especially if you have even a vague idea of what sort of
data you're looking at: when you turn the bytes around, you're likely
to get absolute garbage.
My advice: try reading each number in as a regular binary
floating-point value (I assume you're talking about floating point) and
writing it out with a normal formatted write. If the data look
completely useless, try reading them in again, exchanging the bytes so
that they are in reverse order, and writing them out with the same
write statement you used before. If neither of these methods works,
then you are not using a native IEEE machine, and you'll have to find
another way around the problem, maybe custom conversion code.

What kind of machine are you using? And by the way, where is ust.hk?

-Doug Ilg
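The byte-exchange step above amounts to reversing the four bytes of
each 32-bit value. A minimal C sketch (the helper names are invented
for illustration):

```c
#include <stddef.h>

/* Reverse the byte order of one 4-byte value in place, converting an
 * IEEE 754 single-precision number between big- and little-endian
 * encodings.  Applying it twice restores the original bytes. */
static void swap4(unsigned char *b)
{
    unsigned char t;
    t = b[0]; b[0] = b[3]; b[3] = t;
    t = b[1]; b[1] = b[2]; b[2] = t;
}

/* Swap every 4-byte value in a buffer of 'nvalues' numbers, e.g. a
 * block of floats just read from a foreign-endian file. */
static void swap4_buffer(void *data, size_t nvalues)
{
    unsigned char *p = data;
    size_t i;
    for (i = 0; i < nvalues; i++)
        swap4(p + 4 * i);
}
```

Reading the raw bytes, swapping, and then interpreting them as floats
reproduces the read-swap-rewrite experiment suggested above without any
formatted I/O in the middle.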