pipeline package

Subpackages

Submodules

pipeline.environment module

environment.py defines functions and variables related to the execution environment.

pipeline.environment.compare_casa_version(op, ver)
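
Only the signature is documented; a minimal usage sketch, assuming op is a comparison-operator string (e.g. '<', '<=', '==', '>=', '>') and ver is a dotted version string compared against the running CASA version:

    import pipeline.environment as environment

    # Assumption: returns True when the running CASA version satisfies the
    # comparison, e.g. "running CASA version >= 6.5.0".
    if environment.compare_casa_version('>=', '6.5.0'):
        print('Running under CASA 6.5.0 or later')
    else:
        print('Running under an older CASA release')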

pipeline.recipereducer module

recipereducer is a utility to reduce data using a standard pipeline procedure. It parses an XML reduction recipe, converts it to pipeline tasks, and executes the tasks on the given data. It was written to give pipeline developers without access to PPRs and/or a PPR generator a way to reduce data using the latest standard recipe.

Note: multiple input datasets can be specified. Doing so will reduce the data as part of the same session.

Example #1: process uid123.tar.gz using the standard recipe.

    import pipeline.recipereducer
    pipeline.recipereducer.reduce(vis=['uid123.tar.gz'])

Example #2: process uid123.tar.gz using a named recipe.

    import pipeline.recipereducer
    pipeline.recipereducer.reduce(vis=['uid123.tar.gz'],
                                  procedure='procedure_hif.xml')

Example #3: process uid123.tar.gz and uid124.tar.gz using the standard recipe.

    import pipeline.recipereducer
    pipeline.recipereducer.reduce(vis=['uid123.tar.gz', 'uid124.tar.gz'])

Example #4: process uid123.tar.gz, naming the context 'testrun', thus directing all weblog output to a directory called 'testrun'.

    import pipeline.recipereducer
    pipeline.recipereducer.reduce(vis=['uid123.tar.gz'], name='testrun')

Example #5: process uid123.tar.gz with a log level of TRACE.

    import pipeline.recipereducer
    pipeline.recipereducer.reduce(vis=['uid123.tar.gz'], loglevel='trace')

class pipeline.recipereducer.TaskArgs(vis, infiles, session)

Bases: tuple

property infiles

Alias for field number 1

property session

Alias for field number 2

property vis

Alias for field number 0
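
TaskArgs is a named tuple bundling the per-reduction inputs. A minimal construction sketch; the field values below are placeholders, not taken from a real dataset:

    from pipeline.recipereducer import TaskArgs

    # Placeholder values; the field order follows the signature
    # TaskArgs(vis, infiles, session).
    args = TaskArgs(vis=['uid123.ms'], infiles=None, session=['session_1'])
    print(args.vis)      # field 0
    print(args.infiles)  # field 1
    print(args.session)  # field 2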

pipeline.recipereducer.reduce(vis=None, infiles=None, procedure='procedure_hifa_calimage.xml', context=None, name=None, loglevel='info', plotlevel='default', session=None, exitstage=None)[source]
pipeline.recipereducer.string_to_val(s)[source]

Convert a string to a Python data type.
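
string_to_val is presumably applied when parsing recipe XML, where task arguments arrive as strings. A hedged sketch of how it might behave; the specific conversions shown are assumptions, not documented behaviour:

    from pipeline.recipereducer import string_to_val

    # Assumed behaviour: literal-looking strings become the corresponding
    # Python objects; anything else presumably stays a string.
    string_to_val('True')           # -> True (assumption)
    string_to_val('3.5')            # -> 3.5 (assumption)
    string_to_val("['a', 'b']")     # -> ['a', 'b'] (assumption)
    string_to_val('not_a_literal')  # -> 'not_a_literal' (assumption)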

Module contents

pipeline.initcli()[source]
pipeline.log_host_environment()[source]
pipeline.show_weblog(index_path='', handler_class=<class 'http.server.SimpleHTTPRequestHandler'>, server_class=<class 'http.server.HTTPServer'>, bind='127.0.0.1')[source]

Locate the most recent weblog and serve it via an HTTP server running on 127.0.0.1, using a random port in the range 30000-32768.

The function arguments are not exposed in the CASA CLI interface, but are made available in case that becomes necessary.

TODO: Ideally we’d serve just the html directory, but that breaks the weblog for reasons I don’t have time to investigate right now. See https://gist.github.com/diegosorrilha/812787c01b65fde6dec870ab97212abd, which is easily convertible to Python 3. These classes can be passed in as handler_class and server_class arguments.

pipeline.stop_weblog()[source]
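
A minimal interactive sketch, assuming a weblog already exists in the current working area:

    import pipeline

    # Serve the most recent weblog on 127.0.0.1 (random port 30000-32768).
    pipeline.show_weblog()

    # ... inspect the weblog in a browser ...

    # Shut the HTTP server down again.
    pipeline.stop_weblog()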