Stelircam Pipeline SAO Telescope Data Center

Stelircam Pipeline Processing

Last update: 2009-01-29 by Bill Wyatt


Pipeline execution
Pipeline output
Pipeline overview
Dark Exposure Creation & Subtraction
Flat Construction & Flat-Fielding
Sky Subtraction

Detailed Pipeline Documentation

Table of Contents

Initialization    --> Do ONE time only! <--

The STELIRCAM pipeline uses the UNIX/IRAF environment created by John Roll and Maureen Conroy of OIR. There are some configurations that should be set up, and commands to be run before attempting to run the pipeline.

First of all, your computer needs to be a Sun Sparc in the CF's domain, so that the various auto-mounted partitions are available, in particular /data/oir and /home/ircam.

It is probably a good idea to set up a separate IRAF home directory especially for the pipeline reductions, and also to copy the ``standard'' parameter files from the ircam home directory into it. A possible set of commands would be:

mkdir    $HOME/irpipe     $HOME/irpipe/uparm
cp   $HOME/ircam/unixiraf/uparm/*   $HOME/irpipe/uparm
These would set up an IRAF directory and the pointers to it as needed by the pipeline. Note that there is no need to run mkiraf in order to run the UNIX/IRAF commands.

Table of Contents

Pipeline execution

  1. Run the setup commands, if they are not already in your .cshrc, using the filenames as in the example above (note that the trailing slash marks are necessary).
  2. If necessary, create and fill the directory with the raw data files.
  3. cd to the data directory.
  4. Run the command /home/ircam/bin/runpipe, in the background if you want.

That's all! The pipeline code will uncompress the data if necessary, change the Stelircam filenames by adding ``.fits'' to them if necessary, and will move the data into a subdirectory called ``raw'' and continue processing.

A file in the data directory called pipe.log is HTML, with hyperlinks to the detailed pipeline processing of the data at each stage. There is also a symbolic link to it, called index.html, so that giving the directory name to a browser automatically displays the pipe.log file.

Table of Contents

Pipeline Output

The pipeline processing programs generate output describing their operation and results. The output is HTML, with hyperlinks that allow viewing at different levels of detail.

If you are viewing the output of the automatic Stelircam processing, the top-level page shows a set of directories of the form YYYYMMDD, corresponding to each night (MST date) of data. Click on the appropriate one to start reviewing the data.

If you are reducing data elsewhere, use the file browser option in your browser to open up the pipe.log file in the directory where you started the pipeline. Also, since the pipeline creates a symbolic link index.html that points to pipe.log, it is sufficient to click on the directory hyperlink as presented by a browser when viewing the parent directory.

Most of the hyperlinks are self-explanatory, but there are some graphical ones, too. A small red ball image, called a ``redball,'' is a hyperlink to an image; which one depends on context. In the highest-level log, it links to the combined, median-filtered image in the Dark Exposure Creation section and in the Flat Construction section. If your browser is set up to view FITS files, clicking on one of these brings up a viewer with the selected image.

If you go down a level to view the per-filter output for the sky-subtracted image results, there are five of these colored balls at the right of each line - red, yellow, green, purple, and gray balls. These are respectively hyperlinks to the sky-subtracted image (i.e. the final result), the sky image subtracted from it, the flattened image (i.e. before sky subtraction), the dark-subtracted image, and finally the raw input image.

Note that our data purging policies are not firmed up yet, but the data logs and final images will probably be available for 90 days, and all the intermediate data for 30-60 days. The logs of the processing will be kept for perhaps 90 to 180 days.

Table of Contents

Pipeline Overview

The pipeline proceeds through the reduction much as any investigator would, but with some rearrangement of files and file names to simplify bookkeeping.

First, in the Setup phase, as already mentioned, the raw data is uncompressed if necessary, the ``.fits'' extension added if necessary, and the data moved to a subdirectory called raw.

Each subsequent phase writes its output into its own subdirectory, extending the file extension at each step:

Linearization    linear    ``.l.fits''
Trim    trim    ``.lt.fits''
Dark Exposure Creation    darksub    ``.ltd.fits''
Flat Construction & Flat-Fielding    flat    ``.ltdf.fits''
Sky Subtraction    skysub    ``.ltdfs.fits''

Also, the sky image subtracted from each science image is in the subdirectory sky, with the same extension, and with the string ``sky_for.'' prepended to the file name.
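This bookkeeping can be sketched as a simple mapping from phase to output subdirectory and extension. The function and table names here are hypothetical, for illustration only; the real pipeline scripts manage this internally:

```python
# Phase name -> (output subdirectory, file extension), as described above.
PHASES = [
    ("linearize", "linear",  ".l.fits"),
    ("trim",      "trim",    ".lt.fits"),
    ("darksub",   "darksub", ".ltd.fits"),
    ("flatten",   "flat",    ".ltdf.fits"),
    ("skysub",    "skysub",  ".ltdfs.fits"),
]

def output_path(raw_name, phase):
    """Return the output path for a raw file name at a given pipeline phase."""
    for name, subdir, ext in PHASES:
        if name == phase:
            # Strip the plain ".fits" ending before appending the phase extension.
            base = raw_name[:-len(".fits")] if raw_name.endswith(".fits") else raw_name
            return subdir + "/" + base + ext
    raise ValueError("unknown phase: " + phase)
```

For example, `output_path("ir0123.r.fits", "flatten")` gives `flat/ir0123.r.ltdf.fits`.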


Table of Contents


Setup

In this start-up phase, all the files are assumed to be in the current directory. Any files ending in .r, .b, .r.fits, .b.fits, .r.gz, .b.gz, .r.fits.gz, or .b.fits.gz will be moved into a subdirectory called raw, which is created if necessary.

Then, any compressed files (those with the .gz extension) are uncompressed. Finally, all files that do not end in .fits have that ending added.

Any files beginning with focus are hidden by adding .hide on the end of the name, as these are not useful for science or calibration.

For a more detailed description, see runpipe.

Table of Contents


Linearization

The linearization of the counts in the images is done using a set of previously worked-out coefficients, stored as image arrays in ``dated'' subdirectories under (by default) the directory /home/iraf/Lincor. The bad pixel mask is also included in these directories. The older linearization equation is:
S(true) = S(obs) - C5 * [C1 + C2*S(obs) + C3*S(obs)**2]**2
where C1, C2, C3, and C5 are the coefficients, one set for each pixel. A newer formula and coefficients are being worked out for the June 2001 and later observations.

It is possible to supply custom arrays instead of these defaults; see the detailed discussion of dated parameters.

Finally, any pixels below a minimum value (default is 0.0) or greater than a saturation value (default 22,000.0 for the red side, 18,000 for the blue) are replaced with the minimum or maximum value, respectively, times the number of co-adds in the exposure.
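As a minimal per-pixel sketch of the older formula and the clamping step (function names are hypothetical; whether the comparison thresholds themselves scale with the number of co-adds is not stated above, so this sketch compares against the unscaled limits):

```python
def linearize_pixel(s_obs, c1, c2, c3, c5):
    """Older linearization formula:
    S(true) = S(obs) - C5 * [C1 + C2*S(obs) + C3*S(obs)**2]**2
    """
    return s_obs - c5 * (c1 + c2 * s_obs + c3 * s_obs ** 2) ** 2

def clamp_pixel(value, ncoadds=1, vmin=0.0, vmax=22000.0):
    """Replace out-of-range pixels with the limit times the number of co-adds.

    Defaults are for the red side; the blue side saturates at 18,000.
    """
    if value < vmin:
        return vmin * ncoadds
    if value > vmax:
        return vmax * ncoadds
    return value
```

With all coefficients zero, `linearize_pixel` leaves the counts unchanged; `clamp_pixel(30000.0, ncoadds=2)` returns 44000.0.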

The output image is written into the linear subdirectory, and the filename has the ``.fits'' extension changed to ``.l.fits'' as a reminder. For more details on linearization, see lincor.

Table of Contents


Trim

Since the Stelircam arrays have some bad pixels around the edges, this step trims off 5 rows and columns from the edges of the data set. Analogously to the step above, the output images are written into the trim subdirectory, with the extension updated to become ``.lt.fits''. More details are available.
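The trim operation amounts to array slicing. As a sketch on a plain 2-D list of rows (the function name is hypothetical):

```python
def trim_edges(image, n=5):
    """Trim n rows and n columns from each edge of a 2-D image (list of rows)."""
    return [row[n:-n] for row in image[n:-n]]
```

A 20x20 input therefore becomes a 10x10 output with the default n=5.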

Table of Contents

Dark Exposure Creation & Dark Subtraction

The pipeline now scans through the database of file header information, allfiles.db, to determine what types of dark exposures are required, printing out a summary table giving a name, the Exposure Type (abbreviated ExpType), and the total exposure time. The exposure type is a concatenation of all the exposure characteristics that must be unique. For (a little) readability, the string is made up of the parameters separated by underscores. Thus, an exposure type of 1_5.0000_basic_arc_0_2_1_4 decodes to:
1    Channel (0=Red, 1=Blue)
5.0000    Frame time, seconds
basic    Camera Mode
arc    Clocking or Array Readout Mode
0    Fastmode (0=off, 1= on)
2    Sample Mode (1=single, 2=double)
1    Number of Readouts
4    Number of Slow Counts
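Splitting on underscores recovers the eight fields in the order listed above. A sketch of the decoding (field and function names are hypothetical, not the pipeline's own):

```python
def decode_exptype(exptype):
    """Decode a Stelircam ExpType string such as '1_5.0000_basic_arc_0_2_1_4'."""
    fields = ["channel", "frame_time", "camera_mode", "readout_mode",
              "fastmode", "sample_mode", "n_readouts", "n_slow_counts"]
    parts = exptype.split("_")
    if len(parts) != len(fields):
        raise ValueError("unexpected ExpType format: " + exptype)
    return dict(zip(fields, parts))
```

For the example above, `decode_exptype("1_5.0000_basic_arc_0_2_1_4")["frame_time"]` is `"5.0000"`.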

Generally, the frame time is the only parameter varied by the observer. There should be approximately 10 dark frames for each unique object frame time. Normally these are taken before the night starts. As an extra check on the dark current stability, the same set of darks should be taken in the morning. The pipeline will detect these two sets and compare them. The difference in mean level should be well less than the standard deviation as printed. More details are available.
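The stability criterion sketched above reduces to a one-line check (a sketch; "well less than" is interpreted here as a simple strict comparison, and the function name is hypothetical):

```python
def darks_stable(evening_mean, morning_mean, stddev):
    """True if the evening/morning dark sets agree: the difference in mean
    level is below the printed standard deviation."""
    return abs(evening_mean - morning_mean) < stddev
```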

Table of Contents

Flat Construction & Flat-Fielding

First, the data set is scanned to find the percentage of saturated pixels in each image. The saturation level defaults to 10,000 counts per co-added frame, with a limiting percentage of 5%. Any images that exceed the 5% criterion will not be used to construct flat-fields.
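The screening step can be sketched as follows (function names and the choice of ">=" at the limit are assumptions for illustration):

```python
def saturated_fraction(image, limit=10000.0):
    """Fraction of pixels at or above the saturation limit (per co-added frame)."""
    pixels = [p for row in image for p in row]
    return sum(1 for p in pixels if p >= limit) / len(pixels)

def usable_for_flat(image, limit=10000.0, max_fraction=0.05):
    """Reject images with more than 5% saturated pixels, as described above."""
    return saturated_fraction(image, limit) <= max_fraction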

Next, the images are grouped into sets according to channel, filter, and lens.

Next, in each set, those images with exposure times less than 10.0 seconds are also rejected, unless all the images in the set are short-exposure. These are generally standard stars, and probably do not have the S/N in their background to be good flatfields.

For each set in turn, the 150 longest exposures are chosen to be median-filtered for the output flat. If there are not enough files to get a high enough signal-to-noise ratio (nominally 20 files, settable with the -g option), the flatfield is not made and an error is given, also listing all the files so rejected.

Finally, if the images are not all the same exposure time, they are scaled before combination. The output flat is written to the flat subdirectory. The summary information, including the mean (before normalization) and standard deviation, is also printed. That information is left in a file called flattypes.db. More details on flatfield creation are available.

All the non-dark images are now flattened by dividing through by the appropriate flat. The output images have an extension of .ltdf.fits, and are written into the flat subdirectory. More details on flatfield normalization are available.
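The selection, scaling, combination, and division steps can be sketched as below. This is a pure-Python illustration on small lists of rows, not the pipeline's IRAF implementation; scaling by dividing each image by its exposure time is an assumption (the pipeline may normalize differently), and the function names are hypothetical:

```python
from statistics import median

def build_flat(images, exptimes, max_files=150, min_files=20):
    """Take up to the 150 longest exposures, scale each image by its exposure
    time so unequal exposures combine consistently, then median-combine
    pixel by pixel."""
    order = sorted(range(len(images)), key=lambda i: exptimes[i], reverse=True)
    order = order[:max_files]
    if len(order) < min_files:
        raise ValueError("too few files for a good flat: %d < %d"
                         % (len(order), min_files))
    scaled = [[[p / exptimes[i] for p in row] for row in images[i]] for i in order]
    nrows, ncols = len(scaled[0]), len(scaled[0][0])
    return [[median(img[r][c] for img in scaled) for c in range(ncols)]
            for r in range(nrows)]

def flatten(image, flat):
    """Divide a dark-subtracted image by the flat, pixel by pixel."""
    return [[p / f for p, f in zip(irow, frow)]
            for irow, frow in zip(image, flat)]
```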

Table of Contents

Sky Subtraction

Sky subtraction is conceptually simple, but involves much calculation.

For each image, a set of images with the same filter and nearly the same sky level are chosen. Again, there is a distinction between images with exposure times less than 10 seconds and those with longer exposures.

For images with short exposures, only other short-exposure images are selected for use, and the set size is at least 14 and up to 22 images. For images with longer exposures, only similar long-exposure images are selected, and the set size is at least 9 and up to 18. In both cases, the lower limit on the number of input images is used if the background levels differ by more than 5.0%, and the upper limit otherwise.

Once the data set is chosen, it is median-combined into a sky image. The sky is subtracted from the selected image, which is written to the skysub subdirectory with the extension .ltdfs.fits. The sky image is written to the skysub/sky sub-subdirectory, prefixed with the string ``sky_for.''.
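The combine-and-subtract step reduces to a per-pixel median and a per-pixel difference. As a pure-Python sketch on small lists of rows (function names hypothetical; the real pipeline does this in IRAF):

```python
from statistics import median

def make_sky(images):
    """Median-combine a set of flattened images into a sky frame."""
    nrows, ncols = len(images[0]), len(images[0][0])
    return [[median(img[r][c] for img in images) for c in range(ncols)]
            for r in range(nrows)]

def sky_subtract(image, sky):
    """Subtract the sky frame from the science image, pixel by pixel."""
    return [[p - s for p, s in zip(irow, srow)]
            for irow, srow in zip(image, sky)]
```

For three 1x1 frames with values 1, 2, and 5, the sky frame is 2, and subtracting it from a frame with value 5 leaves 3.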

More details on sky subtraction are available.