Telescope Data Center
Computing Resources

SunFire V880 overview

Last update: Friday, 29-Dec-2006 12:30:15 EST by Bill Wyatt

Sections:
  Compute and Software resources
  Sloan Digital Sky Survey
  Usage priorities
  Raw data archiving

Compute and Software resources

The Telescope Data Center makes computers available for the reduction of ground-based telescope data of all types. The older, conventional machine, named tdc, is a SunFire V880 with eight 750-MHz CPUs and 32 GB of memory. It is managed by the Computation Facility (CF), with all CF-supported software available.

The TDC also has priority on eight nodes of the hydra computing cluster. Separate accounts, managed by the CF, are required to use this system. Because the cluster is located at CDP, the TDC has supplied pool disk space there, /pool/oircluster, for better I/O during cluster operations. It holds only 738 GB, so be careful not to fill it.

The tdc computer is open to anyone who has a CF-domain account and telescope data to reduce.

There are high-performance scratch disks available for large data sets: /pool/tdc3 and /pool/tdc5. These are capable of I/O at over 70 MB/s and currently hold 1.3 TB and 3.3 TB, respectively. The /pool/tdc3 disk is restricted to OIR staff and students; only members of group oirgroup can use it. The other disk is open to anyone with a CF-managed account.
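
Since /pool/tdc3 is limited to group oirgroup, it can save a failed write to check your group membership first. Below is a minimal Python sketch (Unix-only); the group names are the ones given above, but the script itself is illustrative, not a supplied tool:

    import grp
    import os
    import pwd

    def in_group(group_name):
        # A user belongs to a group either through the group's member
        # list or through the primary gid in the password database.
        pw = pwd.getpwuid(os.getuid())
        g = grp.getgrnam(group_name)
        return pw.pw_name in g.gr_mem or pw.pw_gid == g.gr_gid

    if in_group("oirgroup"):
        print("OK to write to /pool/tdc3")
    else:
        print("Use /pool/tdc5 instead")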

These disks are not intended for permanent storage: they are not backed up, and data more than 90 days old is purged every week.
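
To see which of your files are approaching the 90-day limit before the weekly purge removes them, a simple scan of file ages is enough. A sketch follows; the starting directory /pool/tdc5/myuser is a hypothetical example, and it is assumed here (not stated above) that the purge keys on modification time:

    import os
    import time

    CUTOFF = 90 * 24 * 3600   # the 90-day purge threshold, in seconds

    def stale_files(top):
        # Walk the tree and yield any file not modified within the cutoff.
        now = time.time()
        for dirpath, _, filenames in os.walk(top):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if now - os.path.getmtime(path) > CUTOFF:
                        yield path
                except OSError:
                    pass   # file vanished mid-walk; ignore it

    for path in stale_files("/pool/tdc5/myuser"):
        print(path)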

In addition, for Megacam observers only, there is another scratch disk, /pool/megascr1, with 2.7 TB of space. You must be a member of group megagrp to use it. This disk is not currently purged.

The above scratch disks are actually RAID-5 sets with a hot spare, so although your files are not backed up to tape, they will survive the failure of any single disk in the set; the disks are more reliable than they might first appear.

Sloan Digital Sky Survey

A selection of the SDSS Data Release 5 (DR5) databases and associated files has been transferred to the CFA. Each has been condensed by choosing a (we hope!) useful subset of the columns and converted to the Starbase table format: ASCII files with rows of tab-separated columns, manipulated by awk-like commands.
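
This layout makes the tables easy to read from any language, not just the awk-like command set. Here is a sketch in Python, assuming the usual Starbase convention of a tab-separated header row of column names followed by a separator row of dashes; the file name and column names in the example are hypothetical:

    import csv

    def read_starbase(path):
        # Starbase tables: a tab-separated header row of column names,
        # a separator row of dashes, then one tab-separated row per record.
        with open(path) as f:
            reader = csv.reader(f, delimiter="\t")
            header = next(reader)
            next(reader)          # skip the dashes separator row
            for row in reader:
                yield dict(zip(header, row))

    # Hypothetical example: bright objects from a condensed photometric table.
    for rec in read_starbase("photo_best.db"):
        if float(rec["r"]) < 18.0:
            print(rec["ra"], rec["dec"], rec["r"])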

The majority of the "best" photometric database (not including the image thumbnails) and all of the spectral databases (as reduced at Princeton by David Spergel et al.) have been condensed in this way.

Follow the link: http://www.cfa.harvard.edu/oir/Docs/SDSS-cfa.shtml

Usage priorities

We are not currently trying to restrict use of the tdc system to those with optical and infrared ground-based data. We do, however, have a priority order:

  1. Reduction of MMTO, FLWO, and Magellan data
  2. Non-CFA optical ground-based data reduction
  3. Non-optical ground-based data reduction
  4. Everything else

We are beginning to have to limit class 4 use, and we have more class 2 work than originally expected. As data from future MMT instruments flows in, we expect to become more restrictive about use of the tdc computer.

Note that the cfa0 computer is identical to tdc except for the size of its directly-connected scratch disk space.

Summary:
  name:     tdc
  cpus:     8 x 750 MHz
  memory:   32 GB
  Ethernet: 1000 Mbit/s
  disks:
    /pool/tdc3     - 1.3 TB scratch disk for group oirgroup, 90-day purge
    /pool/tdc5     - 3.3 TB scratch disk, open to all, 90-day purge
    /pool/megascr1 - 2.7 TB scratch disk for group megagrp, no purge

Raw data archiving

The TDC maintains raw data archives for FLWO, MMTO, and Magellan instruments. Most FLWO and MMTO data is brought to Cambridge automatically over the Internet and written to tape. The data remains available on disk (to users with appropriate permissions) as space permits.

Magellan data is contributed by observers and PIs and stored locally on disk and on tape. The TDC makes no other use of this data. There are two summary files of the data in this archive: one sorted by date, summary_DATE.db, and one sorted by PI, summary_PI.db.
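
Because the summary files are themselves Starbase tables, the same tab-separated reading shown earlier applies. In the sketch below, the column name "PI" and the value "Smith" are assumptions for illustration; the actual column headers are not listed here:

    import csv

    def rows(path):
        # Same Starbase convention as above: header row, dashes row, data.
        with open(path) as f:
            reader = csv.reader(f, delimiter="\t")
            header = next(reader)
            next(reader)          # skip the dashes separator row
            for row in reader:
                yield dict(zip(header, row))

    # Count archive entries for one (hypothetical) PI name.
    matches = [rec for rec in rows("summary_PI.db") if rec.get("PI") == "Smith"]
    print(len(matches), "entries for PI Smith")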

For FLWO and MMT data, see the summary table for details.