How is Ampex's installed base reacting to its double density recording format? Here's one opinion with clout. This is a progress report for the fourth quarter of 1996 for a supercomputing project at the University of Colorado. Note how this project utilizes the leading edge in processing, networking, and storage. Note also how the eggheads have managed to tweak the Ampex drives to perform at 20 MB/s, instead of the advertised 15 MB/s.
caswww.colorado.edu:80/CF.d/NSF96.d/ProgReport.d/progreport.period4.hpcc.html
"...There is no use having immense datasets on line in the lab if they cannot be archived. Our data sets consist almost entirely of files larger than 100 MB. This large granularity of our data can be exploited by investing in tape archiving equipment with a similarly large granularity. We have worked with Ampex Data Systems in this endeavor since 1992. Their latest generation of tape drives, of which we have a representative on loan to the LCSE, writes tape cartridges with 330 GB capacity. Considering that a complete run simulating an entire model star, like the one discussed earlier, would generate about 2 TB of data for archiving, 330 GB tape cartridges are very attractive. Because we have written special software to increase the performance of the Ampex tape drives and made this available to Ampex customers, Ampex has loaned the LCSE a model 410 tape drive which includes a 7-cartridge robotized stacker. This gives us an on-line capacity of just over 1 TB, using the older 165 GB tape cartridges. With the double density tapes and equipment, this system could hold over 2 TB, or all the data from a single GC simulation of turbulent convection.
Because of our close working relationship with Ampex and also, of course, with the NSF supercomputer centers, the LCSE team served as the agent to put together a set of coordinated Ampex D-2 tape drive purchases, at very special pricing, which included our Grand Challenge team-mates in Colorado, their collaborators at Lockheed in California, our LCSE lab (using DoE funds), the Pittsburgh Supercomputing Center, and the Cornell Theory Center. The purpose of this arrangement was to enable convenient, inexpensive, and vast data movement between these parties. From the LCSE perspective, this project has been frustrating because of PSC's insistence that the Ampex drive operate on a DEC machine, even though we arranged for Ampex and SGI together to donate an appropriate SGI host system to PSC at no cost along with the heavily discounted tape drive purchase. During the Supercomputing '96 show, Tom Ruwart of the LCSE finally got the PSC Ampex drive going, over a year after its purchase, now that PSC had at last broken down and bought an SGI workstation. In this process, a continued driving force from our Colorado collaborators and punctuated interventions by LCSE staff with either Ampex or PSC, or both, were the key to our eventual success. The frustration of terabytes of data bottled up at PSC from years of computing there and all our fancy visualization hardware (useless to this project without the data) sitting at the LCSE or at Colorado was what kept this project moving along. We hope that other data-starved supercomputer center users can benefit from this work without having to share in the frustration. Projects involving supercomputer analysis of experimental or observational data are natural customers for this data archiving and data transfer technology. The National Radio Astronomy Observatory (NRAO), a recent partner of NCSA, is one substantial NSF activity that could benefit directly and immediately from these developments......"
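For readers checking the arithmetic, here is a minimal back-of-the-envelope sketch of the capacities and transfer times implied by the figures above. The cartridge sizes, 7-cartridge stacker, and roughly 2 TB per simulation run come from the report; the 15 MB/s and 20 MB/s rates are the advertised and tweaked throughputs mentioned in our intro. Decimal gigabytes and megabytes are assumed, and the names below are illustrative only.

def stacker_capacity_gb(cartridges: int, cartridge_gb: float) -> float:
    """On-line capacity of a robotized stacker holding identical cartridges."""
    return cartridges * cartridge_gb

def hours_to_write(total_gb: float, mb_per_s: float) -> float:
    """Time to stream a dataset to tape at a sustained rate, in hours."""
    return (total_gb * 1000.0) / mb_per_s / 3600.0

RUN_GB = 2000.0  # ~2 TB of archive data per complete model-star run

# Stacker capacity and cartridges needed per run, old vs. double density media
for cartridge_gb, label in [(165.0, "single density"), (330.0, "double density")]:
    capacity = stacker_capacity_gb(7, cartridge_gb)
    print(f"{label}: 7 x {cartridge_gb:.0f} GB = {capacity:.0f} GB on line, "
          f"{RUN_GB / cartridge_gb:.1f} cartridges per 2 TB run")

# Time to archive a full run at the advertised vs. tweaked drive rate
for rate in (15.0, 20.0):
    print(f"Archiving a 2 TB run at {rate:.0f} MB/s takes "
          f"~{hours_to_write(RUN_GB, rate):.0f} hours")

Running this gives just over 1.1 TB on line with the older 165 GB cartridges and about 2.3 TB with the double density media, matching the report's "just over 1 TB" and "over 2 TB" claims, and shows why the throughput tweak matters: a 2 TB run streams to tape in roughly 28 hours at 20 MB/s versus about 37 hours at the advertised 15 MB/s.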