CONFOCALMICROSCOPY Archives

August 1997

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Subject:
From: "Simon C. Watkins" <[log in to unmask]>
Reply-To: Confocal Microscopy List <[log in to unmask]>
Date: Tue, 5 Aug 1997 09:11:20 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (64 lines)
Ian Gibbins wrote:

> >On Thu, 31 Jul 1997 12:01:55 -0400, Analytical Imaging Facility wrote:
> >>The ideal solution would be a server where we would charge a nominal fee
> >>for space used.  This would be the most robust and convenient.
>
> On the subject of data serving:

As we move further into digital imaging, our online storage requirements
continue to increase.  We support multiple platforms (PC, Mac, and SGI),
all of which communicate using different protocols.  The dilemma is how
much data needs to be kept online at any one time and how rapidly the
archive should be accessible.  Our solution has been to develop a hybrid
system in which storage is distributed across the platforms, with a
centralized Novell server.  We try to keep up to a month's worth of data
available at any time.  The Novell server does not use RAID devices;
rather, it has a series of large volumes backed up to CDs incrementally
once a week and cleared monthly.

The Novell server has the advantage that it easily deals with NFS,
AppleTalk, and PC naming conventions.  The downside is that for each
gigabyte of drive space, about 5 MB of memory is needed; thus a server
with 50 GB of space needs approximately 256 MB of RAM (you also need
about 5 MB for each OS name space).  However, with memory prices
hovering at about $5-$7/MB, and large (27 GB) drives costing less than
$3K, this is an inexpensive solution.  It is important to realize that
the server itself is purely a data server and so can be fairly low-end
(a single Pentium); the network lines and switches, however, should
support 100 Mb/s transmission to and from the server.  The server is
supplemented by large shared volumes on the SGIs as well as on the Macs,
and each of the PCs also mounts large shared volumes.  This distributed
method allows maximum flexibility and reliability.  At present we have
approximately 75 GB of online space.

Perhaps a more critical issue is how to organize the archive so that
offline images are indexed and easily recalled.  The images are indexed
using Thumbs Plus (http://www.cerious.com); a multiuser license costs a
few hundred dollars and represents perhaps one of our best software
investments to date.  As was discussed extensively a few weeks ago, the
choice of CDs for offline storage is economical, and since the ISO 9660
standard is ubiquitous, it allows easy transport between systems.
However, the limited size of the CD is frustrating, and I expect that
DVD will probably take over as the "in house" archive in the near
future.

Another important factor has been to offload the responsibility for
archiving images onto the users of our center.  This is done using an
anonymous FTP server with a Web front end.  Using this system we have
had 100% reliability over the last 4 years.  Before that, when we used
tapes and MOs as backup, we were unable to have complete confidence in
our data management.
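The memory rule of thumb above (roughly 5 MB of RAM per GB of disk, plus
about 5 MB per OS name space) can be sketched as a quick sizing
calculation.  This is only an illustration of the estimate as stated in
this post; the figures are rules of thumb, not a NetWare specification,
and the function name is my own:

```python
# Rough file-server RAM sizing, per the rule of thumb in the post:
# ~5 MB of RAM per GB of disk space, plus ~5 MB per supported OS
# name space (e.g. NFS, AppleTalk, PC/DOS naming conventions).

def server_ram_mb(disk_gb, num_os_namespaces):
    """Estimate RAM (in MB) needed for a file server of the given size."""
    return disk_gb * 5 + num_os_namespaces * 5

# 50 GB of volumes, three name spaces (NFS, AppleTalk, PC):
estimate = server_ram_mb(50, 3)
print(estimate)  # -> 265, in line with the ~256 MB figure cited above
```

The point of the exercise is that RAM, not disk, is often the binding
cost when scaling such a server up.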

What solutions are others using?
simon



--
Simon C. Watkins Ph.D.
Associate Professor
Director CBI
University of Pittsburgh
Pittsburgh PA 15261
tel:412-648-3051
Fax:412-648-2004
URL:http://sbic6.sbic.pitt.edu
