CONFOCALMICROSCOPY Archives

August 2015

CONFOCALMICROSCOPY@LISTS.UMN.EDU

From: Rob Campbell <[log in to unmask]>
Date: Mon, 31 Aug 2015 12:51:26 +0200

Hello,

I have some limited experience of Arivis, which I used briefly for 
exploring our 200 GB to 1 TB data sets. My feeling was that it was 
useful for making attractive visualisations (e.g. a movie for a talk) 
but added little for actually analysing the data (i.e. extracting 
information on which we can do statistics). This may have changed by 
now, as I last tried it a few months ago.

Of course YMMV, but I find analyses of these large data sets come in 
two flavours:

1) Extracting information from each high-resolution x/y plane. This 
doesn't require "3-D" visualisation; it just requires each section to 
be loaded into RAM and features extracted. We simply ensure we have 
about 2 to 5 GB of RAM per core and do everything in MATLAB, Python, 
or even Icy (see the first sketch after this list).

2) Quantification of large-scale anatomical features in the whole 
volume. This might involve registering volumes to each other or 
tracing large features. If the original resolution was, say, 0.5 
microns per pixel, these large-scale analyses work at the level of, 
say, 25 microns per pixel. For these analyses I prefer to simply 
down-sample the whole volume, load it into RAM, and use the most 
applicable analysis approach (see the second sketch below).
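
For flavour 1, here is a minimal Python sketch of the kind of thing I 
mean. The file name, plane count, and the feature being measured are 
hypothetical stand-ins; the point is that each worker only ever holds 
one plane in RAM.

# Flavour 1: extract a feature from each x/y plane independently,
# one plane per worker, so RAM use stays near one plane per core.
from multiprocessing import Pool

import numpy as np
import tifffile  # pip install tifffile

def features_from_plane(idx):
    # Read a single z-plane from a multi-page TIFF; only this plane
    # is loaded into memory, not the whole volume.
    plane = tifffile.imread("volume.tif", key=idx)
    # Stand-in feature: count pixels above a fixed threshold.
    return idx, int(np.sum(plane > 500))

if __name__ == "__main__":
    n_planes = 1000  # hypothetical; read from the file metadata in practice
    with Pool() as pool:  # one worker per core by default
        results = pool.map(features_from_plane, range(n_planes))
    counts = dict(results)  # plane index -> feature value, ready for stats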
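
For flavour 2, a similar sketch of the down-sampling step. Going from 
0.5 to 25 microns per pixel is a factor of 50 in x and y (how to 
handle z depends on the acquisition); the file name and plane count 
are again hypothetical.

# Flavour 2: down-sample each plane as it is read, then stack, so the
# reduced volume fits comfortably in RAM.
import numpy as np
import tifffile
from skimage.transform import downscale_local_mean  # pip install scikit-image

factor = 50  # 0.5 um/pixel -> 25 um/pixel
small_planes = []
for idx in range(1000):  # hypothetical plane count
    plane = tifffile.imread("volume.tif", key=idx)
    small_planes.append(downscale_local_mean(plane, (factor, factor)))

small_volume = np.stack(small_planes)
# ...then register to a reference, trace large features, etc.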

-- 
Rob Campbell
Mrsic-Flogel Group
Basel Biozentrum
