CONFOCALMICROSCOPY Archives

August 2016

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Subject: Re: analyzing really big data sets
From: Mario Emmenlauer <[log in to unmask]>
Reply-To: Confocal Microscopy List <[log in to unmask]>
Date: Tue, 2 Aug 2016 22:45:38 +0200
Content-Type: text/plain
*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
Post images on http://www.imgur.com and include the link in your posting.
*****

Hi,

On 02.08.2016 19:25, Andreas Bruckbauer wrote:
> 
> Dear Dennis,
> 
> I still don't get it; Remko wrote that, based on the SSD transfer rate, it takes at least 500s to load the 1 TB data set, and more typically 30 mins. How could you load it in 60s? Please share the details.

There is no real magic here :) The "trick" is that the file contains
multiple downscaled versions of the dataset, a so-called resolution
pyramid. Several modern file formats now have this feature, though
I think Imaris was one of the early adopters.
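To illustrate what such a pyramid is, here is a rough numpy sketch (my own illustration of the concept, not the actual layout of .ims or any other format):

```python
import numpy as np

def build_pyramid(volume, levels=4):
    """Build a simple resolution pyramid by repeated 2x downsampling
    (averaging 2x2x2 neighborhoods). File formats with this feature
    store these levels alongside the full-resolution data, so a viewer
    can open the smallest level almost instantly."""
    pyramid = [volume]
    for _ in range(levels - 1):
        v = pyramid[-1]
        # trim to even dimensions, then average each 2x2x2 block
        z, y, x = (d - d % 2 for d in v.shape)
        v = v[:z, :y, :x].reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)
    return pyramid
```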

In any case, this feature is mostly helpful for fast opening and viewing
of the dataset. Several file formats combine this with a feature for
chunking, to load only certain "blocks" of the higher resolutions into
memory, as needed. This way, when you zoom in to any detail, only those
chunks that encompass the close-up region are loaded in high resolution,
which saves memory and loading time.
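To make the chunk bookkeeping concrete, here is a minimal Python sketch (again my own illustration, not the internals of any particular format) of how a reader decides which blocks overlap a requested close-up region:

```python
import math

def chunks_for_region(region, chunk_shape):
    """Given a requested region as ((z0, z1), (y0, y1), (x0, x1)) and
    the chunk shape, return the set of chunk indices that must be
    loaded. Only these chunks are read from disk; the rest of the
    volume is never touched, which is what keeps memory and loading
    time proportional to the close-up, not the full dataset."""
    ranges = []
    for (lo, hi), c in zip(region, chunk_shape):
        ranges.append(range(lo // c, math.ceil(hi / c)))
    return {(i, j, k) for i in ranges[0] for j in ranges[1] for k in ranges[2]}
```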

For the processing, it depends on the algorithm whether chunking can
be used or not. Not all software, and not all algorithms, can profit
from chunking, because a transformation may require the full dataset.
If your image analysis software does support chunking, and the
algorithm can work block-wise, then it can run with very low memory
requirements. In good cases you may be able to process datasets of
virtually any size with just enough RAM for a few blocks, i.e. a few
MB of RAM! Of course, eventually the full file needs to be loaded at
a high resolution to process it at that resolution.
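As a toy illustration of block-wise processing (the load_block function here is a made-up stand-in for reading one chunk from disk), a per-voxel operation can stream through a huge dataset like this:

```python
import numpy as np

def blockwise_count_above(load_block, block_ids, threshold):
    """Stream over blocks: at any moment only one block is in memory,
    so RAM use stays at a few MB regardless of total dataset size."""
    total = 0
    for bid in block_ids:
        block = load_block(bid)  # read one chunk from disk
        total += int((block > threshold).sum())
        del block                # free before loading the next chunk
    return total
```

This only works because a per-voxel threshold decomposes cleanly into blocks; a global transformation (say, an FFT over the whole volume) would not, which is exactly why not every algorithm profits from chunking.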

To combine the features "resolution pyramid" and "chunking", let's say
your image resolution is much higher than the level of detail you want
to process. In this case, the image processing software may be clever
enough to load the lowest resolution that is sufficient to perform the
task! For example, if you want to detect nuclei that are many times the
size of your image resolution, it might be sufficient to load the 2x,
4x or maybe even 8x lower-resolution version of the image from the
resolution pyramid, and perform the detection there. In combination
with chunking, this will be several times faster and more memory
efficient.

But whether your software will do that for a certain task depends on
the software, the file format, and last but not least on the processing
you want to perform! So you would need to check this carefully for
your application.

Cheers,

   Mario



> I will have a look at the file converter, but anyhow you still have to load the file first in order to convert it, so it does not change much; maybe this runs as a batch process. We are still trapped in this endless cycle of saving files, transferring them to storage, transferring them again to another machine (with fast SSDs), potentially converting them to another format, then loading them and analysing... Pretty mad.
> I read today that the next generation of video games renders whole exploding cities (pretty mad too) in the cloud and transfers the data to gaming consoles or mobile phones; why can we not do this? Ok, it is the opposite problem: they have the vector data and want to get pixels out, whereas we have the pixels and want to get objects. But it should be easier to cater for a few science labs than for the world gaming community.
> 
> Best wishes
> 
> Andreas
> 
> -----Original Message-----
> From: "DENNIS Andrew" <[log in to unmask]>
> Sent: ‎02/‎08/‎2016 09:35
> To: "[log in to unmask]" <[log in to unmask]>
> Subject: Re: analyzing really big data sets
> 
> 
> Hi Andreas,
> 
> I was working with an IMS file, so it took about 60 seconds to load the 1.2TB file and it's ready for analysis straight away; I did some spots analysis and tracking.
> 
> I think you are using a non-native format, which loads and previews the data, but analysis isn't possible until the full file conversion is complete.
> 
> I’d very much recommend converting the file to IMS format using the file converter and then loading into Imaris, it’s definitely a better experience. The file converter doesn't need an Imaris licence and can be run on a separate PC if you want.
> 
> You mentioned the image database ('Arena'); we've got plenty of feedback from big data users, and as a result Arena was made an optional part of the Imaris install, so you can now run Imaris without Arena.
> 
> I hope this is helpful,
> 
> Andrew
> 
> -----Original Message-----
> From: Confocal Microscopy List [mailto:[log in to unmask]] On Behalf Of Andreas Bruckbauer
> Sent: 30 July 2016 08:40
> To: [log in to unmask]
> Subject: Re: analyzing really big data sets
> 
> 
> Dear Andrew,
> 
> Are you sure you loaded the whole data set in 60s? My experience with Imaris is that it quickly displays a part of the data set, but when you want to do any meaningful analysis (like tracking cells) it really tries to load the full dataset into memory. To analyse data sets of 10-20 GB we need a workstation with 128 GB RAM, while Arivis works with very little RAM. As I understand it, we are talking here about 100GB - 5TB, so loading the full dataset is wholly impractical. Maybe something changed in recent versions of Imaris? I stopped updating when Imaris introduced this ridiculous database which fills up the local hard disk. What about using Omero instead?
> 
> Best wishes
> 
> Andreas
> 
> -----Original Message-----
> From: "DENNIS Andrew" <[log in to unmask]>
> Sent: ‎29/‎07/‎2016 23:39
> To: "[log in to unmask]" <[log in to unmask]>
> Subject: Re: analyzing really big data sets
> 
> 
> Sorry Typo in an embarrassing part of my last message,
> 
> It should have said " today I loaded a 1.2TB data set, it took about 60 seconds. "
> 
> -----Original Message-----
> From: DENNIS Andrew
> Sent: 29 July 2016 23:17
> To: Confocal Microscopy List <[log in to unmask]>
> Subject: RE: analyzing really big data sets
> 
> 
> Hi Esteban,
> 
> I work at Andor/Bitplane, so you may consider this to be a commercial response.
> 
> I'm interested in your comment on Imaris, today I loaded a 1.2GB data set, it took about 60 seconds. When you refer to Big data, what sizes are you talking about?
> 
> Andrew
> 
> 
> ________________________________________
> From: Confocal Microscopy List [[log in to unmask]] on behalf of G. Esteban Fernandez [[log in to unmask]]
> Sent: 29 July 2016 20:43
> To: [log in to unmask]
> Subject: Re: analyzing really big data sets
> 
> 
> I just wanted to echo the praises for Arivis and MicroVolution.
> 
> My favorite for 3D work is Imaris but it can't handle big data; when possible I downsample large datasets and work in Imaris. Amira can handle big data, but in my experience it crashed more often than Arivis, plus I prefer the user interface in Arivis.
> 
> MicroVolution results were comparable to Huygens and AutoQuant in my hands (qualitatively, I didn't do rigorous quantitative comparisons) in about 1/60 of the time with a lower-end GPU. I mostly looked at confocal point-scanning data and didn't try truly big data. MicroVolution is limited to datasets smaller than RAM, so you have to split into subvolumes yourself before deconvolving.
> 
> -Esteban
> 
> On Jul 27, 2016 10:03 AM, "Andreas Bruckbauer" < [log in to unmask]> wrote:
> 
>>
>> Would it not be much better to perform the data analysis on a scalable 
>> cluster which has fast connection to the storage instead of moving 
>> data around? We need to push software companies to make their 
>> solutions run on these machines. Instead of buying ever bigger 
>> analysis workstations which are obsolete after a few years, one would 
>> just buy computing time. The cluster can be shared with bioinformatics groups.
>>
>> My take on storage is that you need to have a cheap archive, otherwise 
>> there will be a point at which you run out of money to keep the ever 
>> expanding storage.
>>
>> Best wishes
>>
>> Andreas
>>
>> -----Original Message-----
>> From: "Douglas Richardson" <[log in to unmask]>
>> Sent: ‎27/‎07/‎2016 15:34
>> To: "[log in to unmask]"
>> <[log in to unmask]>
>> Subject: Re: analyzing really big data sets
>>
>>
>> I'll echo Paul's endorsement of Arivis for 3D data sets and George's 
>> suggestion regarding Visiopharm for 2D data sets (I really love that 
>> it doesn't duplicate the data into yet another proprietary file type).
>>
>>
>> However, these are both expensive and there are open source options
>> as well.  One of our groups has a great open-source workflow for
>> imaging and registering cleared brains (imaged & registered >80
>> cleared brains, ~150TB of data). Here is the reference:
>>
>> http://hcbi.fas.harvard.edu/publications/dopamine-neurons-projecting-posterior-striatum-form-ananatomically-distinct
>> The Tessier-Lavigne lab just released a computational method (ClearMap,
>> http://www.sciencedirect.com/science/article/pii/S0092867416305554) for
>> a similar process, as has the Ueda group with their CUBIC method
>> (http://www.nature.com/nprot/journal/v10/n11/full/nprot.2015.085.html),
>> although these both mainly deal with ultra-microscope data, which isn't
>> as intensive as other forms of lightsheet.
>>
>> Big data viewer in Fiji and Vaa3D are also good open source options 
>> for viewing the data.
>>
>> On the data storage side, the above mentioned publication was done 
>> mainly with a filing cabinet full of 2TB USB 3.0 external hard drives.
>> Since then, we've run 10Gbit optical fiber to all of our microscopes 
>> and workstations.  Most importantly, this 10Gbit connection goes right 
>> through to our expandable storage server downtown.
>>
>> I think the two big lessons we've learned are the following:
>>
>> 1) Make sure your storage is expandable; you'll never have enough.
>> We're currently at 250TB in a Lustre configuration with plans to push
>> into PBs soon.
>> 2) You will always need to move data, so make sure your connections are fast.
>> We have a 3 tier system: 1) Microscope acquisition computer > 2) 
>> Processing workstations > 3) Long-term storage server.  Connections to 
>> the cloud are not fast enough, so I don't feel this is an option.
>>
>> Finally, many versions of commercial microscope acquisition software
>> are unable to directly save data to network storage (or external
>> drives), no matter how fast the connection. This is a feature we need
>> to push the manufacturers for, or else you'll always be limited to the
>> storage space on your acquisition computer.
>>
>> -Doug
>>
>> On Wed, Jul 27, 2016 at 9:33 AM, Paul Paroutis 
>> <[log in to unmask]>
>> wrote:
>>
>>>
>>> We have been facing the same issue since the purchase of our Zeiss
>>> Lightsheet system. On the commercial side of things, Arivis has
>>> worked well for us and I would recommend giving that a shot. On the
>>> deconvolution side, we recently purchased the Huygens deconvolution
>>> module and it has given us nice results. We had also tested the
>>> Microvolution software and were really impressed at the speed and
>>> quality of deconvolution - the price tag put it out of our range for
>>> the time being, but it's definitely worth exploring.
>>>
>>
> 
> 
> 



Best regards,

    Mario Emmenlauer


--
BioDataAnalysis GmbH, Mario Emmenlauer      Tel. Buero: +49-89-74677203
Balanstr. 43                   mailto: memmenlauer * biodataanalysis.de
D-81669 München                          http://www.biodataanalysis.de/
