CONFOCALMICROSCOPY Archives

July 2007

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Subject: Re: subvoluming
From: Glen MacDonald <[log in to unmask]>
Reply-To: Confocal Microscopy List <[log in to unmask]>
Date: Tue, 24 Jul 2007 12:45:23 -0700
Search the CONFOCAL archive at
http://listserv.acsu.buffalo.edu/cgi-bin/wa?S1=confocal

Huygens will subdivide your volume into sub-volumes, referred to as
"bricks"; this is an option that you may turn off. Bricking lets the
computation stay within available RAM, but it increases processing
time, since the bricks overlap in all three dimensions to prevent edge
effects. I've not seen this damage the results.
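
To make the bricking idea concrete, here is a minimal numpy sketch of
overlap-and-crop processing (illustrative only, not Huygens' actual
implementation; the brick size, overlap, and function names are all
invented for the example):

    import numpy as np

    def process_in_bricks(volume, brick=(64, 64, 64), overlap=8,
                          func=lambda v: v):
        """Process a 3D volume in overlapping bricks, cropping the overlap."""
        out = np.empty_like(volume)
        nz, ny, nx = volume.shape
        for z in range(0, nz, brick[0]):
            for y in range(0, ny, brick[1]):
                for x in range(0, nx, brick[2]):
                    # Pad each brick by 'overlap' voxels on every side,
                    # clipped to the edges of the volume.
                    z0 = max(z - overlap, 0)
                    y0 = max(y - overlap, 0)
                    x0 = max(x - overlap, 0)
                    z1 = min(z + brick[0] + overlap, nz)
                    y1 = min(y + brick[1] + overlap, ny)
                    x1 = min(x + brick[2] + overlap, nx)
                    result = func(volume[z0:z1, y0:y1, x0:x1])
                    # Discard the padding, so only voxels computed with
                    # full neighborhood context are written back.
                    out[z:z + brick[0], y:y + brick[1], x:x + brick[2]] = \
                        result[z - z0:z - z0 + brick[0],
                               y - y0:y - y0 + brick[1],
                               x - x0:x - x0 + brick[2]]
        return out

The overlap is why the total work goes up: every brick recomputes a
shell of voxels that its neighbors also compute, in exchange for seams
that are free of edge artifacts.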

Adjusting the computed PSF to the characteristics of the volume with
depth is common to all deconvolution software. A search of the listserv
archives going back maybe 2 years should bring up discussion of this.
The PSF depends upon the degree of spherical aberration, which in turn
depends upon depth and refractive index (RI) changes within the sample.
A measured PSF may be far less accurate if the bead sits in a region
that is not representative of the RI changes within the sample, or of
the depth you are interested in. Compare the results of a measured PSF
vs. a calculated one in a known sample (as best as it can be known).
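
As a caricature of why a single measured bead may not suffice, consider
a toy model in which spherical aberration is approximated by an axial
blur that grows linearly with depth. This is purely illustrative; real
packages derive the PSF from diffraction theory and the stated optical
parameters, and every name and constant here is invented:

    import numpy as np

    def toy_psf(depth_um, size=33, sigma_xy=1.5, sigma_z0=3.0,
                broadening=0.05):
        """Gaussian stand-in for a PSF whose axial width grows with depth."""
        zz, yy, xx = np.mgrid[:size, :size, :size] - size // 2
        sigma_z = sigma_z0 * (1.0 + broadening * depth_um)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma_xy**2)
                     - zz**2 / (2 * sigma_z**2))
        return psf / psf.sum()  # normalize so deconvolution conserves flux

A bead measured at the coverslip corresponds to toy_psf(0); a structure
50 um deep behaves more like toy_psf(50), which is visibly more
elongated in z.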

Regarding your question on loss of detail, that may depend upon several
factors: adequate sampling frequency, correct parameters given to the
software (or guessed by blind algorithms), the SNR approximation, and
the number of iterations. Underestimating the SNR has led to blurring
in my hands. I've also seen detail lost in images where the software
failed to reach the requested quality level and simply kept iterating
because the maximum number of iterations was set too high.
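
On the sampling-frequency point, here is a quick back-of-the-envelope
check using the common widefield rule-of-thumb formulas (confocal
criteria are somewhat stricter, so treat these numbers as upper bounds
on the allowable step sizes):

    def nyquist_sampling(wavelength_nm=520.0, na=1.4, ri=1.515):
        """Rule-of-thumb Nyquist step sizes, in nm."""
        lateral_res = wavelength_nm / (2.0 * na)      # ~Abbe lateral resolution
        axial_res = 2.0 * wavelength_nm * ri / na**2  # ~axial resolution
        # Nyquist: sample at no more than half the resolution per axis.
        return lateral_res / 2.0, axial_res / 2.0

    xy_step, z_step = nyquist_sampling()
    print(f"pixel size <= {xy_step:.0f} nm, z-step <= {z_step:.0f} nm")

For a 1.4 NA oil lens at 520 nm emission this gives roughly 93 nm
pixels and a 400 nm z-step; undersampled data cannot be rescued by more
iterations.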

Regards,
Glen



On Jul 24, 2007, at 11:53 AM, Sarah Kefayati wrote:

> Thanks, Carl, for your reply!
> So you think that by using this technique we will be destroying some
> original data?
> And do you know if the Huygens software uses this technique too? I
> mean the reconstruction!
>
> thanks Carl
> sarah
>
> On 7/24/07, Carl Boswell <[log in to unmask]> wrote:
> Hi Sarah,
> My interpretation of sub-volume sampling is that the algorithm
> processes a portion of the image set, in 3-D, then reconstructs the
> whole image when done.  I think it has to do with efficiency of
> processing, especially if RAM or disk size is limiting.  It does not
> do so a slice at a time, but rather a subset of the 3-D volume.  With
> the older AutoQuant versions I had experience with, it was most
> efficient if the dimensions were powers of 2.
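>
> As a sketch of why those dimensions matter: FFT-based code often pads
> the volume up to the next power of two before transforming, something
> like the following (illustrative only; I don't know exactly how
> AutoQuant handled it internally):
>
>     import numpy as np
>
>     def pad_to_pow2(volume):
>         """Pad each axis up to the next power of two for fast FFTs."""
>         target = [1 << (n - 1).bit_length() for n in volume.shape]
>         pads = [(0, t - n) for t, n in zip(target, volume.shape)]
>         # Reflection padding limits ringing at the padded borders.
>         return np.pad(volume, pads, mode="reflect")
>
> If your stack is already, say, 512 x 512 x 64, no padding (and no
> wasted computation) is needed at all.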
>
> As for the variable PSF, that sounds like a completely different  
> topic, and one that I would be cautious of unless someone could  
> prove to me that it was not skewing the original data.
>
> Cheers,
> Carl
>
> Carl A. Boswell, Ph.D.
> Molecular and Cellular Biology
> University of Arizona
> 520-954-7053
> FAX 520-621-3709
> ----- Original Message -----
> From: Sarah Kefayati
> To: [log in to unmask]
> Sent: Monday, July 23, 2007 8:38 AM
> Subject: subvoluming
>
> Hello everyone!
> Searching the confocal list to gather more info about deconvolution,
> I saw a reply by David Biggs talking about subvoluming:
>
> "In 3D deconvolution you are typically working with a volume of  
> data and
> a spatially invariant PSF, such that it does not vary over the volume
> being processed.  This is necessary because of the use of Fourier
> Transforms to efficiently calculate the convolutions required for
> processing.  In blind deconvolution the PSF is also modified;
> however, it is still invariant over the volume being processed.
>
> The interesting thing is that you don't have to process your entire
> volume in one deconvolution operation.  You can have the software
> subvolume the dataset into smaller chunks and have the blind
> deconvolution determine a different PSF for each subvolume, thus
> allowing some form of spatial variation over the image.  The  
> subvoluming
> can be in either the XY plane or in Z depending upon how your software
> is set up.
>
> As you suggest, having a spatially varying PSF in Z may allow  
> spherical
> aberrations to be compensated for.  Subvoluming is an approximation to
> the true imaging model if you have a spatially varying PSF.  There has
> been work done in proper spatially varying PSF deconvolution, often in
> 2D for astronomical imagery such as that from the Hubble.    
> However, the
> computing requirements are significant, even compared to current 3D
> deconvolution!
>
> Subvoluming is also beneficial when your dataset is larger than that
> which can be processed by your computer system, though you do need  
> to be
> careful of any potential blending artifacts that may occur at the
> boundaries.
>
> In our experience subvoluming of confocal data works extremely well."
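>
> To picture the boundary blending mentioned above: one common remedy is
> to feather the overlap, weighting each subvolume's contribution with a
> ramp so the seam disappears. A toy 1-D numpy sketch (invented for
> illustration, not taken from any particular package):
>
>     import numpy as np
>
>     def feather_weights(n, overlap):
>         """Weights that ramp 0->1 over the leading/trailing overlap."""
>         w = np.ones(n)
>         ramp = np.linspace(0.0, 1.0, overlap)
>         w[:overlap] = ramp          # fade in
>         w[-overlap:] = ramp[::-1]   # fade out
>         return w
>
>     # Two 1-D "subvolumes" overlapping by 8 samples.
>     a = np.arange(0, 32, dtype=float)
>     b = np.arange(24, 56, dtype=float)
>     ov = 8
>     wa, wb = feather_weights(32, ov), feather_weights(32, ov)
>     blend = (a[-ov:] * wa[-ov:] + b[:ov] * wb[:ov]) / (wa[-ov:] + wb[:ov])
>     merged = np.concatenate([a[:-ov], blend, b[ov:]])  # seam-free result
>
> Here the two ramps sum to one across the overlap, so merged reproduces
> np.arange(56) exactly; with real deconvolved chunks the same weighting
> hides small disagreements at the boundary.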
>
> I'm trying to make it clear for myself: does it mean calculating the
> PSF for each layer during the z scanning (while we are collecting the
> z stack), and then using a different PSF during the 3D deconvolution
> depending on how deep we are in the sample? Could you please discuss
> this technique some more?
>
> thanks
> sarah