CONFOCALMICROSCOPY Archives

August 2007

CONFOCALMICROSCOPY@LISTS.UMN.EDU

From: Mark Cannell <[log in to unmask]>
Reply-To: Confocal Microscopy List <[log in to unmask]>
Date: Thu, 23 Aug 2007 10:39:13 +1200
Content-Type: text/plain

Steffen Steinert wrote:
> Before seriously starting to investigate biological matters, I 
> remembered the statement, which I also came across in James 
> Pawley's recent book: "Deconvolve everything!!".
Yes!
> I'm not an expert in deconvolution and never ran it myself before, so 
> would you experienced "deconvolutionists" agree that it's worthwhile 
> for 2D images?
Yes!
> 2D deconvolution would clearly be limited compared to "ideal" 
> 3D deconvolution, since one does not have the information from 
> adjacent planes. Thus, it's quite inaccurate, if not impossible, to 
> remove out-of-focus light, isn't it? 
Yes!
> On the other hand, one could still recover high spatial frequencies 
> which were attenuated by the OTF (actually by the NA), not to 
> forget removing Poisson noise. Is the noise removal due to the fact 
> that features smaller than the PSF size will be neglected during 
> deconvolution, or because the spatial frequency of noise is most likely 
> to be outside the OTF limit (2NA/lambda and NA²/(2*n*lambda))? Is there 
> a reference for the range of frequencies at which noise would be 
> expected, or is it rather evenly distributed (i.e. when looking at the 
> Fourier transform of an image)?

Yes, deconvolution still provides the optimal filter for the data. The out 
of focus light cannot be removed (unless the sample is so thin there is 
none, or you are using TIRF) without 3D data. The noise occupies the 
entire sampling frequency range.
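For concreteness (my addition, not part of the original exchange), the two
cutoff frequencies quoted in the question can be computed directly. The NA,
wavelength and refractive index below are assumed example values for an oil
immersion objective:

```python
# Sketch: widefield OTF cutoff frequencies. All numeric values are
# illustrative assumptions, not from the original message.
NA = 1.4           # numerical aperture (assumption)
wavelength = 0.5   # emission wavelength in micrometres (assumption)
n = 1.518          # immersion oil refractive index (assumption)

lateral_cutoff = 2 * NA / wavelength         # 2NA/lambda, cycles per um
axial_cutoff = NA**2 / (2 * n * wavelength)  # NA^2/(2*n*lambda), cycles per um

print(f"lateral cutoff: {lateral_cutoff:.2f} cycles/um")  # 5.60
print(f"axial cutoff:   {axial_cutoff:.2f} cycles/um")    # 1.29
```

Shot noise is white, so its power extends flatly across the whole sampling
band; anything beyond these cutoffs in a Fourier transform of the image can
only be noise.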
> Due to the missing automated stage I can't record the PSF of my 
> system. Normally one records the PSF/OTF on the actual system, 
> basically to include possible misalignment, underfilling of the back 
> aperture, etc. If I remember correctly, I think the Deltavision guys 
> measure the objectives separately and use this measured PSF for 
> deconvolution (leaving blind decon aside for now). Is this because of 
> the high precision of their scope, hence eliminating possible 
> misalignment effects on the OTF? What are general experiences in terms 
> of measuring an objective's PSF on a different system compared to the 
> "real" one in the actual scope (spherical aberrations, asymmetric 
> shapes)?
Provided you have a good RI match and proper rear aperture 
illumination, the PSF for a high quality lens (say plan apo) is very 
close to the theoretical wide-field prediction. If you don't pay 
attention to these factors then an aberrated PSF always results. It 
should be noted that pure spherical aberration may not be as bad as you 
might think, as it may be offset by defocus in thin samples.
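To illustrate what "the theoretical wide-field prediction" can look like for
the in-focus plane (my sketch, not from the thread): the lateral PSF of an
aberration-free circular aperture is the Airy pattern, I(v) = (2*J1(v)/v)^2
with v = 2*pi*NA*r/lambda. Pixel size, NA and wavelength below are assumptions.

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def airy_psf_2d(shape=(65, 65), pixel_um=0.05, NA=1.4, wavelength_um=0.5):
    """In-focus lateral widefield PSF as an Airy pattern.
    All default parameter values are illustrative assumptions."""
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    y, x = np.indices(shape)
    r = np.hypot(y - cy, x - cx) * pixel_um          # radius in micrometres
    v = 2 * np.pi * NA * r / wavelength_um
    psf = np.ones(shape)                              # limit of (2*J1(v)/v)^2 at v=0
    nz = v > 0
    psf[nz] = (2 * j1(v[nz]) / v[nz]) ** 2
    return psf / psf.sum()                            # normalise to unit energy

psf = airy_psf_2d()
```

A model PSF like this is what software falls back on when no measured bead
image is available.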
> As to deconvolution results. How can one judge whether an algorithm 
> produced a correct result? Obviously, an experienced eye will see most 
> of the artifacts, but what are objective, reliable and measurable 
> characteristics for saying that this image has been improved or 
> that one is clearly ruined? Signal-to-noise ratio, Fourier analysis, 
> image properties (i.e. speckles, ringing effects)?
A very big question and one that has no simple answer. There is no 
'correct' result in the presence of noise. The question is which result 
is 'closest', and your definition of 'closest' needs to be spelled out. 
Is 'closest' the least noisy, or the one with the highest spatial 
frequencies, etc.?
> What is an appropriate way of defining the S/N ratio for images? I 
> ran across many different methods, such as: mean(I)/std(I) or 
> max(I)/std(I), or maybe applying a morphological operation, 
> distinguishing signal and background, and then 
> mean(signal)/sqrt(mean(noise)+mean(signal)). Which one is a commonly 
> accepted method for S/N calculation in image processing?

S/N is just that: signal over sqrt(variance(signal)). Just make sure you 
know what the signal is (i.e. not background)... You can calculate the 
variance from repeated images.
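Sketched in Python with synthetic Poisson data (the frame count and mean
photon count are assumptions for the example): estimate the per-pixel
variance from repeated frames of the same field, then form S/N as mean over
sqrt(variance).

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 100.0                                    # assumed mean photon count
stack = rng.poisson(true_mean, size=(20, 64, 64)).astype(float)  # 20 repeat frames

mean_img = stack.mean(axis=0)        # per-pixel signal estimate
var_img = stack.var(axis=0, ddof=1)  # per-pixel noise variance from the repeats

snr = mean_img.mean() / np.sqrt(var_img.mean())
print(f"estimated S/N: {snr:.1f}")   # for Poisson data, about sqrt(100) = 10
```

For pure Poisson statistics the variance equals the mean, so this recovers
S/N of roughly sqrt(N) photons, which is a useful sanity check on the estimate.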
>
> With respect to appropriate Deconvolution algorithms, essentially all 
> references say that iterative algorithms are superior to linear 
> filters (due to applying constraints (i.e. non-negativity), not a 
> simple high-pass filter, etc.).  Because of the missing PSF in my 
> particular case, Blind-Decon seems to be the right choice. Or does 
> someone disagree on that one?
Why don't you actually measure the PSF? Failing that, use the calculated 
wide-field PSF.
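A minimal sketch of non-blind iterative deconvolution with a calculated PSF:
Richardson-Lucy iteration, one standard non-negativity-preserving algorithm.
This is my illustration (with a Gaussian stand-in for the PSF), not code from
the thread or the poster's Matlab blind-decon.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25):
    """Basic Richardson-Lucy deconvolution (2D, known PSF)."""
    image = np.maximum(np.asarray(image, dtype=float), 0)  # clip FFT rounding noise
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())           # flat starting guess
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)         # avoid divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Demo: a synthetic point source blurred by an assumed Gaussian "PSF"
x = np.exp(-0.5 * ((np.arange(15) - 7) / 2.0) ** 2)
psf = np.outer(x, x)
psf /= psf.sum()
img = np.zeros((31, 31))
img[15, 15] = 100.0
blurred = fftconvolve(img, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=50)
```

The multiplicative update keeps the estimate non-negative, which is the
constraint that gives iterative methods their edge over linear filters.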
> Applying the blind-deconvolution in Matlab to various images of 
> fluorescently stained cells, also having different noise levels, 
> revealed rather disappointing results. 
I am not a fan of blind deconvolution. It is unclear to me that in a 
complex sample with no 3D data that it could possibly work...

Hope this helps.

Cheers
