CONFOCALMICROSCOPY Archives

February 1995

CONFOCALMICROSCOPY@LISTS.UMN.EDU

From: Paul Goodwin <[log in to unmask]>
Reply-To: Confocal Microscopy List <[log in to unmask]>
Date: Mon, 6 Feb 1995 09:48:02 -0800
Content-Type: text/plain
Warning: Jim's original discourse follows, so this message keeps getting
longer. If you don't give a rip about this stuff, DELETE now.
 
Now I'm just a poor little ol' biologist from the Big Woods, but it
seems to me that the argument Jim is making is very common amongst
engineers.
 
In principle, the deconvolution should be based upon a delta function,
that is, an infinitely small, infinitely intense single point of light,
the equivalent of the perfect "click" in sound theory or the perfect
door slam in automobile engineering. Such a function contains infinite
energy spread over an infinite range of frequencies. An analysis of the
frequency response of such a function passed through a black box yields
a function that characterizes the black box independent of the
components in the box. Now any engineer worth her/his salt knows that
you cannot generate a delta function, although the Big Bang came close
(humor). To get around this, and many other applied-math problems,
engineers make approximations. Don't scoff: you fly, you drive, you use
your confocals, and you depend on these approximations every day. The
question is how good your approximations are, and how sound the
assumptions were that permitted you to simplify the equations so that
you could use those approximations. In part, that depends on how good
the black box was to begin with. If the function is not consistent, if
the math is bad, etc., then the method won't work. In addition, an
iterative algorithm can test some of the assumptions and fine-tune the
results.
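 
Just to put a number on how good such an approximation can be, here is
a rough one-dimensional sketch in Python. All of the numbers (a Gaussian
PSF with sigma = 100 nm, a 200 nm FWHM bead standing in for the ideal
point, 20 nm sampling) are invented purely for illustration:

import numpy as np

def gaussian(x, sigma):
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

x = np.arange(-50, 51) * 20.0          # positions in nm, 20 nm sampling
true_psf = gaussian(x, 100.0)          # the "black box" blur we want to characterize

delta = np.zeros_like(x)
delta[50] = 1.0                        # the ideal, unrealizable point source
bead = gaussian(x, 85.0)               # a real-world stand-in: roughly a 200 nm FWHM bead

measured_from_delta = np.convolve(delta, true_psf, mode="same")   # exactly the PSF
measured_from_bead = np.convolve(bead, true_psf, mode="same")     # PSF broadened by the bead

err = np.abs(measured_from_bead - true_psf).max() / true_psf.max()
print(f"peak error of the bead-measured PSF: {err:.1%}")

With those made-up numbers the bead-measured PSF comes out noticeably
too wide, which is why in practice you use the smallest bright bead you
can find, or correct for the bead size afterwards.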
 
Ultimately, any method comes down to how well it works for my needs.
Just as there is no single confocal that meets every application,
neither is there any method that meets every need. I would not
recommend deconvolution for studying planar data (see Guy Cox's
message); nearest-neighbor or confocal would be better. But in those
instances where the assumptions are well met, like FISH, most IMC, and
organelle localization, in those applications where confocal may cause
biological artifacts (like live-cell work; see Dick Macintosh's paper
on tubulin assembly), or where you can't afford to buy a different
laser for every wavelength you need (have you priced UV lasers
lately?), some other methods are worth exploring.
 
What I am trying to reach here is some level of consensus so that when
we talk resolution, we can use a common language. When we talk
sensitivity, or signal-to-noise, I would like to have some agreement on
the definitions before starting the tests. This applies not just to
comparing confocal to other technologies, but to comparing confocal
models and vendors. Wouldn't it greatly simplify the selection process
if there were established gold standards by which systems could be
compared? Sure, just as in the computer industry, the "benchmarks"
would be varied. This helps to match systems to needs. And sure, the
old adage would hold: there are lies, damn lies, and benchmarks. But we
could at least have a common language.
 
Ultimately, this ol' boy from the Big Woods cares about one thing: how
does it perform for the science I do? And for now, all I can say about
deconvolution and confocal is that once I was blind; now I see, every
day, with deconvolution, things I ain't never seen with my confocal.
 
________________________________________________________________________________
 
 
Paul Goodwin
Image Analysis Lab
FHCRC, Seattle, WA
 
On Fri, 3 Feb 1995, James Pawley wrote:
 
> Dave Piston wrote:
>
> >More resolution. . .
> >
> >In principle, the Fourier spectrum is a good idea, and may prove useful
> >in practice as well.  The problem comes in the number of photons collected
> >in the data.  It is well known that the information content (and thus the
> >resolving power) in an image depends on the signal-to-noise, which
> >is greatly limited by the shot noise (square root of the number of
> >photons collected).  However, the exact dependence is still unsolved, to
> >my knowledge.  The second problem is that deconvolution, to an extent,
> >acts as a low-pass filter of the original data, and this in turn changes
> >the noise characteristics, which would make direct comparison of raw
> >confocal data sets with deconvolved data sets difficult.  Perhaps it would
> >be wise to also process the confocal data set with the same deconvolution
> >software before comparison.  This should be doable as long as the confocal
> >images are the same depth (number of bits = 16) as the widefield images.
>
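 
Putting a quick number on Dave's shot-noise point: under Poisson
counting statistics the relative noise on a measured intensity falls
only as 1/sqrt(N). A throwaway sketch with made-up photon counts:

import numpy as np

rng = np.random.default_rng(0)
for n_photons in (100, 10_000, 1_000_000):
    counts = rng.poisson(lam=n_photons, size=100_000)   # repeated measurements of one voxel
    print(f"N = {n_photons:>9,}: measured {counts.std() / counts.mean():.4f}, "
          f"predicted 1/sqrt(N) = {1 / np.sqrt(n_photons):.4f}")

So going from 1% to 0.1% relative noise costs a factor of 100 in
collected photons.
 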
> I heartily agree with Dave. S/N is both crucial and complex. At the
> extreme, if we assume that the imaging process is "Linear and
> Shift-invariant" (the conditions for deconvolution as per Peter Shaw,
> met in fluorescence imaging, widefield or confocal), in the TOTAL
> absence of noise (statistical or otherwise), using infinitely small
> pixels (you can never get around Nyquist) and given PERFECT knowledge
> of the point-spread function, ANY set of 3D image data can be
> deconvolved (using a perfect computer!) into one, and only one, object
> function, i.e., the processed image could have only one solution and
> hence the "spatial resolution" would be infinitely high.
>
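 
As a toy illustration of that "one and only one object function" claim,
here is a 1-D sketch with an invented Gaussian PSF, no noise at all,
and the PSF known exactly:

import numpy as np

def gaussian(x, sigma):
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

n = 256
x = np.arange(n) - n // 2
psf = gaussian(x, 1.5)                     # a narrow, perfectly known blur

obj = np.zeros(n)
obj[120], obj[125] = 1.0, 0.7              # two nearby points, unequal brightness

otf = np.fft.fft(np.fft.ifftshift(psf))    # transfer function of the blur
image = np.real(np.fft.ifft(np.fft.fft(obj) * otf))          # the blurred "data"
recovered = np.real(np.fft.ifft(np.fft.fft(image) / otf))    # straight Fourier division

print(np.abs(recovered - obj).max())       # essentially zero: the data admit only one object

Add photon noise, or get the PSF slightly wrong, and that naive
division blows up, which is why practical deconvolution algorithms are
regularized or iterative.
 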
> There are, of course, "infinitely many" snags: leaving aside bleaching,
> collecting the image would take an infinite time, ditto for recording the
> PSF etc. etc.
>
> How does this work?  Well, the "shift-invariant" part of the condition says
> that any image is just made up of point objects, each of which will be
> blurred in the same way (Tim Holmes' Blind Deconvolution/Minimum Entropy
> Method doesn't assume that you know the PSF but still assumes that there is
> one.) So imagine a 3D intensity plot of a point object (i.e., a 3D Airy
> disk image of an INFINITELY small source).  If you recorded EXACTLY such a
> distribution with the imaginary microscope described above, then you would
> know that you were imaging a point.  If, however, your PSF was recorded not
> from an infinitely small source but, for instance, from a 200nm bead, then
> your imaginary microscope would record a very slightly different 3D data
> pattern and your computer would tell you that this pattern could only have
> been made from a distribution of points defining the original 200nm object
> and been blurred by the perfectly-known PSF.  This would be true no matter
> how small the "actual" source was.  Although this ability clearly implies
> infinite "spatial resolution", a more conventional way of getting
> resolution into the story would be to place two 10 nm sources, say, 10 nm
> apart.  Of course, to look at this object by eye it would appear to be a
> single point. However, as long as the signal has no noise and you know the
> PSF, the computer could easily tell you that such a 3D image-intensity
> pattern could only have been made by two 10nm sources, 10nm apart. Change
> the numbers and this is always true.
>
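 
For scale: with an invented Gaussian PSF of sigma = 100 nm and 5 nm
sampling, here is how little the blurred image of two points 10 nm
apart differs from the blurred image of one point of the same total
brightness:

import numpy as np

x = np.arange(-400, 401) * 5.0             # positions in nm, 5 nm sampling
psf = np.exp(-0.5 * (x / 100.0) ** 2)
psf /= psf.sum()

one_point = np.zeros_like(x)
one_point[400] = 1.0                       # a single point at the origin
two_points = np.zeros_like(x)
two_points[399] = two_points[401] = 0.5    # two points 10 nm apart, same total intensity

img_one = np.convolve(one_point, psf, mode="same")
img_two = np.convolve(two_points, psf, mode="same")

print(np.abs(img_one - img_two).max() / img_one.max())   # ~0.001, i.e. about a 0.1% difference
 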
> The problem is that the "shape-intensity" of a 3D data set recorded from two
> points may differ from that of a single point by <1%, or <0.00001%
> (depending on how close the two points are together).  Assuming that the
> PSF must be defined over a few hundred voxels (at least), in the "MoreReal"
> world where you have to count photons (and put up with Poisson statistics)
> you will soon need to collect very large numbers of photons in order to
> (accurately!) see a 1% signal change in the brightest of these voxels (with
> some fixed statistical error) so that your computer can distinguish the
> "one-point" distribution from the "two-point" one.  The situation gets even
> worse if we allow the intensity of the two points to vary separately.
>
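 
The arithmetic behind "very large numbers of photons", assuming pure
Poisson noise in a single voxel: to detect a fractional difference f
with the shot noise k times smaller than that difference, you need
roughly N = (k/f)^2 photons in that voxel.

for f in (0.01, 0.001):        # a 1% and a 0.1% difference between the two models
    for k in (1, 3):           # shot noise equal to, or 3x smaller than, the difference
        n = (k / f) ** 2
        print(f"difference {f:.1%}, {k}-sigma: ~{n:,.0f} photons in that voxel")

Given that the one-point versus two-point difference sketched above was
more like 0.1% than 1%, the millions-of-photons line is the relevant
one.
 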
> This complexity is the source of the difficulty in coming up with a method
> of measuring the "resolution" of deconvolved images.  The result depends
> crucially on the S/N of the data AND knowing the PSF no matter how much
> specimen may be in the beam path.
>
> The entire story is very similar to the "deconvolution" of
> energy-dispersive x-ray spectra, except that in this case the PSF
> changes markedly with the energy of the x-ray and, to a lesser extent,
> with the count rate. Again, the higher the S/N, the more success you
> will have in "separating" peaks that appear to overlap (it is not
> uncommon to need 10,000,000 counts in a peak, and sampling intervals
> of <0.5% of its FWHM, in order to see a 1% impurity in the presence of
> a "nearby" peak).
>
> So the bottom line is: beware of claims of "resolution", especially
> where deconvolution is involved.  At present, it can only be called
> "undefined".  The contrast transfer function is a better bet, but it
> is important to remember that when the CTF is reduced to, say, 10%, it
> requires at least 100x more photons to "see" features of this size
> than to see much larger features.  We must stop thinking of the
> "resolution limit" as a size down to which you can see everything just
> fine and below which you can see nothing.  It doesn't work that way in
> "Diffraction land."
>
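 
The scaling behind that 100x figure: the detected modulation is
proportional to the CTF, while the shot noise grows as sqrt(N), so
holding the signal-to-noise on that modulation constant means N must
grow as 1/CTF^2.

for ctf in (1.0, 0.5, 0.2, 0.1):
    print(f"CTF = {ctf:.0%}: ~{1 / ctf ** 2:.0f}x the photons needed at CTF = 100%")
 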
> Sorry to have gone on so long.  Must be a hobby-horse.
>
>                    ***************NEW ADDRESS**************
> Prof. James B Pawley,                                        Ph.  608-263-3147
> Room 1235, Engineering Research Building,         NEW NEW NEW FAX 608-265-5315
> 1500 Johnson Dr. Madison, Wisconsin, 53706.
> [log in to unmask]
>
