CONFOCALMICROSCOPY Archives

March 2000

CONFOCALMICROSCOPY@LISTS.UMN.EDU

From: Jason Swedlow <[log in to unmask]>
Date: Thu, 30 Mar 2000 22:55:01 +0100
Hi,

Like Lutz & Chris, I’m going to add my own few cents on deconvolution
and PSFs.  I apologize in advance for coming in late and deviating from
the current thread.  I usually sit on the sidelines and watch.  It’s
taken me a few days to find the time to put this together.  Anyway, I
know a bit about this topic since I did my PhD thesis with Dave Agard &
John Sedat.  I have consulted for Applied Precision periodically since I
got my PhD in 1994, and I currently own an Improvision system and a
DeltaVision system.

As Wes points out, the problem with deconvolution microscopy is that
this term refers to many fundamentally different approaches and the
current nomenclature doesn’t clearly distinguish between them.  You have
to be careful which you choose.  To my mind, the major difference
between the methods Wes originally listed is that the neighbor-based
methods (their design has been discussed previously) radically change
intensity relationships, so that subsequent quantitative analysis,
ratioing, etc. are no longer possible.  In general, the various iterative
methods preserve the intensity relationships, so you can compare the
amount of nuclear and cytoplasmic staining, and so on.  So it depends on
what you need to use the images for.
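To make the distinction concrete, here is a minimal sketch of a nearest-neighbor scheme in Python (assuming numpy and scipy; the subtraction weight `c` and blur width `sigma` are illustrative values, not any vendor's parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nearest_neighbor_plane(stack, k, c=0.45, sigma=2.0):
    """Nearest-neighbor deblurring of plane k of a z-stack.

    Treats blurred copies of the adjacent planes as estimates of the
    out-of-focus haze and subtracts them.  `c` and `sigma` are purely
    illustrative; real implementations derive the blur from the PSF.
    """
    above = stack[min(k + 1, stack.shape[0] - 1)]
    below = stack[max(k - 1, 0)]
    haze = gaussian_filter(above, sigma) + gaussian_filter(below, sigma)
    result = stack[k] - c * haze
    return np.clip(result, 0, None)  # negative intensities are unphysical
```

The subtract-and-clip step is exactly where the intensity relationships get disturbed, which is why ratioing after a neighbor-based method is risky.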

One issue that both Wes and Lutz bring up is the use of empirical vs.
calculated PSFs.  Both seem to agree that if possible, it would be
"better" to use the empirical PSF.  This makes intuitive sense, and the
comparison has been done between empirical and theoretical PSFs, but it
hasn’t really been carefully shown for empirical vs. blind deconvolution
on samples with a wide range of S:N (at least to my knowledge—if someone
has a good reference, please speak up).  And no, it’s not enough just to
show that the image looks "better"—a true test requires known standards
(but a standard can be real, like a microtubule), a range of S:N levels
and preferably a frequency-domain analysis of the restoration.  Tim
Holmes’ group has presented these types of analyses and shown that blind
deconvolution works very well in their images. Moreover, there is a
version of AutoDeblur especially designed for LSCM data.  While any
deconvolution method can theoretically deal with LSCM data, AutoQuant
has really pushed the development of specific filtering strategies to
handle LSCM data.

So how do the iterative methods differ?  Speaking from my own
experience: not by much, if you just consider the algorithms themselves.
That is, they all restore the types of images people use to demonstrate
the methods (typically good S:N) and give significant improvements in
contrast and S:N.  The major differences are implementation.
Specifically, what types of filtering are used, whether the whole image
is handled or parts divided up, how long it takes to run, type of PSF,
etc. There are published comparisons between different methods, but
usually one group has rewritten another group’s method and omitted all
the "tricks" (filtering, padding, etc.) that make these things work.
Yes, it would be nice to have a clear comparison, but we would still
argue about what types of samples to use.  A true comparison will be
hard now, since most of the code is in the hands of commercial vendors.
As this thread shows, all the products have customers that swear by
their results.  I’m tempted to leave it at that.
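For concreteness, the core of a constrained iterative method can be sketched with the textbook Richardson-Lucy update (a generic illustration in Python using scipy; this is not any vendor's implementation, and the filtering and padding "tricks" mentioned above are deliberately omitted):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=10):
    """Textbook Richardson-Lucy iteration.

    Each pass blurs the current estimate with the PSF, compares it to the
    recorded image, and redistributes the mismatch via the mirrored PSF.
    Commercial packages wrap this core in regularisation and filtering.
    """
    psf = psf / psf.sum()               # PSF must integrate to 1
    psf_flip = psf[::-1, ::-1]          # mirrored PSF for the correction step
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate
```

Because the update is multiplicative on a non-negative starting estimate, intensities stay non-negative and total flux is roughly preserved, which is the property that makes later quantitation defensible.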

But what about that pesky empirical PSF?  Is it that hard to measure??
Wes is right: your first one won’t be right.  But as others have said,
it’s a matter of knowing what you’re doing, i.e., having a good protocol
(just like running a gel!).  But a "bad" PSF is telling you something.
Any of the problems that occur during a PSF measurement (e.g., stage
drift, refractive mismatches in the immersion media, temperature changes
due to heavy ventilation, lamp flicker, camera noise, etc.) also occur
during your experiment (the "biology" that we all want to do).  In
short, the PSF is reporting on the errors in your imaging.  You can
choose to ignore it, and deconvolve using a theoretical PSF or use blind
deconvolution.  But all those errors are still there, and because of the
assumptions made about the PSF, blind deconvolution (or any other type)
can’t do anything about them.  (In principle, it would be possible to deal
with stage drift, scattering, etc. in software, but there are no
commercial versions of these corrections).  Alternatively, you can use
the errors in the PSF measurement to suggest ways to improve your
microscopy.  Then you can decide what deconvolution method you want to
use (blind, empirical, etc.).  And just reading the thread on this one
shows that the choice involves performance, as well as cost, ease of
use, etc.

I’m pretty sure that all the deconvolution microscope vendors will
include the measured PSF for the lens you buy with the microscope.
Obviously, this doesn’t help you if you only buy the software.

A VERY IMPORTANT POINT:  To my knowledge, all commercially available
iterative deconvolution methods assume a radially and axially symmetric
PSF (the averaging is done in Fourier space in all cases, I think).  The
axial symmetry is most dependent on the amount of spherical aberration
present in your image.  WHEN USING IMMERSION OBJECTIVES, IT IS SO HARD
TO SET UP A NON-ABERRATED IMAGING PATH THAT YOU SHOULD ASSUME YOUR
IMAGING LIGHT PATH GENERATES SPHERICAL ABERRATION.  THE DECONVOLVERS
HAVE BEEN SCREAMING ABOUT THIS FOR SOME TIME; IN THE LAST FEW YEARS EVEN
THE CONFOCAL TYPES HAVE PICKED UP THE CALL.  It’s easy to see—just look
for asymmetry in the out-of-focus rings above and below a bright
fluorescent source in your sample.  If you see any intensity asymmetry
(i.e., brighter above the object than below) as you focus up and down,
you have spherical aberration.  An image that contains spherical
aberration has degraded axial resolution and decreased signal (same
signal spread out over a larger volume).  Regardless of the
deconvolution method you use, YOU WILL NOT CORRECT FOR THIS.  What to
do??  You’ll need to 1) adjust the refractive index of the immersion
medium or 2) get a lens with a correction collar (like the 60x/1.2 water
immersion lenses that have become available over the last few years).
This sounds hard, but once you’ve picked the right oil (or the right
setting of the correction collar) for your sample it’s over with.
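The above-vs-below check on a bright point source can be reduced to a single diagnostic number.  This is an illustrative sketch, not a standard metric:

```python
import numpy as np

def axial_asymmetry(stack, z_focus):
    """Ratio of out-of-focus intensity above vs. below focus.

    `stack` is a z-stack of a sub-resolution bead; `z_focus` is the index
    of the in-focus plane.  For a PSF that is symmetric about focus the
    ratio should be close to 1; a large deviation suggests spherical
    aberration from a refractive-index mismatch.
    """
    above = stack[z_focus + 1:].sum()
    below = stack[:z_focus].sum()
    return above / max(below, 1e-12)
```

What counts as "close to 1" depends on noise and sampling, so treat the number as a trend to minimise while swapping oils or turning the correction collar, not as a pass/fail threshold.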

THE POINT: If you ignore what the measured PSF is telling you, you will
record an image with degraded signal-to-noise and resolution.  Why
should you care??  Your major interest is the biology??  Simply because
when you go to determine what the image is telling you, (e.g., whether
two or more components are colocalised), the aberrations present in your
image will give you false results—your "biology" will be wrong.  And no
amount or type of deconvolution or LSCM will change this.

SO HOW DO YOU COLLECT A PSF?:  Honestly, it’s not that hard.  It won’t
take you months.  You’ll have to order 3-4 bottles of oil from Cargille
(total cost roughly 50 dollars) and some beads from Molecular Probes.
Send me an email and I’ll send you the protocol.

BUT THIS TAKES TIME AWAY FROM ME DOING AN EXPERIMENT!!!  Yep, but in
general you have to do the experiment correctly.  If you can get away
without worrying about some of the details, then do it.  But be aware of
the limitations.  And wouldn’t it be better not to have so many caveats?

And finally:

BOY IT TAKES A LONG TIME TO DECONVOLVE!!!!  This varies widely depending
on the actual hardware and software.  As I said, on the software side,
there are major differences in implementation (as well as type of
method—note Tim Holmes’ mention of five different algorithms in the
AutoQuant package).  Wes mentioned the differences he sees between a
current Pentium and an SGI.  Deconvolution calculations require a number
of large arrays representing different forms of the image to be stored
simultaneously.  This means that large amounts of data must be moved
around inside the machine, so computers with very fast buses actually
perform much better even with nominally slower processors (the clock
speed actually isn’t the best performance spec anyway).  If you have the
means, SGIs are built to move large amounts of data quickly, so they
really perform in these applications (nope, I don’t own SGI stock).
Scared of Unix??  We have a number of non-specialists using our SGIs and
they don’t seem to mind: all the file management, etc., looks like Windows
or Mac.  Plus, unlike the Windows or Mac boxes, they are truly
multi-tasking, so they don’t crash nearly as much.  Our current spec: 10
iterations of constrained iterative deconvolution on a 512 x 512 x 64
image takes about 3.5 minutes on a dual processor R12000 Octane.  This
beats the real data collection time (find the cell, set the imaging
conditions, take the image), so the experiment is now limiting.
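As a rough illustration of why memory bandwidth matters, the working set of an FFT-based method can be estimated with simple arithmetic (the buffer count and bytes-per-voxel here are assumptions for illustration, not measurements of any package):

```python
def working_set_mb(nx, ny, nz, n_arrays=5, bytes_per_voxel=8):
    """Rough working-set estimate for FFT-based deconvolution.

    Assumes `n_arrays` full-size buffers held at once (e.g. image,
    estimate, blurred estimate, correction, PSF transform), each storing
    single-precision complex voxels (8 bytes).  Illustrative only.
    """
    return nx * ny * nz * n_arrays * bytes_per_voxel / 2**20

# The 512 x 512 x 64 stack from the text:
# working_set_mb(512, 512, 64) -> 640.0 (MB)
```

Hundreds of megabytes shuttling through every iteration is why a machine with a fast memory bus can beat one with a nominally faster clock.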

Cheers,

Jason Swedlow


**************************
Department of Biochemistry
The University of Dundee
MSI/WTB Complex
Dow Street
Dundee  DD1 5EH
United Kingdom

phone (01382) 345819
Intl phone:  44 1382 345819
FAX   (01382) 345783
email: [log in to unmask]
**************************
