CONFOCALMICROSCOPY Archives

January 1997

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Subject:
From:
Paul Goodwin <[log in to unmask]>
Reply To:
Confocal Microscopy List <[log in to unmask]>
Date:
Wed, 29 Jan 1997 10:13:38 -0800
The way I look at the basic theory goes back to a class I had in grad
school. The idea is that you have a black box that you are trying to
characterize. For argument's sake, let's assume that it has a weight and a
spring in it. To characterize the black box, you could try to take it
apart, analyze the components, and put it back together again, but in
doing so you may very well change the very thing that you are trying to
measure (Heisenberg). The other way to do this is to apply a waveform to
the box (shake it) and measure how the box behaves. If you do this for all
patterns of shaking, you could characterize the box. It turns out to be
easier than that. If you could provide to the box a "shake" that has all
frequencies in it, then you could do it all at once. This magic shake is
the same as kicking the box, infinitely quick, infinitely hard, also known
as a delta function. We know this works because we use it all the time.
When you set out to evaluate a new automobile, you "slam the doors and
kick the tires". You apply a limited form of the delta function. Believe
me when I say that you would get a very different response from, say, a
Lexus or a Mercedes than you would from a Yugo.
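
To see this concretely, here is a rough sketch in Python (using numpy; the
grid size is arbitrary and the example is purely illustrative) showing that
a single sharp kick really does contain every frequency at equal strength:

    import numpy as np

    # A unit impulse (the "kick") on a 256-sample grid.
    impulse = np.zeros(256)
    impulse[0] = 1.0

    # Its Fourier transform has magnitude 1 at every frequency, which is
    # why a single sharp kick probes all frequencies at once.
    spectrum = np.fft.fft(impulse)
    print(np.allclose(np.abs(spectrum), 1.0))   # prints True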

Well, the equivalent in imaging is to measure the response to an infinitely
small, infinitely bright point of light. This can be approximated by
measuring the 3-D response of a bright object that is smaller than the
diffraction limit of the optics. We often use an 80 nm fluorescent
microsphere. From this 3-D image (the point spread function, or PSF) we
can calculate a function that describes the frequency response of the
optics in our whole black box of a microscope. This is the
optical transfer function, or OTF. Once you have this, you can
take a complex data set (i.e., a stack of images) through the microscope.
This stack of images represents "reality" (where the fluorescence is)
convolved with (or confused with) the blurring function of the microscope.
In the magic world of mathematics, there are a number of ways that one can
then unconfuse (i.e. deconvolve) the reality out of the complex data set.
This then gives a stack of images that represents the best guess of where
the fluorescence really was before it was confused by the blurring of the
optics.
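
As a rough numerical sketch of that idea in Python (numpy only; the stack
size and the Gaussian standing in for a measured bead PSF are made up for
illustration, not from any real system), blurring by the PSF is a
multiplication by the OTF in frequency space, and in the noise-free case
the OTF can simply be divided back out:

    import numpy as np

    shape = (32, 64, 64)                 # a small synthetic z-stack
    rng = np.random.default_rng(0)
    reality = rng.random(shape)          # stand-in for the true fluorescence

    # A made-up Gaussian blur standing in for the measured 3-D bead image
    # (a real PSF would come from imaging a sub-resolution microsphere).
    zz, yy, xx = np.indices(shape)
    cz, cy, cx = (s // 2 for s in shape)
    psf = np.exp(-((zz - cz) ** 2 / 4.0 + (yy - cy) ** 2 / 1.5 + (xx - cx) ** 2 / 1.5))
    psf /= psf.sum()

    # The OTF is the Fourier transform of the PSF, and blurring by the
    # PSF is just multiplication by the OTF in frequency space.
    otf = np.fft.fftn(np.fft.ifftshift(psf))
    observed = np.real(np.fft.ifftn(np.fft.fftn(reality) * otf))

    # With no noise, dividing the OTF back out undoes the blur, except at
    # frequencies the optics barely passed at all.
    inverse = np.conj(otf) / (np.abs(otf) ** 2 + 1e-12)
    estimate = np.real(np.fft.ifftn(np.fft.fftn(observed) * inverse))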

The system is limited by how much signal there is in the initial image
relative to the amount of noise. So doing all that one can to minimize the
many types of noise or errors in the image improves the performance of the
system. The other problem is that this method assumes that the PSF is
constant throughout the specimen, or at least that the changes are
small. Confocal assumes the same. The only approach that I know of that
doesn't is blind deconvolution, where the OTF is calculated from within the
volume of the data set. I must say that this still has me mystified.
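
To see why the signal-to-noise ratio is the limit, here is a
one-dimensional toy version (again numpy, with a Gaussian standing in for
the PSF; the constant k is only illustrative). k plays the role of the
noise-to-signal ratio: the noisier the data, the larger k must be, and the
less of the fine detail comes back:

    import numpy as np

    n = 512
    x = np.arange(n)
    rng = np.random.default_rng(1)

    # A "specimen" made of two narrow bright structures.
    truth = (np.abs(x - 200) < 5).astype(float) + (np.abs(x - 300) < 2).astype(float)

    # Gaussian blur standing in for the PSF, and its OTF.
    psf = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
    psf /= psf.sum()
    otf = np.fft.fft(np.fft.ifftshift(psf))

    # Blur the specimen and add detector noise.
    blurred = np.real(np.fft.ifft(np.fft.fft(truth) * otf))
    noisy = blurred + rng.normal(scale=0.01, size=n)

    # Wiener-style deconvolution: k is an illustrative noise-to-signal
    # constant, not a measured value. Larger noise forces a larger k,
    # which suppresses the high frequencies you would like to restore.
    k = 1e-3
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + k)
    estimate = np.real(np.fft.ifft(np.fft.fft(noisy) * wiener))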

I hope that this was helpful. Please, let's not start another war here
(I'm so tired of these). I know that this is terribly simplified. It seems
that the initial participant needed some basics. For further information,
the papers that Robert Cork refers to are good. It would also be
worthwhile to look at:

Digital Image Processing
Kenneth Castleman
Prentice-Hall, 1979

and

Introduction to Fourier Optics
Joseph W. Goodman
I'm not sure of the publisher
1968

I hope that this helps.
________________________________________________________________________________


Paul Goodwin
Image Analysis Lab
FHCRC, Seattle, WA

On Wed, 29 Jan 1997, Cork, Robert, John wrote:

> At 08:59 AM 1/29/97 -0500, you wrote:
> >Hi
> >
> >I was following a discussion in the Confocal archives around 3/95
> regarding deconvolution of images.  It was said at one point that to
> accurately deconvolve an image, the complete system must be known.  Does
> anyone know what exactly must be "known", how one goes about getting that
> information, and is there software that can take that information to deblur
> or deconvolve an image?
> >
> Hello,
> There are a number of variations for deconvolution algorithms, but they all
> basically need information about the objective (N.A., working distance) and
> the microscope setup (refractive index of the medium, wavelength of light
> used, etc.). All of these factors affect a function called the point spread
> function, which describes how the image obtained from a single point source
> will appear. There are various ways of calculating a PSF, either
> theoretically or empirically using fluorescent beads. For more details see
> some of the papers by Agard (e.g., Meth. Cell Biol. 30, 353-377,
> or Ann. Rev. Biophys. Bioeng. 13, 191-219).
> I also have a paper that describes some of these methods and software to
> calculate the PSF (Meth. Cell Biol. 40, 221-240).
> Hope this helps
> Dr. John Cork,
> Calcium Imaging Facility
> Department of Anatomy, LSUMC,
> 1901 Perdido St., New Orleans
> LA 70112
>
> e-mail: [log in to unmask]
> tel: (504) 568 7059         FAX: (504) 568 4392
>
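
For a feel for the numbers behind the parameters John lists, the standard
Rayleigh lateral estimate and the conventional axial estimate follow
directly from the objective's NA, the emission wavelength, and the
refractive index of the medium (the values below are assumed purely for
illustration):

    # Illustrative values only; plug in your own objective and dye.
    wavelength_nm = 520.0   # assumed emission wavelength
    na = 1.4                # numerical aperture of the objective
    n_medium = 1.515        # refractive index of the immersion medium

    lateral_nm = 0.61 * wavelength_nm / na                 # Rayleigh lateral limit
    axial_nm = 2.0 * n_medium * wavelength_nm / na ** 2    # conventional axial estimate
    print(f"lateral ~{lateral_nm:.0f} nm, axial ~{axial_nm:.0f} nm")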
