CONFOCALMICROSCOPY Archives

May 1996

CONFOCALMICROSCOPY@LISTS.UMN.EDU

From: Paul Goodwin <[log in to unmask]>
Reply-To: Confocal Microscopy List <[log in to unmask]>
Date: Fri, 31 May 1996 13:23:02 -0700
Content-Type: text/plain
Yes, frame averaging helps. S/N is proportional to the square root of the
number of frames averaged, if the noise is random. The problem is that the
extra exposure then takes a real toll on sample viability and
photobleaching. If you can afford the time and the photoeffects, it's a
good idea.
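
A minimal Python/NumPy sketch of that square-root relationship, assuming
purely random (Gaussian) noise on a hypothetical uniform field; the signal
and noise levels are illustrative, not from any instrument:

    import numpy as np

    rng = np.random.default_rng(0)
    signal = 100.0   # hypothetical true pixel intensity
    sigma = 20.0     # hypothetical per-frame noise (standard deviation)

    for n in (1, 4, 16, 64):
        frames = signal + rng.normal(0.0, sigma, size=(n, 256, 256))
        avg = frames.mean(axis=0)            # the frame-averaged image
        snr = avg.mean() / avg.std()         # empirical S/N of the average
        expected = (signal / sigma) * np.sqrt(n)
        print(f"{n:2d} frames: S/N ~ {snr:5.1f} (expected ~ {expected:5.1f})")

Quadrupling the number of frames only doubles the S/N, which is why the
light dose (and with it the bleaching) grows much faster than the image
quality does.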
 
________________________________________________________________________________
 
 
Paul Goodwin
Image Analysis Lab
FHCRC, Seattle, WA
 
On Fri, 31 May 1996, Paulette Brunner wrote:
 
> I would be interested to know how you are collecting the confocal
> images... someone else told me confocal images had too much shot noise
> to deconvolve, but it turned out they had only been averaging over three
> frames.
>
> Paulette Brunner
>
>
> On Fri, 31 May 1996, Paul Goodwin wrote:
>
> > The problem with deconvolving confocal images is that the
> > signal-to-noise is generally poor, so the noise, which can look like
> > small points to the deconvolution algorithm, tends to get enhanced as
> > well. There are methods that have reported good success with this (the
> > blind deconvolution people in Holland), and we are working on ways of
> > masking the noise frequencies out of our images and OTFs. So far the
> > image is improved, i.e., there is resolution extension, but I would be
> > happier if the noise were less.
> >
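
For concreteness, the usual way to keep weak, noise-dominated frequencies
from being amplified is some form of regularized inverse filter. The
Wiener-style Python/NumPy sketch below shows the general idea only; the
function `wiener_deconvolve` and the constant `k` are illustrative and are
not the masking scheme described above:

    import numpy as np

    def wiener_deconvolve(image, psf, k=0.01):
        """Wiener-style deconvolution of `image` by a centered `psf` of
        the same shape; `k` stands in for the noise-to-signal power."""
        H = np.fft.fft2(np.fft.ifftshift(psf))   # OTF of the system
        G = np.fft.fft2(image)
        # Where |H| is small (frequencies dominated by noise), the k term
        # damps the filter instead of letting 1/H amplify the noise.
        F = G * np.conj(H) / (np.abs(H) ** 2 + k)
        return np.real(np.fft.ifft2(F))

Larger k suppresses more noise at the cost of resolution extension; as k
goes to zero this degenerates into the naive inverse filter, which blows
up exactly where the OTF is weak.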
> > ________________________________________________________________________________
> >
> >
> > Paul Goodwin
> > Image Analysis Lab
> > FHCRC, Seattle, WA
> >
> > On Fri, 31 May 1996, David Knecht wrote:
> >
> > > I would like to add a question to this fascinating discussion.  What
> > > about the deconvolution of confocal images?  Is it superior to either
> > > technique on its own?  Dave
> > >
> > > >The problem is that if you take a small bead in a blank field, image
> > > >through it with a good wide-field imaging system, and look at the
> > > >total light (integrated or average intensity for the field) and the
> > > >peak intensity for the field as you focus through the bead, the peak
> > > >intensity climbs, as you would expect, as you maximally focus on the
> > > >bead. What is not realized by most people is that the mean intensity
> > > >stays virtually the same throughout the focus series. That bead
> > > >contributed a lot of its light to the adjacent fields (that's why you
> > > >use a confocal in the first place; if the contribution to the
> > > >adjacent fields were small, we wouldn't have to worry about optical
> > > >sectioning). In deconvolution, a calculation is made, using one of a
> > > >few methods, of what points are creating the overall intensities in
> > > >the image set, and it attempts to create an image that optimally
> > > >represents the "true" source of the intensities in the original data
> > > >set. In most cases that represent the images that most biologists
> > > >use, it is pretty straightforward to do this. Tests of the ability of
> > > >the DeltaVision system from API that we use show us that this system
> > > >does a superior job of creating an image stack whose intensities can
> > > >be validated by an independent method. In our case, this is measuring
> > > >the intensities of beads in a 3D matrix and testing to see how well
> > > >we can estimate the integrated bead intensity as compared to flow
> > > >cytometry. The DV system does a very good job at this particular
> > > >test. Your mileage may vary.....
> > > >
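
A toy numerical version of the bead experiment, using a normalized
Gaussian blur as a stand-in for defocus (purely illustrative; a real
defocused PSF is not Gaussian):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    bead = np.zeros((256, 256))
    bead[128, 128] = 1000.0         # hypothetical point-like bead, in focus

    for defocus in (0, 2, 8, 32):   # larger blur ~ further from focus
        plane = gaussian_filter(bead, sigma=defocus)
        print(f"defocus {defocus:2d}: peak = {plane.max():8.3f}, "
              f"total = {plane.sum():7.1f}")

The peak falls by orders of magnitude away from focus, but the total stays
at ~1000 in every plane: the light is redistributed, not lost, which is
exactly what lets deconvolution reassign it to its source.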
> > > >The net effect is that the inclusion of the out-of-focus light gives
> > > >us a better estimate of the true intensity than cutting off the
> > > >out-of-focus light, either with a confocal or by nearest-neighbor
> > > >algorithms. Likewise, the assumption of Sedat and Agard, and of Fay
> > > >and others, is that there is resolution information in the 3-D Airy
> > > >disk that can be garnered through resolution extension in
> > > >deconvolution but that is lost at the confocal pinhole.
> > > >
> > > >Flame on!
> > > >
> > > >____________________________________________________________________________
> > > >
> > > >
> > > >Paul Goodwin
> > > >Image Analysis Lab
> > > >FHCRC, Seattle, WA
> > > >
> > > >On Fri, 31 May 1996, Guy Cox wrote:
> > > >
> > > >> Jennifer Kramer wrote:
> > > >>
> > > >> >One important point here not mentioned is the presence of the pinhole or
> > > >> >slit in the back focal plane of the objective lens.
> > > >> >This can exclude up to 99% of all emitted light, requiring use of high
> > > >> >probe concentrations (which can be potentially toxic).
> > > >>
> > > >> This sort of statement always makes me see red!  The only light the
> > > >> confocal pinhole excludes is the *out of focus* light!!  The Airy
> > > >> disk at the point of focus more or less all goes into the final
> > > >> image (depending on the pinhole setting).
> > > >>
> > > >> The reason for the better light budget of deconvolution systems has
> > > >> nothing to do with the pinhole and everything to do with serial vs
> > > >> parallel collection of images.  In terms of light budget, a one
> > > >> second scan of a confocal image (collecting 256K pixels) is equivalent
> > > >> to a 4 microsecond exposure of a widefield image with the same
> > > >> light intensity.  Looking at it the other way round, a single
> > > >> widefield video frame (1/25 sec) is equivalent to scanning a
> > > >> confocal image for 10,000 seconds!!
> > > >>
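
Guy's numbers, worked through explicitly (assuming a 512 x 512 =
256K-pixel frame, which is what his figures imply):

    pixels = 512 * 512              # ~256K pixels per frame

    dwell = 1.0 / pixels            # per-pixel dwell in a 1-second scan
    print(f"dwell per pixel: {dwell * 1e6:.1f} microseconds")     # ~3.8 us

    video_exposure = 1.0 / 25       # one widefield video frame, in seconds
    equivalent = video_exposure * pixels
    print(f"equivalent confocal scan: {equivalent:,.0f} seconds") # ~10,486 s

The ~3.8 microseconds per pixel rounds to the "4 microsecond" figure
above, and 262,144 / 25 ~ 10,500 s is the origin of the 10,000-second
comparison.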
> > > >> Of course in confocal we try to overcome this by using brighter
> > > >> light (a laser), but because fluorochromes saturate, it isn't a
> > > >> total solution.  And if we crank up the light beyond the saturation
> > > >> point we get severe bleaching, as Jennifer points out.
> > > >>
> > > >> So, yes, deconvolution does have a much better light budget, but
> > > >> NOT because "we are throwing away 99% of the light" in confocal.
> > > >> On the other side of the coin, confocal is (to a very rough
> > > >> approximation) object independent in its 3D imaging capability.
> > > >> Deconvolution varies from quite good (isolated point objects)
> > > >> to completely ineffective (very extended objects).  As with
> > > >> everything else in this game, we have to trade off one desirable
> > > >> property to do better on another!
> > > >>
> > > >>                                         Guy Cox
> > > >>
> > >
> > > Dr. David Knecht
> > > Department of Molecular and Cell Biology
> > > University of Connecticut
> > > U-125
> > > Storrs, CT 06269
> > > [log in to unmask]
> > >
> >
>
