CONFOCALMICROSCOPY Archives

February 2013

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Subject: Re: median filtering confocal microscope data at the instrument
Reply-To: Confocal Microscopy List <[log in to unmask]>
Date: Thu, 28 Feb 2013 11:19:26 +0000
Content-Type: text/plain (208 lines)
*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Most digital filters (though not median filters) are convolutions, and many deconvolution systems are filters (Wiener filters, for example).  Confocal and 4-pi images can be effectively deconvolved by an inverse filter.  There is no basis for drawing a line here; it is a continuum.  Nor can I support your idea that filters should only be applied on a 2D basis: a long time ago I showed that filters should be applied in as many dimensions as the dataset possesses.  G.C. Cox and C. Sheppard (1999) Appropriate Image Processing for Confocal Microscopy.  In: P.C. Cheng, P.P. Hwang, J.L. Wu, G. Wang & H. Kim (eds), Focus on Multidimensional Microscopy, Vol. 2, pp. 42-54.  World Scientific Publishing, Singapore, New Jersey, London & Hong Kong.  ISBN 981-02-3992-0.
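
To make the point concrete, here is a minimal numpy sketch of Wiener/inverse filtering in Fourier space (the image, the PSF and the noise constant K are stand-ins, not a recommendation for any particular instrument):

    import numpy as np

    def wiener_deconvolve(img, psf, K=0.01):
        # img and psf are same-shape 2D arrays; psf is centred in its array.
        # K is an assumed noise-to-signal power ratio, picked by hand here.
        H = np.fft.fft2(np.fft.ifftshift(psf))    # transfer function of the optics
        G = np.fft.fft2(img)                      # spectrum of the recorded image
        W = np.conj(H) / (np.abs(H) ** 2 + K)     # regularised inverse of H
        return np.real(np.fft.ifft2(W * G))       # "deconvolved" (i.e. filtered) image

Whether that counts as a "filter" or a "deconvolution" depends only on where W came from, which is exactly the point.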

Applying filters plane by plane gave hugely worse results.  That paper also showed that sampling slightly above Nyquist (3 pixels per resel) and then median filtering with a minimal (circular, or face-contact-only) kernel reduced noise very effectively with no measurable impact on resolution.  So it cannot be throwing away information.  Isn't that what we need?
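
For anyone who wants to try it, a minimal sketch of that kind of kernel using scipy (the stack here is a random stand-in for a confocal volume sampled above Nyquist):

    import numpy as np
    from scipy import ndimage

    # Stand-in for a confocal z-stack sampled at ~3 pixels per resel.
    stack = np.random.poisson(5.0, size=(32, 256, 256)).astype(np.float32)

    # Face-contact-only (6-connected) footprint: the centre voxel plus the
    # six voxels that share a face with it -- the minimal 3D median kernel.
    footprint = ndimage.generate_binary_structure(rank=3, connectivity=1)

    # Applied in all three dimensions of the stack at once, not plane by plane.
    filtered = ndimage.median_filter(stack, footprint=footprint)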

                                                                  Guy

>Jim,
>
>      OK, we are probably going to come to blows over this.  I just 
>trust the buffer of the Pacific Ocean between us.  The term 'filter'
>applied to digital operations is a bit unfortunate.  An optical filter 
>removes light according to its specification.  A digital, so called, 
>filter does nothing of the sort.  It processes pixels according to the 
>values of other pixels.  Deconvolution does EXACTLY the same thing - 
>just with a more sophisticated algorithm.
>Fundamentally there is no difference.  I really wish the term 'filter' 
>had never been used in the digital world.
>
>                           Guy



Well, not quite blows. And I agree that "filtering" and "deconvolution" do have some similarities.

But I would like to point out the following:

The rationale for deconvolution is that, to the extent that one can mathematically model the blurring effect of an imaging system as a convolution, one should be able to reduce that blurring by deconvolving the raw data. The one assumption is that the array of point emitters in the specimen is blurred by the same PSF to produce the blurred data that we detect.

In the case of deconvolving 3D microscope data, the main limitations on this process are image noise (Poisson, as well as others) and the possibility that the PSF is not perfectly known and may not remain constant over the sampled volume.

Therefore, the assumptions that are put into any acceptable spatial deconvolution system should be traceable to verifiable measurements of, for instance, the optical and sampling parameters being used. 
Deconvolving in time should be based on knowledge of how the fluorescent signal is expected to change with time: the simplest version being that it doesn't change during the acquisition period.
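
As a concrete (and very much simplified) sketch of that rationale, assuming scikit-image is available and using a made-up Gaussian PSF in place of a measured one:

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.restoration import richardson_lucy

    rng = np.random.default_rng(0)

    # A made-up "object": a few point emitters on a dark background.
    obj = np.zeros((128, 128))
    obj[rng.integers(0, 128, 20), rng.integers(0, 128, 20)] = 100.0

    # Stand-in Gaussian PSF; in practice it should come from verifiable
    # measurements of the optics, as argued above.
    y, x = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
    psf /= psf.sum()

    blurred = fftconvolve(obj, psf, mode="same")                  # the convolution model
    noisy = rng.poisson(np.clip(blurred, 0, None)).astype(float)  # Poisson (shot) noise

    # Iterative (Richardson-Lucy) deconvolution of the noisy data.
    estimate = richardson_lucy(noisy, psf, 30, clip=False)

If the assumed PSF is wrong, of course, this puts the wrong information back in, which is exactly the caveat above about noise and an imperfectly known PSF.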

By contrast, the strongest support that I have heard for using, for instance, a particular median filter is that it makes the image somehow look better by suppressing occasional bright pixels. As far as I know, one doesn't even have to input any sampling/PSF data, although the effects of such filters obviously vary with spatial frequency. Therefore, we don't even know the size of the bright pixel in real space.

Though the images that result from some filters may resemble those produced by some deconvolution procedures, I feel that the former inspire less confidence than the latter.

As a compromise I have suggested in the past that, as the process by which the microscope "convolves" structural data is a 3D process, we restrict the use of the term deconvolution to 3D data sets while procedures that are applied to only a single plane of data (at any one time) be called filters.

Cheers,

Jim Pawley

>-----Original Message-----
>From: Confocal Microscopy List
>[mailto:[log in to unmask]] On Behalf Of James Pawley
>Sent: Wednesday, 27 February 2013 3:45 AM
>To: [log in to unmask]
>Subject: Re: median filtering confocal microscope data at the 
>instrument
>
>
>Hi all,
>
>It seems that we are discussing the best ways of eliminating the 
>effects of what are sometimes called "single-pixel" noise events.
>Although it is fair to ask "What other kind of noise is there?", the 
>term is often used to refer to pixels with recorded intensity values 
>that are "unreasonably large" and seem to have nothing to do with the 
>presence of dye molecules at a certain location in the specimen. Such 
>values can come from a number of possible sources: cosmic rays pass 
>through the photocathode every few seconds; alpha particles from 
>radioactive elements in the PMT somewhat more often. If the PMT is used 
>at high gain, as is often the case when looking at living specimens, where 
>signal levels must be kept low, single-photoelectron dark counts may 
>produce fast pulses from the PMT or EM-CCD that approach the size of 
>those representing signal in a "stained" pixel.
>As these signals seem obvious artifacts when viewed by eye, it would be 
>convenient if they could be removed automatically.
>
>By definition, filters take things out. That is both their aim and 
>their curse. One can argue for hours about whether or not the resulting 
>data is better. In addition, filters are very fast. However, as 
>computers get ever faster and cheaper one would expect that this 
>advantage would become less important.
>
>In contrast to filters, deconvolution puts things in. Traditionally, it 
>reimposes the limits known to have been placed on the data by the 
>optics used to obtain it. Single-pixel events are "impossible"
>because, assuming that Nyquist has been satisfied, the smallest "real" 
>feature in the data should be at least 4 pixels wide (or 12-16 pixels 
>in area, 50-100 voxels in volume), not one pixel.
>
>Because the spatial frequency of a noise pulse singularity is at least 
>4x higher than that of the highest spatial frequency that the optical 
>system is capable of having transmitted, the offending value can be 
>tagged and then either replaced or averaged down. Indeed, some EM-CCDs 
>can now be set to detect and remove single-pixel noise based on this 
>recognition.
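
A minimal sketch of that tag-and-replace idea (the 3x3 neighbourhood and the 5-sigma threshold are arbitrary choices for illustration, not anything a particular camera vendor actually uses):

    import numpy as np
    from scipy import ndimage

    def despike(img, nsig=5.0):
        # Flag pixels that stand impossibly far above the median of their
        # own 3x3 neighbourhood, then replace only those pixels.
        med = ndimage.median_filter(img, size=3)
        resid = img - med
        sigma = 1.4826 * np.median(np.abs(resid))   # robust sigma from the MAD
        spikes = resid > nsig * sigma               # tag the offending values
        out = img.astype(float).copy()
        out[spikes] = med[spikes]                   # everything else is untouched
        return out
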
>
>More generally however, the most reliable and robust method of 
>obtaining the most accurate pixel-intensity information from a series 
>of sequentially-obtained data sets is to deconvolve them in time as 
>well as space. This just means that we put into the process not just 
>the PSF (which sets the limits on possible spatial frequencies) but 
>also our knowledge that real changes in specimen brightness can only 
>occur so fast and not any faster.
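
A sketch of that temporal idea, assuming a stack of repeated frames of the same field (shape: repeats x Y x X) and, again, an arbitrary rejection threshold:

    import numpy as np

    def temporal_clean_mean(frames, nsig=5.0):
        # frames: (n_repeats, ny, nx) scans of the same field.
        med = np.median(frames, axis=0)                   # per-pixel temporal median
        mad = np.median(np.abs(frames - med), axis=0)     # per-pixel robust scatter
        sigma = 1.4826 * mad + 1e-12                      # avoid divide-by-zero
        ok = np.abs(frames - med) <= nsig * sigma         # plausible samples only
        masked = np.where(ok, frames, np.nan)
        return np.nanmean(masked, axis=0)                 # mean of what survives
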
>
>George has postulated a series of intensity measurements from a single 
>pixel. Depending on the time delay between the measurements in this 
>series, we may (or may not) be justified in assuming that no real 
>change in the specimen could justify a sudden, 100x intensity change 
>that is only one scan-time in duration. Again, this allows us to tag 
>outliers and then dispose of them either by replacement or averaging.
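
With George's numbers, the same recipe in a few lines (the 5-sigma cut is again arbitrary; the point is only that the result does not depend on the order of acquisition):

    import numpy as np

    samples = np.array([40.0, 38.0, 42.0, 3800.0, 37.0])

    med = np.median(samples)                            # 40.0
    sigma = 1.4826 * np.median(np.abs(samples - med))   # ~3.0, robust scatter
    keep = np.abs(samples - med) <= 5.0 * sigma         # the 3800 fails this test

    print(samples.mean())         # 791.4 -- the naive mean
    print(samples[keep].mean())   # 39.25 -- mean of the four plausible values
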
>
>So much for what can now be conveniently done using computers.
>
>Let us not forget that every effort should also be made to reduce the 
>number of single-pixel anomalies present in the raw data to begin with. 
>With the PMT, this means keeping the photocathode cool and small and 
>monitoring its no-real-signal output over time (i.e., dark count rate). 
>Store a reference image of a single scan with all the lasers and room 
>lights turned off, and look for changes in its general appearance as 
>the weeks pass. More quantitative measures are also wise. (And while 
>you are at it, compare this zero-light result with one obtained when 
>the level of room illumination present is similar to that which you use 
>when actually collecting data. Stray light is often a more serious 
>problem than we expect.)
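
A sketch of the sort of bookkeeping meant here, assuming the zero-light scans are saved as plain numpy arrays (the file names and the drift tolerances are invented for illustration):

    import numpy as np

    def dark_frame_summary(frame):
        # A few numbers worth logging for every no-light reference scan.
        return {
            "mean":  float(frame.mean()),
            "p99":   float(np.percentile(frame, 99)),  # tail of the dark counts
            "n_hot": int((frame > frame.mean() + 10 * frame.std()).sum()),
        }

    reference = np.load("dark_reference_2013-02.npy")  # hypothetical stored scan
    today = np.load("dark_scan_today.npy")             # hypothetical fresh scan

    ref, now = dark_frame_summary(reference), dark_frame_summary(today)
    if now["mean"] > 1.2 * ref["mean"] or now["n_hot"] > 2 * ref["n_hot"]:
        print("Dark signal has drifted -- check cooling, PMT gain and stray light.")
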
>
>When employing an EM-CCD, a similar no-signal image can be used to 
>assess changes in dark-count and coupling-induced charge over time.
>These may slowly drift up over time (months) and are always very 
>sensitive to chip temperature.
>
>I am less familiar with the anomalous, single-pixel behavior of sCMOS 
>cameras, but I would guess that, with the exception of hot pixels, they 
>are less common simply because charge amplification is not involved and 
>events associated with the emergence of a single, errant photoelectron 
>cannot be seen above the general read-noise level. As hot pixels tend 
>to recur at the same location in the image, they can be 
>automatically identified and averaged out using data from their 4 or 8 
>nearest neighbours.
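
A sketch of that last step, assuming a small stack of dark frames is available to build the hot-pixel map (thresholds and kernel are illustrative, not prescriptive):

    import numpy as np
    from scipy import ndimage

    def hot_pixel_map(dark_frames, nsig=8.0):
        # Hot pixels recur at the same location, so they stand out in the
        # per-pixel mean of a stack of no-light frames.
        mean_dark = dark_frames.mean(axis=0)
        med = np.median(mean_dark)
        sigma = 1.4826 * np.median(np.abs(mean_dark - med)) + 1e-12
        return mean_dark > med + nsig * sigma

    def fix_hot_pixels(img, hot):
        # Replace each flagged pixel with the mean of its 8 nearest neighbours.
        kernel = np.array([[1.0, 1.0, 1.0],
                           [1.0, 0.0, 1.0],
                           [1.0, 1.0, 1.0]]) / 8.0
        neighbour_mean = ndimage.convolve(img.astype(float), kernel, mode="mirror")
        out = img.astype(float).copy()
        out[hot] = neighbour_mean[hot]
        return out
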
>
>In all these cases the most efficient way of removing single-pixel 
>anomalies from your final data is to take all the precautions needed to 
>prevent them from occurring.
>
>Sorry for being so long-winded.
>
>Jim Pawley
>
>
>>
>>Hi George,
>>
>>On Mon, 25 Feb 2013, George McNamara wrote:
>>
>>>   So if a single pixel scanned five times has values of 40, 38, 42, 3800,
>>>   37 (not necessarily acquired in that order), you would prefer the
>>>   arithmetic mean 791.4 (not that any of the vendors can give you the 0.4,
>>>   and for anyone who is a fan of Kalman, the Kalman value would depend on
>>>   which order the five were acquired in), rather than 40 (which is the
>>>   "digital offset" I usually use on the Zeiss LSM710 I manage when
>>>   operating in 12-bit mode).
>>
>>Please do not pretend that I said that. When I talked about "linear"
>>filters, I did not imply a simple averaging.
>>
>>>   Personally, I would like to see the point-scanning confocal microscope
>>>   (and EMCCD software) vendors implement median filtering, and even more
>>>   methods appropriate to PMT and similarly noisy data, to provide the best
>>>   possible data to my users and me (and, as of April, to my colleagues and
>>>   me at MDACC, Houston).
>>
>>What data would be best recorded depends highly on the application. In 
>>the general case, recording the values 40, 38, 42, 3800 and 37 in your 
>>above example would be better than recording just "40". But recording 
>>more than one value per pixel is often not practical.
>>
>>To reiterate: The Median filter *can* be the optimal filter. You 
>>should just not go around and tell everybody that it *is* the optimal 
>>filter, because it certainly is not. And in particular when you want 
>>to quantify your data after acquisition, it is inappropriate to use 
>>the Median filter
>>(remember: a filter should not be applied just because the processed 
>>image "looks good", but it should only be applied if it helps the analysis).
>>
>>That is all I said. (I certainly did not claim that you should always 
>>take a simple arithmetic mean. I am not that stupid.)
>>
>>Ciao,
>>Johannes
>
>
>--
>James and Christine Pawley, 5446 Burley Place (PO Box 2348), Sechelt, 
>BC, Canada, V0N3A0, Phone 604-885-0840, email <[log in to unmask]> NEW! 
>NEW! AND DIFFERENT Cell (when I remember to turn it on!) 1-604-989-6146


--
James and Christine Pawley, 5446 Burley Place (PO Box 2348), Sechelt, BC, Canada, V0N3A0, Phone 604-885-0840, email <[log in to unmask]> NEW! NEW! AND DIFFERENT Cell (when I remember to turn it on!) 1-604-989-6146
