CONFOCALMICROSCOPY Archives

January 2023

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Subject: Re: What do people consider state-of-the-art for 3D Deconvolution?
From: Brian Northan <[log in to unmask]>
Reply-To: Confocal Microscopy List <[log in to unmask]>
Date: Tue, 17 Jan 2023 12:16:18 -0500
Content-Type: text/plain
*****
To join or leave the confocal microscopy listserv or to change your email address, go to:
https://lists.umn.edu/cgi-bin/wa?SUBED1=confocalmicroscopy&A=1
Post images on http://www.imgur.com and include the link in your posting.
*****

Hi Lutz,

Thanks for your candid reply. Based on the marketing I saw, I had
assumed Thunder was a spatially varying 3D deconvolution done in
blocks. It sounds like it is much simpler.

At this point I've definitely learned to critically evaluate any claim
I hear about deconvolution. As you mention, assessing what is
state-of-the-art is difficult; however, some critical thinking and
(relatively) simple tests (simulations, bead images, visual inspection
of biological images) can be used to reject claims that an approach is
state-of-the-art.
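
For example, even a crude bead test can be scripted in a few lines. A
minimal sketch in Python (the half-maximum width estimate below is
deliberately simple and is my own illustration, not a standard tool):

  import numpy as np

  def fwhm_1d(profile):
      """Crude full width at half maximum of a line profile taken
      through a bead image, in pixels."""
      profile = np.asarray(profile, dtype=float)
      profile = profile - profile.min()
      half = profile.max() / 2.0
      above = np.where(profile >= half)[0]
      return above[-1] - above[0]

Comparing fwhm_1d on the same bead before and after deconvolution gives
a quick sanity check on claimed resolution improvements.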

Brian


On Sun, Jan 15, 2023 at 10:16 AM <[log in to unmask]> wrote:

>
> Brian,
> The Leica Thunder is not state-of-the-art deconvolution! In fact,
> calling it that feels like an insult to anyone seriously developing
> algorithms and dealing with the subject matter. Thunder is nothing
> more (rather, less) than a nearest-neighbor subtraction (e.g. unsharp
> masking), as first published by Castleman around 1977. Obviously,
> there are improvements over that first publication, as hinted in
> Leica's technical note, but in principle it is not based on a 3D
> forward model of image generation but rather on a crude additive 2D
> one.
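>
> To make the idea concrete, a minimal sketch of nearest-neighbor
> subtraction in Python (purely illustrative, not Leica's actual
> implementation; the Gaussian defocus kernel and the subtraction
> weight are assumptions):
>
>   import numpy as np
>   from scipy.ndimage import gaussian_filter
>
>   def nearest_neighbor_subtract(stack, sigma=2.0, weight=0.45):
>       """stack: 3D array (z, y, x); returns a haze-reduced stack."""
>       stack = np.asarray(stack, dtype=float)
>       out = np.empty_like(stack)
>       for z in range(stack.shape[0]):
>           lo = stack[max(z - 1, 0)]                   # plane below
>           hi = stack[min(z + 1, stack.shape[0] - 1)]  # plane above
>           # blur the neighboring planes with an approximate defocus
>           # kernel, then subtract that haze from the in-focus plane
>           haze = gaussian_filter(0.5 * (lo + hi), sigma)
>           out[z] = np.clip(stack[z] - weight * haze, 0.0, None)
>       return out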
>
> Unfortunately, among users with little background in what happens
> behind the scenes, the "pretty picture" aspect controls judgement,
> which seems consistent with Leica's successful business plan. I do not
> want to make accusations, but, for example, claims about the
> quantitativeness of the result with respect to the sample can easily
> be faked by some form of adaptive scaling, while Richardson-Lucy is
> inherently quantitative (the number of photons in the input equals the
> number in the result).
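>
> That conservation property is easy to verify numerically. A toy 1D
> check (circular FFT convolution with a normalized PSF; everything here
> is illustrative):
>
>   import numpy as np
>
>   rng = np.random.default_rng(0)
>   image = rng.poisson(100, size=256).astype(float)  # toy "measurement"
>   psf = np.exp(-0.5 * (np.arange(256) - 128.0) ** 2 / 3.0 ** 2)
>   psf = np.roll(psf / psf.sum(), -128)          # center PSF at index 0
>
>   H = np.fft.rfft(psf)
>   conv = lambda x, F: np.fft.irfft(np.fft.rfft(x) * F, n=x.size)
>
>   est = image.copy()
>   for _ in range(50):
>       ratio = image / np.maximum(conv(est, H), 1e-12)
>       est *= conv(ratio, np.conj(H))  # adjoint via conjugate spectrum
>
>   print(image.sum(), est.sum())  # totals agree to numerical precision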
>
> To the main question in this thread: there are just too many
> algorithmic approaches today, all addressing different aspects of
> optimizing and solving this inverse problem, so no single algorithm
> exists that can address all of them. The result will always be no more
> than an approximation of the sample. The metrics used to compare
> results are often flawed (e.g. L1/L2 norms, structural similarity
> indices, etc.), making quality assessment difficult. Of course, one
> can always use a simulated sample, which could be made to represent
> biological structures in various 3D orientations. The often-quoted
> practice of using an observation from a high-NA objective as ground
> truth and comparing the deconvolution result from a low-NA lens
> against it does not seem like a good idea in a formal sense, since
> this "ground truth" already contains diffraction-limited,
> anisotropically resolved data. This comparison method often shows
> "pleasing" results in recently published AI-supported deconvolutions.
> Again, one should critically question why this method was used instead
> of the classical simulation comparison. Maybe it just gave less
> attractive results? We only know if we replicate the method ourselves
> and test it under the conditions in which we intend to use it. These
> vary case by case...
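>
> A bare-bones version of that classical simulation comparison in Python
> (the bead-like phantom, noise level, and RMSE metric are illustrative
> choices; RMSE is itself one of the flawed metrics mentioned above):
>
>   import numpy as np
>   from scipy.ndimage import gaussian_filter
>   from skimage.restoration import richardson_lucy
>
>   rng = np.random.default_rng(1)
>   truth = np.zeros((128, 128))
>   truth[rng.integers(16, 112, 40), rng.integers(16, 112, 40)] = 1000.0
>
>   psf = np.zeros((25, 25))
>   psf[12, 12] = 1.0
>   psf = gaussian_filter(psf, 2.5)
>   psf /= psf.sum()
>
>   noisy = rng.poisson(gaussian_filter(truth, 2.5)).astype(float)
>   scale = noisy.max()
>   restored = richardson_lucy(noisy / scale, psf, num_iter=100) * scale
>
>   rmse = np.sqrt(np.mean((restored - truth) ** 2))
>   print("RMSE vs ground truth:", rmse)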
>
>
> Thanks
> Lutz
>
>
> -----Original Message-----
> From: Confocal Microscopy List <[log in to unmask]> On
> Behalf Of Brian Northan
> Sent: Tuesday, December 20, 2022 8:13 AM
> To: [log in to unmask]
> Subject: What do people consider state-of-the-art for 3D
> Deconvolution?
>
>
> Hello list
>
> I recently looked at this paper on 'Richardson Lucy Network'  (RLN)  (
> https://www.nature.com/articles/s41592-022-01652-7?utm_source=twitter&utm_medium=social&utm_campaign=nmeth
> ).
>
> They compare the performance of classical Richardson Lucy
> Deconvolution (RLD), Leica Thunder, and their Richardson Lucy Network
> (RLN) on several image sets.
>
> I have not had time to read and understand the entire paper yet.
> However, I was very interested in figure 4b, which shows a comparison
> of RLD, Thunder, and RLN on a widefield image. Any thoughts on this
> figure?
>
> It seems to me that the classical Richardson Lucy result, and perhaps
> the Thunder result, may not be optimal. The classical Richardson Lucy
> result appeared to have too much remaining axial blur compared to my
> expectation of a result generated with a well-matched PSF.
>
> This brings up an important question: in order to judge a new method,
> we must compare it to the previous state-of-the-art method. But what
> is state-of-the-art for 3D Deconvolution?
>
> Richardson Lucy results can be highly variable depending on how the
> PSF is measured/calculated and pre-processed, how the edges are
> handled, what type of regularization is used, whether iterations are
> accelerated, and what stopping criteria are applied. I assume the same
> is true of Thunder (though I am not familiar with Thunder's important
> parameters).
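>
> As one concrete example of how much these choices matter, edge
> handling alone can be addressed by padding before deconvolving. A
> minimal sketch (the reflect padding mode and the pad width are
> assumptions; scikit-image's richardson_lucy does the actual work):
>
>   import numpy as np
>   from skimage.restoration import richardson_lucy
>
>   def rl_with_padding(image, psf, num_iter=50, pad=32):
>       # reflect-pad so wrap-around artifacts land in the padding
>       # rather than in the data, then crop back to the original size
>       padded = np.pad(image, pad, mode="reflect")
>       restored = richardson_lucy(padded, psf, num_iter=num_iter,
>                                  clip=False)
>       return restored[tuple(slice(pad, -pad) for _ in image.shape)]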
>
> So comparisons between deconvolution techniques are very tricky to
> evaluate. It's hard to interpret this comparison without knowing
> whether all approaches were run (reasonably) close to optimally. I
> can't help but think the results of the comparison could be different
> if an expert on each technique generated each result. What do others
> think? And this brings us back to the question: what is
> state-of-the-art for 3D Deconvolution? And what is a fair way to
> determine that?
>
> (My own experience is as a signal processing programmer implementing
> deconvolution for mostly high-SNR applications, and I've found that
> classical Richardson Lucy, with very careful edge handling, still
> provides "competitive" results. A good PSF is the most important
> factor, and it often must be measured carefully. Iteration
> acceleration is useful in some situations. Occasionally I have used
> total variation regularization for low SNR, but otherwise I have
> limited experience as to which regularization and denoising approaches
> are considered state-of-the-art.)
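>
> For reference, the total variation variant I mean is the
> multiplicative RL-TV update in the style of Dey et al. A rough 2D
> sketch, where the lambda value and the gradient discretization are
> assumptions:
>
>   import numpy as np
>
>   def rl_tv(image, psf, num_iter=50, lam=0.002):
>       image = np.asarray(image, dtype=float)
>       # embed the centered PSF in a full-size frame, shift to origin
>       kernel = np.zeros_like(image)
>       ky, kx = psf.shape
>       kernel[:ky, :kx] = psf
>       kernel = np.roll(kernel, (-(ky // 2), -(kx // 2)), axis=(0, 1))
>       H = np.fft.rfft2(kernel)
>       conv = lambda x, F: np.fft.irfft2(np.fft.rfft2(x) * F,
>                                         s=image.shape)
>       est = image.copy()
>       for _ in range(num_iter):
>           # TV term: divergence of the normalized gradient
>           gy, gx = np.gradient(est)
>           norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
>           div = (np.gradient(gy / norm, axis=0)
>                  + np.gradient(gx / norm, axis=1))
>           ratio = image / np.maximum(conv(est, H), 1e-12)
>           est = (est * conv(ratio, np.conj(H))
>                  / np.maximum(1.0 - lam * div, 1e-12))
>       return est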
>
> Brian
>
