CONFOCALMICROSCOPY Archives

April 1998

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Sender: Confocal Microscopy List <[log in to unmask]>
From: Ted Inoue <[log in to unmask]>
Date: Tue, 21 Apr 1998 13:05:38 -0400
I have to add my 4 cents to the discussion.

The assertion that more bits add nothing if:
a) the noise / data quality limits the signal, or
b) the eye can't even distinguish 255 levels

does not take into account the fact that software packages behave
differently depending on the bit depth of the source data. Nor does it take
into consideration some of the relevant statistics.

Let me give a gross example. Suppose you have an object with a true average
brightness of 8 arbitrary units. When you sample this at low light levels,
you might measure values of 7, 11, 9, etc. - values distributed around the
mean based on standard noise statistics.

Now take the case where you have a digitizer that can only produce the
values 0, 5, 10, and 15. When it looks at the values coming from the detector,
it might quantize them (truncating to the step below) so that:
7 becomes 5
11 becomes 10
9 becomes 5

On the other hand, if you had a digitizer with more bits, then 7 remains 7,
11 remains 11 and 9 remains 9. Your averages will be "better" in the latter
case.
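If you want to see this numerically, here is a little Python/NumPy sketch (the Gaussian noise width and the sample count are just made-up numbers) comparing the truncating 0/5/10/15 digitizer above with a finer one:

import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of an object whose true mean brightness is 8 arbitrary units
# (Gaussian noise is an assumption; a real detector is closer to Poisson)
samples = rng.normal(loc=8.0, scale=2.0, size=10_000)

# Coarse digitizer: only the levels 0, 5, 10, 15, truncating to the step below
coarse = np.clip(np.floor(samples / 5.0) * 5.0, 0.0, 15.0)

# Finer digitizer: whole units, as in the 7 / 11 / 9 example above
fine = np.clip(np.round(samples), 0.0, 255.0)

print("true mean   : 8.0")
print("fine mean   :", fine.mean())    # stays close to 8
print("coarse mean :", coarse.mean())  # biased low by roughly half a step

The fine digitizer's rounding errors largely cancel, while the coarse truncation pushes every sample downwards, so no amount of averaging recovers the true value.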

Take a second example. Suppose your image processor takes 8-bit images and
computes 8-bit results for various image processing algorithms like
sharpening and image averaging. Even if intermediate results are computed to
higher accuracy, the resulting data get scaled back to 8 bits, losing any
accuracy that was gained. On the other hand, if the image is stored in
16 bits, and the processor works with this data and stores results as 16 bits,
you have the potential for much better image results.
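As a rough illustration (again Python/NumPy, with an invented 32-frame averaging scenario), compare a running average whose intermediate result is forced back into 8 bits after every step with the same computation carried out on a x100-scaled 16-bit copy of the data; the reference is the exact floating-point average of the same frames:

import numpy as np

rng = np.random.default_rng(2)
true_image = np.full((64, 64), 100.0)

# 32 noisy 8-bit-range frames of the same scene (hypothetical numbers)
frames = [np.clip(rng.normal(true_image, 10.0), 0, 255) for _ in range(32)]

def running_average(frames, dtype):
    """Running mean whose intermediate result is stored in `dtype` after each step."""
    acc = frames[0].astype(dtype)
    for k, frame in enumerate(frames[1:], start=2):
        # new_mean = old_mean + (frame - old_mean) / k, then truncated back into dtype
        acc = (acc.astype(np.float64) + (frame - acc) / k).astype(dtype)
    return acc.astype(np.float64)

exact  = np.mean(np.stack(frames), axis=0)                   # full floating-point reference
avg_8  = running_average(frames, np.uint8)                   # 8-bit intermediate storage
avg_16 = running_average([f * 100 for f in frames], np.uint16) / 100.0  # scaled 16-bit storage

print("8-bit storage : mean |error| =", np.abs(avg_8  - exact).mean())
print("16-bit storage: mean |error| =", np.abs(avg_16 - exact).mean())

Storing the intermediate result with two extra decimal digits (the x100 scaling) keeps the per-step rounding error roughly a hundred times smaller.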

This might seem counterintuitive to the statisticians out there. However,
try it. Take a source image which is 8 bits. Multiply it by 100 and create a
16-bit image (which technically carries exactly the same information as the
source 8-bit image). Now do some computations and check out the results. The
scaled image will give significantly more accurate results than the
original, because the computations can then, in essence, use the added bits
to store the fractional part of the answer.

Here's a really simple example of this. You want to find the average of two
values, 1 and 2. On the original data, in an 8-bit integer system, you add
them to get 3, then divide by 2 to get... 1! (the fractional part is simply lost).
With the scaled version (where you multiply the source data by 100 first),
you add 100 and 200 to get 300, then divide by two to get 150, i.e. 1.5 in
the original units.
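In code (NumPy integer types standing in for the 8-bit and 16-bit image buffers), the same toy calculation looks like this:

import numpy as np

a, b = np.uint8(1), np.uint8(2)

# 8-bit buffer with integer division: the fractional part is lost
avg_8 = np.uint8((int(a) + int(b)) // 2)                      # -> 1

# Scale by 100 into a 16-bit buffer first ("fixed point" with two decimal digits)
a16, b16 = np.uint16(int(a) * 100), np.uint16(int(b) * 100)   # 100, 200
avg_16 = np.uint16((int(a16) + int(b16)) // 2)                # -> 150, i.e. 1.5 original units

print(avg_8, avg_16)   # 1 150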

I've heard arguments against this approach claiming that it's like empty
magnification, that you're not really adding accuracy. But in actual
practice, I get substantially better qualitative results by using 16-bit
images, even when the original source data are only 8 bits. This applies to
image sharpening, averaging, and subtraction: every image processing function
I've used benefits greatly from working in the 16-bit domain in this manner.

-Ted Inoue

-----Original Message-----
From: Confocal Microscopy List
[mailto:[log in to unmask]] On Behalf Of Johannes Helm
Sent: Tuesday, April 21, 1998 11:58 AM
To: [log in to unmask]
Subject: Re: Buying a new confocal -Reply


At 10:19 AM, 98-04-21 -0400, Jeff Reece wrote:
>Dear Martin:
>
>I think using the words "virtually never" to describe the occurrence of
>12-bit confocal S/N is certainly appropriate if the manufacturers refuse
>to give users this option.

Good afternoon.
I should like to comment on this issue from a technological point of view.

I have sometimes met the attitude among life scientists that "the more bits,
the better". This simple rule is, unfortunately, often wrong.
We studied this problem by computer simulation, in the context of
quantitative CSLM measurements with ratio-imaging dyes, and published it as
a "further result" in Cell Calcium 22(4):287-298, 1997. In a nutshell: as
long as the number of detected photons per pixel is less than 256, it makes
in principle no sense to have more than 8 bits (an eight-bit value, i.e. a
one-byte integer stored as an unsigned char, can be at most 2^8 - 1 = 255).
Also, the human eye is, as far as I know, at the limit of its specification
with 256 grey values (hence b/w screens usually display the grey values
0-255). If, however, the number of detected photons per pixel exceeds 255,
and if the detected pixel intensity values are converted to floats or even
doubles BEFORE any further processing of the raw-data images is done, then
even an algorithm as noise-enhancing as ratio imaging does NOT depend in any
noticeable way on whether the raw-data images were digitized into 8 bits,
12 bits, 16 bits, and so on.
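To give a rough flavour of this, here is a toy sketch in Python/NumPy written for this mail (not the simulation code from the Cell Calcium paper; the photon numbers and the two-channel scenario are invented): Poisson-distributed photon counts well above 255 are digitized into 8, 12 and 16 bits, converted to floats, and then ratioed.

import numpy as np

rng = np.random.default_rng(0)

def digitize_to_float(photons, full_scale, bits):
    """Map photon counts onto an ADC with 2**bits levels, then convert back to floats."""
    levels = 2**bits - 1
    codes = np.clip(np.round(photons / full_scale * levels), 0, levels)
    return codes / levels * full_scale

n_pixels   = 100_000
mean_ch1   = 800.0     # mean detected photons, channel 1 (invented)
mean_ch2   = 400.0     # mean detected photons, channel 2 (invented)
full_scale = 1200.0    # photon count mapped to the top ADC code (invented)

ch1 = rng.poisson(mean_ch1, n_pixels).astype(np.float64)
ch2 = rng.poisson(mean_ch2, n_pixels).astype(np.float64)

for bits in (8, 12, 16):
    ratio = digitize_to_float(ch1, full_scale, bits) / digitize_to_float(ch2, full_scale, bits)
    print(f"{bits:2d} bits: ratio mean = {ratio.mean():.4f}, sd = {ratio.std():.4f}")

With all three bit depths the spread of the ratio comes out essentially the same, because the Poisson noise of a few hundred photons is much larger than even the 8-bit quantization step.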

Best regards,

Johannes Helm
--
********************************************************
Paul Johannes Helm

Mail Address:           Institute of Basic Medical Sciences
                        Department of Anatomy
                        P.O. Box 1105 - Blindern
                        N-0317 Oslo
                        Norway

Visiting Address:       Institute of Basic Medical Sciences
                        Department of Anatomy
                        Sognsvannsveien 9 / 0245
                        N-0372 Oslo
                        Norway

Voice:                  +47 22851159
Fax:                    +47 22851278
Email:                  [log in to unmask]
WWW:                    http://www.uio.no/~jhelm

********************************************************
