CONFOCALMICROSCOPY Archives

April 2012

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Subject: Re: A pixel is not a little square
From: Mark Cannell <[log in to unmask]>
Reply To: Confocal Microscopy List <[log in to unmask]>
Date: Mon, 16 Apr 2012 15:29:11 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (130 lines)
*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

On 16/04/2012, at 1:36 PM, Guy Cox wrote:

> 
> Mark,
> 
>              You are continuing to confuse the samples with the representation of those samples.
> 
> Let's imagine we have a series of data points:  100  90  80  70  60  50  40


NOOOOOOOO! They are NOT data points! The camera does not report points at all! You are making the common error of assuming the camera samples with a Dirac comb. It does not.
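A minimal sketch of the distinction being drawn here (Python/NumPy, illustrative only; the continuous profile and pixel size are arbitrary assumptions, not anything from the posts): point sampling with a Dirac comb picks out values at isolated positions, whereas a sensor pixel reports the mean of the signal over x to x+dx.

```python
# Sketch: point samples (Dirac-comb view) versus the per-pixel means a
# camera actually integrates over each interval [x, x+dx).
import numpy as np

def profile(x):
    # an arbitrary continuous intensity profile, for illustration only
    return 100.0 * np.exp(-x / 3.0)

dx = 1.0                                 # assumed sensor pixel size
x_left = np.arange(0.0, 7.0, dx)         # left edge of each pixel

# Dirac-comb view: the value *at* one point per pixel
point_samples = profile(x_left)

# Camera view: the mean of the profile over each pixel, approximated by
# averaging a fine sub-grid within [x, x+dx)
sub = np.linspace(0.0, dx, 101)[:-1]
pixel_means = np.array([profile(x + sub).mean() for x in x_left])

print(np.round(point_samples, 1))        # values at isolated points
print(np.round(pixel_means, 1))          # what the sensor reports
```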


> 
> We are mapping these on an image where each is separated by a defined distance.  So we need to fill in this distance.
> 
> You are saying that the 'correct' representation is:  
> 100  100  100  100  100  90  90  90  90  90  80  80  80  80  80  70  70  70  70  70  60  60  60  60  60  50  50  50  50  50  40

Yes, that is what it reported: the average from x to x+dx is a constant.

'Nuf said.

Cheers


Rest deleted for tedium
> 
> I am saying that this is a wildly implausible and totally unjustified interpretation, and the best representation we can derive from the data is:
> 100  98   96   94   92   90   88   86   84   82   80   78   76   74   72   70   68   66   64   62   60   58   56   54   52   50   48   46   44   42   40
> 
> EITHER way we are interpolating the sampled data - we have no option - so let's just get over this.   Your proposed representation includes detail that we could not possibly detect, mine does not.   Remember, these are SAMPLES.  Neither representation changes our recorded data.  End of story, IMHO.
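Both representations under discussion are easy to reproduce from the seven samples above; a small illustrative sketch (Python/NumPy, not from either post), assuming each recorded sample is spread over five display pixels as in Guy's lists:

```python
# Sketch: the two mappings being debated - hold each recorded value
# constant over its pixel ("little boxes") versus interpolate between
# the recorded samples.
import numpy as np

samples = np.array([100, 90, 80, 70, 60, 50, 40], dtype=float)

# Box / nearest-neighbour mapping: each sample held constant over its
# pixel (the last sample contributes only its single starting value so
# both lists span the same range)
boxes = np.append(np.repeat(samples[:-1], 5), samples[-1])

# Linear interpolation onto a 5x finer grid: ramps between the samples
fine_x = np.linspace(0, len(samples) - 1, 5 * (len(samples) - 1) + 1)
interp = np.interp(fine_x, np.arange(len(samples)), samples)

print(boxes.astype(int))                 # 100 100 ... 50 50 40
print(np.round(interp).astype(int))      # 100 98 96 ... 42 40
```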
> 
> How did we get into this mess?  Why does everyone then 'do it wrong'?  Well, actually, everyone doesn't.  Scanning probe microscopes always remap - because by the time they appeared the computing power to do it was available.  When confocal microscopes first became widely available, in 1987, the data they produced completely overwhelmed available computing power (believe me, I was there and writing software).   So we got used to the 'quick and dirty' approach.  Consumer digital cameras do a sort of remap because the Bayer mosaic requires it, but modern sensors so far exceed the resolution of the camera optics that we never get to see any spurious frequencies anyway.  Computer games consoles always remap.  So do X-ray and EM tomography systems.  
> 
> 						Guy (arrogant bastard)
> 
> 
> 
> -----Original Message-----
> From: Confocal Microscopy List [mailto:[log in to unmask]] On Behalf Of Mark Cannell
> Sent: Monday, 16 April 2012 7:37 PM
> To: [log in to unmask]
> Subject: Re: A pixel is not a little square
> 
> *****
> To join, leave or search the confocal microscopy listserv, go to:
> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
> *****
> 
> Sorry Guy, I still think you don't see the point I'm trying to make. The camera actually says "The mean signal from x to x+dx is ..." (where dx is the sensor pixel size). It does NOT say the signal at x is 'K', and that is where I think the confusion lies. The camera output is a 2D 'histogram', and showing little boxes with the same intensity is (I say again) a perfectly accurate representation of the data (i.e. F(x) = K for x -> x + dx). With respect, it is not, as you say, inaccurate - even if it is unaesthetic. If you fit a sinusoid you have just carried out a fitting exercise... That is not a "more accurate" presentation of the data, despite what your Smith says (even if it may be a more accurate representation of the object which has been discretized). One should not lose sight of the fact that you have made some (possibly large) assumptions in the fitting process.
> 
> Put mathematically, if you smooth out the displayed pixel edges you imply a sampling frequency higher than the one actually used (note how you are putting new, unrecorded samples between the recorded data values - which is what drawing a line between points actually does): you are adding information that was NOT present in the RAW data. It may be that your additional information is correct and adds value (e.g. "the band limit of the microscope is..."), but one should not lose sight of the distinction between data/information added by the experimenter (which may or may not be wrong) and that reported by the instrument (the closest to the truth the experimenter can get).
> 
> At the risk of boring some readers on this list, let me emphasize my point: The camera actually says "The mean signal from x to x+dx is ..." (where dx is the sensor pixel size). It does NOT say the signal at x is 'K'. This can be portrayed as a square with constant color, and I can think of no truer portrayal of the measured data.  Hopefully dx is less than the resolution of the viewer at the final display size, but if it is not, then the only choice (IMHO) is between aesthetics (or some other goal) and truthfully displaying the recorded data - there is no middle ground.
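In practice this is exactly the choice exposed by an image viewer's interpolation setting; a hedged sketch, assuming matplotlib purely as an example viewer and a made-up 16x16 image (neither is from the posts):

```python
# Sketch: the same recorded pixel values shown both ways - constant-value
# "little squares" versus a smooth interpolated remap.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.poisson(50, size=(16, 16)).astype(float)   # fake small image

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
ax0.imshow(data, cmap='gray', interpolation='nearest')  # constant-value pixels
ax0.set_title("nearest (little squares)")
ax1.imshow(data, cmap='gray', interpolation='sinc')     # interpolated remap
ax1.set_title("sinc interpolation")
plt.show()
```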
> 
> Cheers Mark
> 
> PS My CD player can't output square waves because the detector etc. has a rather finite bandwidth... Even if it could, my ears are too many dB down at 44 kHz to sample it correctly and hear the artifacts introduced by digital sampling ... :-)
> 
> On 16/04/2012, at 9:18 AM, Guy Cox wrote:
> 
>> 
>> " There are _no_  'higher harmonics' present in the data, only in ones 'artistic' interpretation for display purposes."  That is exactly what I said!
>> 
>> I also never said that the data is a continuous function; I said it is a series of discrete samples of a continuous function.  So when you choose to display it you have to do something.  Drawing little boxes is NOT 'doing nothing', and neither is it 'displaying the raw data'.  On the contrary - it is corrupting the data with frequencies which shouldn't be there AND confusing the human eye (for which, presumably, we are doing the drawing).  The raw numbers are useful - indeed essential - for the computer, but they fundamentally cannot just 'be displayed' to the human eye as an image.  Our sampling rationale is based on sine-wave frequencies and therefore, as Alvy Ray Smith said, sinusoidal mapping is the truest (not just the most aesthetic, though it is that too) way of displaying the data.  It doesn't add any spurious higher harmonics; it presents the data as accurately as our sampling permits.  Drawing little boxes may be easier, but it is just as much a mapping of the measured samples to a displayed image - the difference is that this method is both inaccurate and un-aesthetic.
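Guy's claim about spurious frequencies can be checked numerically. A small sketch (NumPy and SciPy assumed, arbitrary random samples, not anyone's actual code): upsample the same recorded samples by sample-and-hold ("little boxes") and by band-limited Fourier resampling, then measure how much spectral energy lies above the original Nyquist frequency.

```python
# Sketch: spectral content above the original Nyquist frequency for the
# two display mappings of the same recorded samples.
import numpy as np
from scipy.signal import resample        # Fourier (band-limited) resampling

rng = np.random.default_rng(1)
samples = rng.normal(size=64)            # arbitrary recorded samples
up = 8                                   # display upsampling factor

boxes = np.repeat(samples, up)                     # sample-and-hold
smooth = resample(samples, up * len(samples))      # band-limited remap

def energy_above_original_nyquist(x):
    spec = np.abs(np.fft.rfft(x)) ** 2
    cut = len(samples) // 2              # bin sitting at the original Nyquist
    return spec[cut + 1:].sum() / spec.sum()

print(energy_above_original_nyquist(boxes))   # substantial: the square edges
print(energy_above_original_nyquist(smooth))  # ~0: nothing above the band limit
```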
>> 
>> If your CD player spat out square waves to the speakers, you'd take it back to the shop pretty promptly!
>> 
>>                                                                                                                                          Guy
>> 
>> -----Original Message-----
>> From: Confocal Microscopy List [mailto:[log in to unmask]] On Behalf Of Mark Cannell
>> Sent: Monday, 16 April 2012 5:27 PM
>> To: [log in to unmask]
>> Subject: Re: A pixel is not a little square
>> 
>> 
>> I think I see the problem: the spurious frequencies arise from your thinking the _data_ is a continuous function and treating it as such (by "drawing a line ..."), but it is not; it is discrete and can be faithfully represented by a _discrete_ Fourier transform (which folds at Fs/2). The highest frequency in the DFT is Fs, but we know we shouldn't look at that, right?  There are _no_ 'higher harmonics' present in the data, only in one's 'artistic' interpretation for display purposes.
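A one-line check of the folding point mentioned here (NumPy assumed, arbitrary sampling rate and sample count): the discrete Fourier transform of N recorded samples only contains frequencies up to Fs/2; anything seen above that comes from how the data are redrawn, not from the data.

```python
# Sketch: the frequencies actually present in the DFT of N real samples.
import numpy as np

Fs = 1.0                           # sampling frequency, arbitrary units
N = 32                             # number of recorded samples
freqs = np.fft.rfftfreq(N, d=1.0 / Fs)
print(freqs.max())                 # 0.5 == Fs/2, the folding (Nyquist) frequency
```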
>> 
>> If it looks jagged, that is because sampled data really is!  The problem arises because you do not know how to fill in the space between data samples.  You can interpolate (or not). If you interpolate you are making a statement about the model underlying the data and have just carried out a fitting exercise. Fitting is NOT raw data presentation. If you just plot the data values you make no assumption about what should join them; no model has been fit to the data. Every scientist should know the difference between a histogram and a continuous distribution and not be fooled by the vertical lines at the histogram boundaries (which is what you show in a pixel image).
>> 
>> The choice is yours: in one case you faithfully show unadulterated sampled data (the histogram looks less 'pretty' than a curve), or you fit a model and interpolate. The trouble with the latter is that the model is probably wrong and you hide the defects in the data (e.g. camera pixel size) from the keen-eyed reviewer... Of course, if the data points are really close together, the myopic reviewer can't see defects in your data :-) !  By Guy's reasoning, it would be impossible to represent any digitally sampled data, because you are always pixelating a continuous function (all pictures get made up of little squares - the printer dumps blobs of ink, etc.). So, where does the pixelation become acceptable? This is now aesthetic and has nothing to do with science or mathematics (those with perfect vision will always see discretization 'artifacts' more easily).
>> 
>> Cheers Mark
>> 
>> On 16/04/2012, at 3:31 AM, Guy Cox wrote:
>> 
>>> 
>>> OK, having slept on it, I now feel that just maybe I can explain what this is all about.  If only the list would let us include pictures it would be much easier!
>>> 
>>> Let's assume we have a digital image, from any source, consisting of pixels with a spacing s.  The smallest spacing we can resolve in this image is 2s, and this will correspond, in frequency space, to a frequency f.  f represents the bandpass limit of this system: no higher frequencies can be passed.  Now imagine we have a row of pixels containing the following values:
>>> 
>>> 255  0  255  0  255  0  255  0  255
>>> 
>>> If we represent these pixels by little squares, we'll have something like a chessboard.  Taking a line along this chessboard will give us a square wave.  Now this square wave cannot be represented within the bandpass limit of the system, defined by the frequency f.  To represent a square wave we need an infinite series of sine waves f + 3f + 5f + 7f ...  To get even a crude approximation to a square wave we need f + 3f - that is a frequency three times higher than the image can contain.
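A short sketch of that series (Python/NumPy, illustrative only; the 4/(pi*n) amplitudes are the standard coefficients for a unit square wave and are an assumption, not something stated in the post): the approximation error shrinks only as more odd harmonics are added, and truncating at f + 3f leaves a visibly crude square wave.

```python
# Sketch: partial Fourier sums of a square wave using only odd harmonics
# f, 3f, 5f, ... with 4/(pi*n) amplitudes.
import numpy as np

f = 1.0                                   # fundamental, arbitrary units
t = np.linspace(0.0, 2.0, 1000)
square = np.sign(np.sin(2 * np.pi * f * t))

def partial_sum(n_terms):
    s = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1                     # odd harmonics only
        s += (4 / np.pi) * np.sin(2 * np.pi * n * f * t) / n
    return s

for n_terms in (1, 2, 10):                # f alone, f + 3f, and many terms
    err = np.mean(np.abs(partial_sum(n_terms) - square))
    print(n_terms, round(err, 3))         # mean error shrinks as terms are added
```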
>>> 
>>> In other words, we've introduced a whole series of spurious frequencies into our image that not only were not there to start with, they could not possibly have been there.   Does this matter?  After all, we know they can't be real.  It does matter, because we are talking about a visual representation of our data - that's why we drew the little boxes in the first place.  Our eyes are very sensitive to edges* and the edges will take over if we let these frequencies come within the bandwidth of our eyes.   We will find it very hard to actually see the finest detail in our picture (defined by 2s, remember) because if we enlarge it enough to see this easily we'll also get the edges created by these spurious frequencies.  In everyday terms, the pixellation takes over from the picture.  
>>> 
>>> Note that in all this discussion I have  not mentioned microscopes, cameras or anything - we are just talking about a digital image from any source.  It applies to confocal, widefield, and electron microscopes, telescopes, X-ray images and your holiday snaps.  Coming back to the microscopic world, if we oversample to the point where r, our minimum resolved distance, is substantially greater than 2s, we may not need to enlarge to the point where we see the spurious frequencies.  This is probably why some contributors to this discussion have advocated considerable levels of oversampling (though they probably didn't realise this, they just knew they got good pictures that way).  But oversampling in fluorescence can be very hard on our specimens.
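Some worked numbers for the r versus 2s relation discussed above (the Rayleigh estimate r = 0.61 * lambda / NA and the particular wavelength and NA are assumed, standard figures, not taken from the post):

```python
# Sketch: pixel spacing at the Nyquist limit (r = 2s) versus an
# oversampled spacing where r is substantially greater than 2s.
wavelength_nm = 500.0
NA = 1.4
r = 0.61 * wavelength_nm / NA        # minimum resolved distance, ~218 nm

s_nyquist = r / 2.0                  # spacing with r = 2s, ~109 nm
s_oversampled = r / 6.0              # 3x oversampling: r = 6s, ~36 nm

print(round(r, 1), round(s_nyquist, 1), round(s_oversampled, 1))
```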
>>> 
>>> "But I'm using a CCD detector so my image is made up of little squares".  Yes, you can produce a 'coloured in' picture of your detector that way.  I'm assuming the image is actually what you want to see, though, not the detector.
>>> 
>>> *Amusingly, the human eye does the same thing to emphasize edges as computer image processing does - it makes the dark side of the edge darker than it is and the light side lighter.
>>> 
>>>                                                                                                                                 Guy
>>> 
>>> PS.  This has doubtless confirmed my reputation among some people as an arrogant bastard.  They are probably right, but at least I'm an arrogant bastard who tries to help.  It's taken me two hours to write this.
