CONFOCALMICROSCOPY Archives

July 2008

CONFOCALMICROSCOPY@LISTS.UMN.EDU

Sender: Confocal Microscopy List <[log in to unmask]>
Subject:
From: Bill Oliver <[log in to unmask]>
Date: Wed, 2 Jul 2008 09:50:08 -0400
Reply-To: Confocal Microscopy List <[log in to unmask]>

Search the CONFOCAL archive at
http://listserv.acsu.buffalo.edu/cgi-bin/wa?S1=confocal

On Tue, 1 Jul 2008, Larry Tague wrote:

>
> Yes, this diatribe strays somewhat from the original image manipulation
> question, but if there is no data to check or continue using, how could you
> possibly know whether an improper image analysis had been applied?  Even if
> rules for image analysis exist, there is no good way to be sure mistakes were
> not made... especially when there are questions post-publication and no raw
> data to check. Cheers!
>

The traditional way that research is validated is by reproducibility rather than by combing through raw data.  The bottom line is that if "ethics" is a problem, then there's nothing to stop someone from faking the data altogether.  Further, many mistakes are not ones that can be caught by scrutinizing the images; they lie in the physical process of doing the experiment.  In the cases of experiments gone bad that I am familiar with, the errors were not in recording the data but in the execution -- a poorly calibrated water bath, a mislabeled specimen, etc.

I know of one study, for instance, where the data were way off base because an operator simply didn't know how to operate an oscilloscope.  The only way to find the error, however, was to reproduce the results by redoing the experiment.  When you did that, it became clear that the only way to get the reported data was to set the gain incorrectly at one point in the process.  You couldn't see that by looking at the data itself -- the data were accurately reported.

So I would suggest that the best way to see if mistakes were made is the traditional way -- reproducibility.

billo
