http://imagej.273.s1.nabble.com/Re-question-regarding-Colocalization-Threshold-in-Fiji-Coloc2-tp3683647p3683654.html
> Hi Dan, Judith.
> I've just checked the two plugins that you were discussing. I used replicates of Manders' spot patterns that make these checks easy.
> I don't think that the difference seen is connected to thresholding.
> It looks to me as if "Colocalization threshold" calculates the variants of M1 and M2 described by Costes (2004) faithfully. JaCOP does not (although the thresholds seem reasonable).
Hmmmmm.... We will need to check the results given by Coloc2 (or whatever we end up calling it - any ideas?)
against both JACoP and Colocalization Threshold...
I hope we can make sure that the Coloc2 implementations of the thresholded Manders coefficients M1 and M2 are really correct...
We could sure use those Manders spot patterns as part of our unit testing.
I can write unit tests that use these images and check that the results are what we expect them to be,
so that when we change things in the architecture and algorithms, we catch any bugs right away... rather than months or years down the line.
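To give an idea of what I mean, here is a minimal sketch of such a test (my own throwaway illustration, NOT the actual Coloc2 test code: the m1() helper is a quick implementation of the classic, unthresholded Manders M1, and the tiny hard-coded arrays just stand in for the published spot patterns):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A self-contained sketch of the kind of regression test I mean, using
// tiny hard-coded "images" instead of the real spot patterns.
public class MandersRegressionTest {

    /** Classic M1: sum of red intensity where green > 0, over total red intensity. */
    static double m1(double[] red, double[] green) {
        double coloc = 0, total = 0;
        for (int i = 0; i < red.length; i++) {
            total += red[i];
            if (green[i] > 0) coloc += red[i];
        }
        return coloc / total;
    }

    @Test
    public void mandersCoefficientsMatchHandCalculatedValues() {
        double[] red   = { 10, 20,  0, 30 };
        double[] green = {  5,  0,  7,  8 };

        // Worked out by hand: M1 = (10 + 0 + 30) / 60, M2 = (5 + 0 + 8) / 20.
        assertEquals(40.0 / 60.0, m1(red, green), 1e-9);
        // M2 is just M1 with the channels swapped.
        assertEquals(13.0 / 20.0, m1(green, red), 1e-9);
    }
}

The real tests would of course load the spot pattern images themselves and compare the plugin's output against the values published for them.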
Since you published those patterns recently, can the community reuse them for this testing job?
Can we put them on the Fiji Wiki, for example? We might need to ask the publishers if that is OK... they are not evil, so I expect an OK.
I doubt Erik Manders still has the test pattern images from all those years ago... but he might prove me wrong!
One thing we have noticed is that for rather simple test images there can be numerical problems.
e.g. if the means of the two images are exactly the same, the "fast" Pearson's method in the Colocalization Threshold plugin fails,
but the "classic" implementation of the maths (a direct translation of the defining equation for Pearson's r) still works.
This is one reason why scientists need to be able to see not only the name of the algorithm and the equations in the docs describing it,
but ALSO exactly how the algorithm is implemented in the software. Otherwise there is no way to know why such numerical problems crop up,
and no easy way to work around them or fix them oneself. (Yes, I am banging on about open software development again...)
Colocalization Threshold and JACoP are both open source, so if there are any inconsistencies we can see WHY they arise.
For closed-source commercial software like Imaris, Volocity, Huygens, Colocalizer etc.
there is currently NO chance of looking at the source code... this makes it harder to do science, I think.
The algorithmic implementations in commercial software COULD be open sourced though...
especially when the algorithm is not unique (e.g. Pearson's r is very, very standard),
so folks could see what is going on. But this seldom happens... even in things like MATLAB
you can't really tell what is actually happening to your numbers, and why odd things sometimes happen.
A black box is a black box. Why should you trust the results?
> I hope this helps.
> Regards,
> Andrew
>
>
>
>
> Andrew Barlow PhD | Applications Specialist
> PerkinElmer | For the Better
>
> [hidden email]
> Phone: + 44 2476 692229| Fax: +44 2476 690091 | Mobile: +44 7799 795 999
> Millburn Hill Road, Coventry, CV4 7HS, UK
> www.perkinelmer.com
>
>
>> -----Original Message-----
>> From: Daniel James White [mailto:[hidden email]]
>> Sent: 17 February 2011 12:55
>> To: Judith Lacoste, Ph.D.
>> Cc: Image Processing Facility; LMF; fiji-users; fiji-devel; ImageJ
>> Interest Group; Barlow, Andrew; Jeremy Sanderson; Fabrice Cordelières;
>> Tom Kazimiers; Sylvain Costes; [hidden email]
>> Subject: Re: question regarding Colocalization Threshold in Fiji -
>> Coloc2
>>
>> Dear Judith,
>>
>> On Feb 16, 2011, at 11:27 PM, Judith Lacoste, Ph.D. wrote:
>>
>>> Hi,
>>>
>>> First, all my congratulations on Fiji, I enjoy it a lot and
>>> recommend it all the time.
>>
>> We are very happy to hear that.
>> The Fiji project is very much about people like you - people that use
>> it, and spread the word to others.
>> We all really appreciate the active members of the Fiji community.
>>
>>> I have been comparing the Fiji Colocalization Threshold and the JACoP
>>> plugin (v.2) and I am a bit puzzled now.
>>
>> This is not surprising, sadly.
>>
>>> For the same images, the automatic threshold results are fairly
>>> similar.
>>
>> Indeed the methods used are basically the same, but the implementations
>> do differ slightly, leading to slightly different thresholds in some
>> cases.
>> This is a problem, as it casts doubt on a method that should not really
>> be in doubt, since it is pretty robust if you feed it sensible
>> input data.
>>
>> I strongly suggest you read:
>>
>> http://pacific.mpi-cbg.de/wiki/index.php/Colocalization_Analysis
>> so you can understand the pitfalls and make sure you are doing the
>> "right thing"
>>
>>> However, the tM1 and tM2 values are 10-fold lower than the ones
>>> calculated with the JACoP plugin.
>>
>> That could be because the thresholds are very slightly different.
>> Depending on the kind of data that the images contain - how strong or
>> weak any correlation is, whether there is an uncorrected camera or PMT
>> offset, and how much noise there is - a small difference in thresholds
>> can easily make a very large difference in the values of tM1 and tM2.
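>>
>> Just to make that concrete, the usual Costes-style definitions (as I
>> understand them - implementations differ in exactly which thresholds
>> they apply, which is part of the problem) are:
>>
>>   tM1 = \sum_i R_i [G_i > T_G] / \sum_i R_i
>>   tM2 = \sum_i G_i [R_i > T_R] / \sum_i G_i
>>
>> where [.] is 1 when the condition holds and 0 otherwise, and T_R, T_G
>> are the auto-calculated thresholds. Because each pixel's whole intensity
>> is either counted or not, a small shift in T_R or T_G flips the
>> contribution of every pixel sitting near that threshold, which on noisy
>> or offset data can move tM1 and tM2 a long way.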
>>
>>> I did see the thread in the listserv last June about a bug to be
>>> corrected and I guess it's been done since then. Any clues? Many
>>> thanks for your help!
>>
>> We corrected a bug in the Colocalization Threshold plugin,
>> pointed out by Andrew Barlow from PerkinElmer (Volocity),
>> where the value of Pearson's r above the thresholds was
>> calculated incorrectly.
>>
>> There has been much discussion about whether Pearson's r above the
>> auto-calculated "Costes" thresholds is actually a sensible
>> and interpretable statistic, since Pearson's r is used to calculate
>> where the thresholds should be...
>>
>> see
>> Microscopy and Microanalysis (2010), 16: 710-724
>> Colocalization Analysis in Fluorescence Micrographs: Verification of a
>> More Accurate Calculation of Pearson's Correlation Coefficient
>>
>> This paper suggests a slightly different approach to get around this
>> problem and improve the "meaningfulness" of Pearson's r: it does not use
>> Pearson's r to calculate the thresholds in the first place...
>> but then I still wonder exactly how one is supposed to set the
>> thresholds in a robust, reproducible, objective manner.
>>
>> From that paper, here is a quote from the discussion:
>> "Our method of calculating thresholded PCC is not the same as the PCC
>> of pixels above the thresholds set by applications that implement the
>> approach of Costes et al. (2004) such as JACoP. Software implementing
>> the Costes' method often display Pearson's correlation coefficient for
>> two subsets of pixels, those with intensities above the thresholds and
>> those with intensities beneath (this differs from the original
>> implementation of Costes et al. in which only PCC for the subset of
>> pixels fainter than either threshold was calculated). This is done to
>> reassure users that the thresholds have been set in positions that
>> separate positively correlated intensities from uncorrelated
>> intensities, and to set thresholds that give objective measurements of
>> Costes' variants of M1 and M2. Given that PCC is used to determine
>> these thresholds, the same thresholds cannot then be used to determine
>> thresholded PCC objectively. Our approach requires thresholds that
>> separate signal from background to be set, and then thresholded PCC,
>> Mx, and My are generated using all pixels above both thresholds. While
>> the values calculated for pixels above the automatically set thresholds
>> are the same when calculated by JACoP and Volocity 5.3, the method of
>> setting the threshold for the calculation of thresholded PCC must
>> differ from the approach of Costes et al. in order to make an objective
>> measurement of thresholded PCC. Additionally, the algorithm of Costes
>> et al. (2004) is designed to set thresholds that separate positively
>> correlated pixels from uncorrelated or negatively correlated pixels.
>> Thresholded PCC, which requires thresholds that separate signal from
>> background, is effective at describing uncorrelated and negatively
>> correlated datasets."
>>
>> It seems there is still much theoretical work to be done on this
>> problem in order to find more robust solutions.
>>
>> Further, the differences between the JACoP and Colocalization
>> Threshold plugins' implementations of Costes' method for
>> autothresholding generate the confusion and uncertainty that you are
>> currently experiencing. Welcome to my world.
>>
>> In order to take the next step forward in dealing with these problems,
>> we have made a new colocalization plugin, using new technology for
>> storing and processing image data (imglib),
>> and a modular design with test-driven development (an industry
>> best-practice approach to software development).
>>
>> We took the different algorithms and made them self-sufficient modules
>> of code that can be reused and work on any bit depth of image data.
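>>
>> Roughly, the shape of it is something like this (a hypothetical sketch
>> of the idea only - these are not the actual Coloc2 interfaces or class
>> names):
>>
>> import java.util.Collections;
>> import java.util.Map;
>>
>> // Each algorithm is a self-contained unit that can be run and
>> // unit-tested on its own, independent of the GUI and of how the
>> // pixels were loaded.
>> interface ColocAlgorithm {
>>     /** Two channels of equal length, already converted to double,
>>         so the same module works for 8-, 12- or 16-bit data. */
>>     Map<String, Double> run(double[] ch1, double[] ch2);
>> }
>>
>> class PearsonsModule implements ColocAlgorithm {
>>     public Map<String, Double> run(double[] ch1, double[] ch2) {
>>         double r = 0.0; // ... classic Pearson's r calculation here ...
>>         return Collections.singletonMap("Pearson's r", r);
>>     }
>> }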
>> We built tests to make sure that the algorithms do the right thing when
>> we make changes to the code.
>> We made a prototype graphical user interface for driving the different
>> algorithms.
>> We started to design a standardized output style for coloc analysis,
>> with a standardized way of selecting which numbers to show and which
>> scatterplots and images to show,
>> and putting it all in a PDF file output.
>> This way everyone doing the same analysis gets easily comparable
>> results that are already formatted and ready to go into the
>> supplemental info of a paper.
>>
>> All this is to standardize the implementation of the maths of the
>> algorithms, so everyone gets the same answer,
>> and to remove the chaos of different folks not understanding what each
>> other means by colocalization,
>> and arguing about which numbers and pictures to show and how.
>>
>> This prototype is working and usable and hopefully industrial strength
>> (as opposed to little more than a hacked and buggy script),
>> and we have called it Coloc2... until we find a better name.
>>
>> It does:
>> Pearson's r
>> Costes autothreshold
>> Costes significance test (block randomisation, not just white noise,
>> using Pearson's r - see the sketch after this list)
>> Manders coefficients tM1 and tM2
>> Li scatterplots and single global value
>> 2D histogram / scatterplot / cytofluorogram, plus regression line
>> Regions of interest and masks (3D)
>> Checking images for suitability for analysis:
>> detecting problems that cause numerical trouble for the algorithms,
>> detecting a regression line intercept that is not close to zero,
>> and more.
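>>
>> For the significance test, the rough idea is the one below (my own
>> 1D illustration of block randomisation, not the Coloc2 code; the real
>> test scrambles 2D/3D blocks roughly the size of the point spread
>> function):
>>
>> import java.util.ArrayList;
>> import java.util.Collections;
>> import java.util.List;
>>
>> class CostesSignificanceSketch {
>>
>>     static double pearson(double[] x, double[] y) {
>>         double mx = 0, my = 0;
>>         for (int i = 0; i < x.length; i++) { mx += x[i]; my += y[i]; }
>>         mx /= x.length; my /= y.length;
>>         double sxy = 0, sxx = 0, syy = 0;
>>         for (int i = 0; i < x.length; i++) {
>>             sxy += (x[i] - mx) * (y[i] - my);
>>             sxx += (x[i] - mx) * (x[i] - mx);
>>             syy += (y[i] - my) * (y[i] - my);
>>         }
>>         return sxy / Math.sqrt(sxx * syy);
>>     }
>>
>>     /** Fraction of block-shuffled rounds whose r reaches the observed r;
>>         a small value means the observed correlation is unlikely to be
>>         chance. */
>>     static double pValue(double[] ch1, double[] ch2, int blockSize, int rounds) {
>>         double observed = pearson(ch1, ch2);
>>         // Cut channel 2 into blocks.
>>         List<double[]> blocks = new ArrayList<double[]>();
>>         for (int start = 0; start < ch2.length; start += blockSize) {
>>             int end = Math.min(start + blockSize, ch2.length);
>>             double[] b = new double[end - start];
>>             System.arraycopy(ch2, start, b, 0, b.length);
>>             blocks.add(b);
>>         }
>>         int atLeastAsHigh = 0;
>>         for (int round = 0; round < rounds; round++) {
>>             Collections.shuffle(blocks);  // scramble the block order
>>             double[] shuffled = new double[ch2.length];
>>             int pos = 0;
>>             for (double[] b : blocks) {
>>                 System.arraycopy(b, 0, shuffled, pos, b.length);
>>                 pos += b.length;
>>             }
>>             if (pearson(ch1, shuffled) >= observed) atLeastAsHigh++;
>>         }
>>         return (double) atLeastAsHigh / rounds;
>>     }
>> }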
>>
>> It will be published at some point, but first we will ask the community
>> for help in refining the graphical user interface and PDF output style
>> and also which algorithms and methods to use and exactly how.
>> The Coloc2 code should be easy to add new things/modules to without
>> breaking other bits of it (modular design, test-driven development).
>>
>> Soon it will be in the Fiji updater. I will announce it, and ask for
>> feedback from general users
>> and from the authors of existing coloc algorithms and plugins.
>>
>> You can see design ideas and discussion here:
>>
>> https://docs.google.com/View?id=df66rgc7_2dtqkv3dx
>> If you want write access to that doc, just let me know your Google
>> account ID.
>> This is the place where we can argue and discuss about the right way to
>> move forward.
>>
>> The aim is to standardize and make more robust the whole game of
>> colocalization analysis,
>> so people can speak the same language about it,
>> and get comparable results.
>> Then they can focus on the biology, instead of arguing about the
>> details of the plugins and different methods etc.
>>
>> Hope that stimulates some discussion....
>>
>> happy coloc analysis!
>>
>> Dan
>>
>>
>>
>>>
>>> Best regards,
>>>
>>> Judith
>>>
>>>
>>>
>>>
>>> --
>>> Judith Lacoste, Ph.D.
>>>
>>> [hidden email]
>>> Cell.: 514.916.4674
>>>
>>> MIA CELLAVIE Inc.
>>> PO Box 192, STN Anjou
>>> Montréal QC H1K 4G6
>>> Tel.: 514.352.8547
>>> Fax: 514.352.5154
>>>
>>>
>>> http://miacellavie.com
>>
>> Dr. Daniel James White BSc. (Hons.) PhD
>> Senior Microscopist / Image Visualisation, Processing and Analysis
>> Light Microscopy and Image Processing Facilities
>> Max Planck Institute of Molecular Cell Biology and Genetics
>> Pfotenhauerstrasse 108
>> 01307 DRESDEN
>> Germany
>>
>> +49 (0)15114966933 (German Mobile)
>> +49 (0)351 210 2627 (Work phone at MPI-CBG)
>> +49 (0)351 210 1078 (Fax MPI-CBG LMF)
>>
>>
>> http://www.bioimagexd.net   BioImageXD
>> http://pacific.mpi-cbg.de   Fiji - is just ImageJ (Batteries Included)
>> http://www.chalkie.org.uk   Dan's Homepages
>> https://ifn.mpi-cbg.de      Dresden Imaging Facility Network
>> dan (at) chalkie.org.uk
>> ( white (at) mpi-cbg.de )
>>
>
Dr. Daniel James White BSc. (Hons.) PhD