Recovering the (static) background of a series of timelapse images


Recovering the (static) background of a series of timelapse images

TheoRK
Hi ImageJ List,

I am kind of desperate: I do not even know the right words to search for, and ImageJ does not seem to have this function by default.

I want to extract a background image from a series of time-lapse images, in order to subtract the quite uneven but non-moving background. I tried Subtract Background (the rolling-ball one), and I tried the ROI background subtractor. Both get rid of some of the noise, but not of the things that were somewhere out of focus, are not moving, and are messing up the picture. The microscope has a shading correction (and I know ImageJ can do that too), but I first have to extract an image that contains only the non-moving parts...

I can't be the first person to try this?

Re: Recovering the (static) background of a series of timelapse images

Gabriel Landini
On Thursday 14 Mar 2013 17:39:55 TheoRK wrote:

Depending on the image sequence there might be a way of doing this, but unless you show what it looks like, it is not possible to say.
For example, if you had a sequence of ants walking on a white surface and enough frames, a max or median projection might provide a way of obtaining the background without the ants.
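Outside ImageJ, the same idea can be sketched in a few lines of Python/NumPy; the synthetic `stack` below is only a stand-in for real time-lapse frames:

```python
import numpy as np

# Synthetic stand-in for a time-lapse stack: 50 frames of 64x64 pixels
# with an uneven but static background plus one bright moving object.
rng = np.random.default_rng(0)
background = rng.uniform(50.0, 100.0, size=(64, 64))
stack = np.broadcast_to(background, (50, 64, 64)).copy()
for t in range(50):
    stack[t, t % 64, :] += 80.0  # the "ant": a different row in each frame

# Median projection along the time axis: a pixel covered by a moving
# object in only a minority of frames keeps its background value.
estimated_bg = np.median(stack, axis=0)

# Subtracting the estimate leaves (mostly) just the moving objects.
moving_only = stack - estimated_bg
```

In ImageJ terms this corresponds to Image > Stacks > Z Project... with "Median", followed by Image Calculator "Subtract" (with a 32-bit result) against each frame.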
Cheers
Gabriel

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

Re: Recovering the (static) background of a series of timelapse images

Joel Sheffield
If the background is constant, why not try what Gabriel suggests: see if you can compute an average (mean) projection of your stack. Then take the resulting single image, invert the contrast, and add it to each of the images in your stack.
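For 8-bit images, this recipe (average projection, invert, add) amounts to subtracting the mean background, up to a constant offset of 255. A minimal NumPy sketch on synthetic data (the array names are placeholders):

```python
import numpy as np

# Synthetic 8-bit-range stack: 40 frames over a static uneven background.
rng = np.random.default_rng(1)
static_bg = rng.uniform(20.0, 60.0, size=(32, 32))
stack = static_bg + rng.uniform(0.0, 10.0, size=(40, 32, 32))

# Average (mean) projection along the time axis.
mean_bg = stack.mean(axis=0)

# Invert the contrast of the projection, then add it to each frame;
# algebraically this is stack - mean_bg + 255, clipped to 8-bit range.
inverted = 255.0 - mean_bg
flattened = np.clip(stack + inverted, 0.0, 255.0)
```

Note that the mean is biased wherever moving objects are frequent, so for sparse movers a median projection is usually the safer estimator.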

Joel


--


Joel B. Sheffield, Ph.D
Department of Biology
Temple University
Philadelphia, PA 19122
Voice: 215 204 8839
e-mail: [hidden email]
URL:  http://astro.temple.edu/~jbs


Re: Recovering the (static) background of a series of timelapse images

fabrice senger-2
Hum!
When doing ICS (image correlation spectroscopy) there are tools to remove the immobile fraction in your acquisitions; this might be what you're looking for?

Fabrice


Re: Recovering the (static) background of a series of timelapse images

perrine
In reply to this post by TheoRK
Hi TheoRK,
you may want to try the Hullkground plugin developed by Anatole Chessel at Inria/Institut Curie.
It separates the background from the mobile part in a 2D time-lapse movie, also taking any bleaching over time into account, and returns both the mobile and the immobile parts.
http://serpico.rennes.inria.fr/doku.php?id=software:hullkground:hullkground

You can download the ImageJ/Fiji binaries from here:
http://serpico.rennes.inria.fr/serpico_software/registration.php?filename=hullkground-0.21-imagej.tar.gz
(you will have to untar the archive and look for the two .class files in the release directory; not very convenient, I agree, but it is a very powerful tool).
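If you cannot use the plugin, a crude way to handle the bleaching part before any projection is to normalize each frame to a common mean intensity. This is not the Hullkground algorithm, just a simple stand-in sketch on synthetic data:

```python
import numpy as np

# Synthetic stack whose overall intensity decays over time (bleaching).
rng = np.random.default_rng(2)
background = rng.uniform(50.0, 100.0, size=(32, 32))
decay = np.exp(-0.05 * np.arange(30))       # per-frame bleaching factor
stack = decay[:, None, None] * background

# Rescale every frame to the mean intensity of the first frame,
# which removes the global bleaching trend before projecting.
frame_means = stack.mean(axis=(1, 2))
normalized = stack * (frame_means[0] / frame_means)[:, None, None]

# A median projection of the normalized stack then estimates the background.
estimated_bg = np.median(normalized, axis=0)
```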

All the best,
Perrine


Re: Recovering the (static) background of a series of timelapse images

dscho
Hi Perrine,

On Sat, 16 Mar 2013, perrine wrote:

> You can download the imageJ/Fiji binaries  from here:
> http://serpico.rennes.inria.fr/serpico_software/registration.php?filename=hullkground-0.21-imagej.tar.gz
> (you will have to unzip untar and look for the two .class in release
> directory, not very convenient I agree but very powerful tool).

It is not only inconvenient. By forcing the downloader to accept a license that disallows modification of the software, it runs completely counter to core scientific principles.

Hence I would suggest not using it until the software is available under a license compatible with science.

Ciao,
Johannes


Re: Recovering the (static) background of a series of timelapse images

perrine
Hi Johannes

First of all, good news: reading the license text tells you that "you have
the right to reproduce, to adapt, to modify, to integrate, to translate the
Software, only when such reproduction, adaptation, modification,
integration or translation is necessary for the use of the Software for the
Purpose ;" It is true that INRIA wants to keep an eye on the distribution of
their code, and I can understand if you disapprove. In any case there is
also a publication; the algorithm is not a secret, so anyone can reimplement it.

But this license is, by the way, more flexible than the Java license (from
Oracle), which does not allow any modification of the Java core, as stated
in the license associated with the ImageJ-distributed JRE.
I know there are also open projects to get a Java under the GPL (which is, I
guess, the definition of a science-compatible license), but in the meantime
why not use what is available?

However, this discussion is off topic. I hope to have the occasion to
discuss the interesting point you raise at some meeting or in a dedicated
topic (or you can post the link to the topic if it already exists:
why the GPL agrees with core scientific principles and other licenses do not).

Cheers,
Perrine


Re: Recovering the (static) background of a series of timelapse images

dscho
Hi Perrine,

On Sat, 16 Mar 2013, Perrine Paul wrote:

> 2013/3/16 Johannes Schindelin <[hidden email]>
>
> > On Sat, 16 Mar 2013, perrine wrote:
> >
> > > You can download the imageJ/Fiji binaries  from here:
> > >
> > http://serpico.rennes.inria.fr/serpico_software/registration.php?filename=hullkground-0.21-imagej.tar.gz
> > > (you will have to unzip untar and look for the two .class in release
> > > directory, not very convenient I agree but very powerful tool).
> >
> > It is not only inconvenient. By forcing the downloader to accept a
> > license that disallows modification of the software, it completely
> > disagrees with the core scientific principles.
> >
> > Hence I would suggest not to use it until the time when the software
> > is available under a license compatible with science.
>
> First of all, good news: reading the license text learns you that "you
> have the right to reproduce, to adapt, to modify, to integrate, to
> translate the Software, only when such reproduction, adaptation,
> modification, integration or translation is necessary for the use of the
> Software for the Purpose ;"

When you ask to get access to the software, you will get a mail that
contains this text before the link:

        Thank you for downloading Hullkground, before download, please
        remember that the software you are downloading is licenced. The main
        point of the licence are:- You can not use the software for
        commercial use.- You can not modify the software.- You can not
        distribute the software.- You have to cite the software's authors in
        every publication in which you use the software results as
        follow:"Results obtained in using HULLKGROUND Inria 2009, described
        in the following publication: "A. Chessel, B. Cinquin, S. Bardin, J.
        Salamero, Ch. Kervrann. COMPUTATIONAL GEOMETRY-BASED SCALE-SPACE AND
        MODAL IMAGE DECOMPOSITION: application to light video-microscopy
        imaging. In Conf. on Scale Space and Variational Methods (SSVM'09),
        Pages 770-781, Voss, Norway, 2009"

Apart from the horrible formatting, this sentence sticks out (I repeat it
here because it may be hard to spot amid the formatting):

        You can not modify the software.

> It is true that INRIA want to keep an eye on the distribution of their
> code, and I can understand if you disaprove.

I certainly do not disapprove of keeping an eye on the distribution of
scientific software! After all, we all have to prove that our research
proves useful in order to get funded.

> In any case there is also a publication, algorithm is not a secret so
> anyone can reimplement it.

Science is about reproducibility. That is why you are expected to ship
your, say, fly mutants to any interested lab, free of cost, after
publishing a finding about said mutant. Note: you are expected to ship
live animals, not instructions on how to make a vector and how to get it
into your particular wild-type flies.

If you now look at scientific software, the situation is very sad. If
life science worked that way, in the above example nobody would ship you
fly mutants. You would get either a dead mutant to sequence for yourself,
or a pointer to a paper that has the sequence (but that sequence contains
errors, because nobody ever tried to make the vector from said
description).

This is not a good way to create knowledge!

> But this license is by the way more flexible that the java license (from
> Oracle) which does not allow  any modification of the java core as stated
> in license associated with ImageJ distributed JRE.

Very sorry, I fail to see how Java, being a commercial product by a
commercially oriented company, is related to what I said about scientific
culture.

> However this is out of topic discussion, I hope to have the occasion to
> discuss the interesting point you raise in some meeting or in a specific
> topic for it (or you can put the link for the topic if it already
> exists: why GPL agrees with core scientific principles and other
> licenses do not).

Feel free to point also to

        http://developer.imagej.net/why-open-software-vital-science
        http://sciencecodemanifesto.org/discussion
        http://www.youtube.com/watch?v=lGNv0EtqC64

In the meantime, I urge every scientist to use and produce *only* code
that facilitates research (as opposed to making it hard). Even small and
unpolished macros should be published; if there is concern that the coding
style might be criticized, just use an appropriate license [*1*].

Creating knowledge is hard enough. There is no need to make it even
harder. And there is no good reason why scientists should get funding for
making other scientists' research harder, either.

Ciao,
Johannes

Footnote *1*: http://matt.might.net/articles/crapl/


Re: Recovering the (static) background of a series of timelapse images

Alan Hewat
 Johannes, I sympathise with your feeling that all scientific software
should be open, and that is what I do myself. But there are a couple
of flaws in your otherwise persuasive argument :-)

1) Scientists are not professional programmers. If they have the
source, they will not fully understand it, but will still change it,
and soon you have many different versions. Results of publications
using "that program" will not be reproducible.

2) Less idealistic people can use open software to create closed
software, often without giving credit to the original author. The
question is not whether that is legal, but, as is usually the case
in law, what are you going to do about it? Is a scientist going to
sue a rich company?

3) Big labs like INRIA (and you will have to excuse our English,
because we are in France :-) are in a sense "commercial", like lots of
commercial companies that do good science. Your dismissal of the
comparison with Java was a little too quick for me. We can only hope
that INRIA is "not getting funding to make other scientists' research
harder", any more than Oracle.

4) Finally, do "life" scientists really ship live fly mutants around
the world on request :-) As a physicist I see my obligation as limited
to describing what I did, how I did it, what I found, and what I
concluded. I don't ship raw data or samples, and my colleagues
wouldn't expect me to. Much better that they try to reproduce my
results with fresh samples and data, since mine may be corrupt.

But this is not the forum for such a debate. As an occasional ImageJ
user, I will only add that I was a little surprised to find there is
another version on my side of the Atlantic :-) Yes, it too is open
source, but I sometimes see scientists on this list get different
results with the two versions.

Alan.

______________________________________________
Dr Alan Hewat, NeutronOptics, Grenoble, FRANCE
<[hidden email]> +33.476.98.41.68
        http://www.NeutronOptics.com/hewat
______________________________________________


Re: Recovering the (static) background of a series of timelapse images

CARL Philippe (LBP)
Dear Johannes,
I have to admit that I was quite touched by your comments yesterday, and I would like to add my personal experience to point number 2 of Alan's.
Indeed, I'm the author of Punias (http://punias.free.fr), which is one of the (if not the) reference software packages in the field of AFM force spectroscopy.
I started to develop this software in 1999 (development has never stopped to this day) and profited from it through a couple of publications. After a while I thought the software could be profitable not only for my own research or the groups I was working in, but for the whole AFM community, so I decided to release it as freeware.
The freeware release required quite a lot of investment in terms of time and work (I guess you agree that software meant to be used by a whole community needs to be much cleaner, in terms of programming and crashing, than code with only a limited number of users), which I took from my personal time (instead of spending it with my wife or daughters) for the advancement of Science.
And just as I had finished implementing a new and very innovative feature, a commercial company (http://www.imagemet.com) used the fact that the software was freely available to copy some features and make money out of my ideas and work.
So what should be done in such a situation?
Sue the company for stealing ideas and work that I was giving away for free, and for which I had no protection in terms of patents or anything else (I had not even published anything about the feature)?
Also, and especially: when this theft happened I was unemployed, and in such a situation there are things that are far more important (like looking for a new job) than spending time and/or money trying to sue a company over intellectual property issues.
Since then, Punias has been distributed under a (cheap, in order to encourage Science) commercial license. And in this way I am very effectively protecting my work against commercial companies.
So how, in your view, should I have behaved in the described situation?
Have a nice week-end,
Philippe



Re: Recovering the (static) background of a series of timelapse images

perrine
In reply to this post by dscho
Hi Johannes,
Thanks for the links and the video.

> When you ask to get access to the software, you will get a mail that
> contains this text before the link:
>
>         You can not modify the software.

So here you point out an error in the generation of their email (or in the
formulation of the license file), because the license file states the
contrary (both the license described at the link I sent and the LICENSE
file in the downloaded directory). That indeed needs to be clarified. In
addition, you only get the binary, so it is not that easy to modify.
So yes, we can say that it cannot be modified.
In any case, that does not prevent its use by TheoRK or anyone with a
similar problem, if it helps: it is at least reproducible (anyone can get
the same binary) and not a magical black box (which is what I meant by
saying the algorithm was published).


>
> > But this license is by the way more flexible that the java license (from
> > Oracle) which does not allow  any modification of the java core as stated
> > in license associated with ImageJ distributed JRE.
>
> Very sorry, I fail to see how Java, being a commercial product by a
> commercially-oriented company is related to what I said about scientific
> culture.
>
What I meant is that ImageJ is built upon Java core libraries that are not
supposed to be modified: for example, a lot of plugins make use of the
random number generation accessible from the Java API (java.util.Random or
Math.random()), but you cannot modify it if you are not happy with it
(only build upon it). I understand this has started to change (OpenJDK?),
but the license file still states that it cannot be modified. Sorry to
have puzzled you, but what I wanted to say is that software can be usable
(at least for biological applications, maybe not for other software
development) and useful (and even reproducible) under a different kind of
license, and that we should be open-minded toward different points of view
and try to understand them. I'm pretty sure that Inria, which is an
institute for research in computer science, has heard about open source
and its importance.

I'm not sure that a boycott is the best way to push people to open their
code. Asking them directly why they made this choice, and arguing the
case, may work better.

And I am sorry that this discussion is not in the right place.
Cheers,
Perrine




Re: Recovering the (static) background of a series of timelapse images

dscho
In reply to this post by CARL Philippe (LBP)
Hi Carl,

On Sun, 17 Mar 2013, CARL Philippe (PHA) wrote:

While I am sympathetic to your situation, I have to point out that the
reason research findings are published is so that other scientists can
learn from them, use and improve on the results, and in particular so
that other scientists can verify them.

Scientific results need to be scrutinized. The vast majority of scientists
are honest, but not all of them. Hardly a month goes by without the
retraction of a research paper whose conclusions have been scrutinized and
found invalid. This process, while uncomfortable for all parties involved,
is necessary for science to produce knowledge.

That is why well-established scientific culture requires scientists who
publish their findings to provide -- within reasonable limits -- the
materials and methods to others, so they can try to reproduce the
findings or find flaws in the research.

Another reason why this scientific culture was established is that, in
Newton's words, scientists always stand on the shoulders of others. Put
differently: most, if not all, scientific discoveries lead to other
scientific discoveries. And that is simply much easier when the materials
and methods are made available rather than held back.

Of course, producing knowledge also opens the doors for commercial
entities to profit from said knowledge, and quite often there is a
fruitful interaction between scientists -- creating knowledge -- and
companies -- using said knowledge to offer services or products that
benefit everybody. Companies are held accountable for the quality of their
products by customers. Scientists, however, are held accountable for the
correctness of their research by other scientists verifying their results.
To foster knowledge, it is absolutely required to facilitate that
validation by other scientists rather than put unnecessary obstacles into
their way.

Ciao,
Johannes


Re: Recovering the (static) background of a series of timelapse images

ctrueden
In reply to this post by perrine
Hi Perrine,

>  the license file was stating the contrary

I read the license differently. It says:

> you have the right ... to modify ... only when such ... modification
> ... is necessary for the use of the Software for the Purpose

In other words, if the modification is *not* necessary "for the use of the
Software for the Purpose" (where "Purpose" is defined by the copyright
holder), then you cannot modify it.

And the fact that the email also states that you cannot modify the software
clearly sheds light on the intent, which is a good indicator of whether
legal action might be taken.

> I understood it has started to change (openJDK?), but the license file
> still states that it can not be modified.

Indeed, OpenJDK is freely available under the GPL, so this is no longer an
issue.

Also, this same reasoning could be applied to argue that scientific
computing should not be done on Windows, OS X, or any other non-open-source
operating system, since they rely on core functions that cannot be freely
modified or inspected. But there are layers of encapsulation involved
here—that is, the further the problem is from the actual implementation of
the algorithms, the less of a negative impact the restrictive licensing has
on the relevant scientific practice.

> it is at least reproducible (anyone can get the same binary) and not a
> magical black box (what I meant by saying the algorithm was
> published).

Reproducible: yes, as long as the binary behaves the same in the
recipient's environment. (If it doesn't, you are out of luck though, since
you don't have the source.)

However, even if the algorithm is published, a binary-only distribution is
still very black-box-like. It is not enough to simply describe the
algorithm in pseudocode, because it is too imprecise. Computer code is too
fragile for that. I have ported many algorithms between different
environments, architectures and systems, and many times there are bugs, or
differences in underlying assumptions between systems, such that the
results differ. Somehow, error gets introduced.

For example, my lab once ported a Java deconvolution algorithm to OpenCL as
faithfully as possible, but for some mysterious reason, the algorithm's
results still differed. It took a lot of investigation, but eventually we
discovered that many GPUs support different rounding methods for floating
point operations, and the default rounding method for our GPU differed from
that of Java. Once we configured our GPU to use the same rounding method as
Java does, the results matched.
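The same class of rounding mismatch can be demonstrated without a GPU. As a minimal illustration (my own, not from the thread; `Math.fma` requires Java 9 or later), the expression a*b+c gives different results depending on whether the intermediate product is rounded:

```java
public class RoundingDemo {
        public static void main(String[] args) {
                double a = 0.1, b = 10.0, c = -1.0;

                // Two roundings: a * b is rounded to the nearest double first
                // (which happens to be exactly 1.0), then c is added.
                double twoRoundings = a * b + c;

                // One rounding: fma computes a * b + c exactly and rounds once,
                // so the tiny representation error of the double 0.1 survives.
                double oneRounding = Math.fma(a, b, c);

                System.out.println(twoRoundings); // 0.0
                System.out.println(oneRounding);  // a small non-zero value (~5.6e-17)
        }
}
```

Two mathematically identical formulas, two different answers; a platform that defaults to a different rounding behavior introduces exactly this kind of discrepancy.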

My point is: science is all about minimizing error. Releasing the code is
the best way to do that, and most compatible with open scientific inquiry.
It can be much harder to reproduce or build on someone else's research if
you have to first reimplement their code from the published algorithm only.
And science is hard enough already.

Publishing an algorithm is the theory (pure mathematics). The computer code
is the practice (e.g., discrete math). And as we all know, there can be a
very big difference between theory and practice.

> I'm not sure if a boycott is the best solution to push people to open
> their code. Asking them directly why they made this choice, and arguing
> the case, may work better.

I agree with you there. It's good to keep an open dialogue.

Regards,
Curtis


On Mon, Mar 18, 2013 at 4:32 AM, Perrine Paul <[hidden email]> wrote:

> Hi Johannes,
> Thanks for the links and the video.
>
> When you ask to get access to the software, you will get a mail that
>
> > contains this text before the link:
> >
> >         You can not modify the software.
> >
> >
> So here you point out an error in the generation of their email (or the
> formulation of the license file), because the license file was stating the
> contrary (both in the license described in the link I've sent and the
> LICENSE file in the downloaded directory). That indeed needs to be
> clarified. In addition, you just get the binary, so it is not that easy to
> modify. So yes, we can say that it can not be modified.
>  In any case, that does not prevent its use by TheoRK or anyone with a
> similar problem, if it helps: it is at least reproducible (anyone can get
> the same binary) and not a magical black box (which is what I meant by
> saying the algorithm was published).
>
>
> >
> > But this license is by the way more flexible than the java license
> > (from Oracle) which does not allow any modification of the java core,
> > as stated in the license associated with the ImageJ-distributed JRE.
> >
> > Very sorry, I fail to see how Java, being a commercial product by a
> > commercially-oriented company is related to what I said about scientific
> > culture.
> >
What I meant is that ImageJ is built upon java core libraries that are not
> supposed to be modified: for example, a lot of plugins make use of the
> random number generation which is accessible from the java API
> (java.util.random or java.maths.random), but you can not modify it if you
> are not happy with it (just build upon it). I understood this has started
> to change (openJDK?), but the license file still states that it can not be
> modified. Sorry to have puzzled you, but what I wanted to say was that
> software can be exploitable (at least for biological applications, maybe
> not for other software writing) and useful (and even reproducible) with a
> different kind of license, and that we should be open-minded to different
> points of view and try to understand them. I'm pretty sure that Inria,
> which is an institute of research in computer science, has heard about
> open source and its importance.
>
> I'm not sure if a boycott is the best solution to push people to open
> their code. Asking them directly why they made this choice, and arguing
> the case, may work better.
>
> And very sorry that this discussion is not at the right place.
> Cheers,
> Perrine
>
>
>
> > > However this is out of topic discussion, I hope to have the occasion to
> > > discuss the interesting point you raise in some meeting or in a
> specific
> > > topic for it (or you can put the link for the topic if it already
> > > exists: why GPL agrees with core scientific principles and other
> > > licenses do not).
> >
> > Feel free to point also to
> >
> >         http://developer.imagej.net/why-open-software-vital-science
> >         http://sciencecodemanifesto.org/discussion
> >         http://www.youtube.com/watch?v=lGNv0EtqC64
> >
> > In the meantime, I urge every scientist to use and produce *only* code
> > that facilitates research (as opposed to making it hard). Even small and
> > unpolished macros should be published; if there is concern that the
> coding
> > style might be criticized, just use an appropriate license [*1*].
> >
> > Creating knowledge is hard enough. There is no need to make it even
> > harder. And there is no good reason why scientists should get funding for
> > making other scientists' research harder, either.
> >
> > Ciao,
> > Johannes
> >
> > Footnote *1*: http://matt.might.net/articles/crapl/
> >
>
> --
> ImageJ mailing list: http://imagej.nih.gov/ij/list.html
>


Re: Recovering the (static) background of a series of timelapse images

dscho
In reply to this post by perrine
Hi Perrine (and Theo),

On Mon, 18 Mar 2013, Perrine Paul wrote:

>  In any case, that does not prevent its use by TheoRK or anyone with a
>  similar problem, if it helps: it is at least reproducible (anyone can
>  get the same binary) and not a magical black box (what I meant by
>  saying the algorithm was published).

The problem is that it *is* a black box, after all. It might, or it might
not, implement the algorithm described in the paper. It might, or it might
not, implement it correctly.

One thing it is not: validated by other researchers.

So I spent an entire day working my way through the paper that is
considered the reference that allows you to reimplement the algorithm:

Computational geometry-based scale-space and modal image decomposition.
Application to light video-microscopy imaging, Anatole Chessel, Bertrand
Cinquin, Sabine Bardin, Jean Salamero, and Charles Kervrann.

It talks quite a lot about the alpha parameter, which the plugin does not
even ask for.

After a couple of really frustrating hours, I found that another paper's
supplementary material has a quite short, but almost infinitely more
informative description of the algorithm implemented by the Hullkground
plugin:

A Rab11A/Myosin Vb/Rab11-FIP2 Complex Frames Two Late Recycling Steps of
Langerin from the ERC to the Plasma Membrane, Alexandre Gidon, Sabine
Bardin, Bertrand Cinquin, Jerome Boulanger, François Waharte, Laurent
Heliot, Henri de la Salle, Daniel Hanau, Charles Kervrann, Bruno Goud,
Jean Salamero

What remains of the "simplified version of the α shapes-based scale-space
method" as it is described at

        http://raweb.inria.fr/rapportsactivite/RA2011/serpico/uid25.html

(and also pointing to the completely unhelpful first reference) is nothing
more than the following:

        for each pixel, extract the plot of its values along the time axis,
        use the lower convex hull of that plot as the background,
        subtract the background from the original image data to obtain
        the foreground
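For a single pixel, those three steps amount to only a few lines. The following sketch is my own illustration (class and method names are mine, not the plugin's): it computes the lower convex hull of one pixel's intensity trace with a monotone-chain scan, then linearly interpolates the hull to estimate the background.

```java
import java.util.Arrays;

public class LowerHullDemo {

        /** Frame indices on the lower convex hull of the points (t, v[t]). */
        static int[] lowerHull(float[] v) {
                int[] h = new int[v.length];
                int n = 0;
                for (int t = 0; t < v.length; t++) {
                        // Pop previous points while the new point fails to make a left turn,
                        // so the kept chain stays convex from below.
                        while (n >= 2) {
                                int t0 = h[n - 2], t1 = h[n - 1];
                                float cross = (t1 - t0) * (v[t] - v[t0]) - (v[t1] - v[t0]) * (t - t0);
                                if (cross > 0) break;
                                n--;
                        }
                        h[n++] = t;
                }
                return Arrays.copyOf(h, n);
        }

        /** Piece-wise linear interpolation of the hull = estimated background. */
        static float[] background(float[] v) {
                if (v.length < 2) return v.clone();
                int[] h = lowerHull(v);
                float[] bg = new float[v.length];
                int s = 0; // current hull segment [h[s], h[s+1]]
                for (int t = 0; t < v.length; t++) {
                        if (t > h[s + 1]) s++;
                        int t0 = h[s], t1 = h[s + 1];
                        bg[t] = v[t0] + (v[t1] - v[t0]) * (t - t0) / (float) (t1 - t0);
                }
                return bg;
        }

        public static void main(String[] args) {
                float[] trace = { 5, 3, 4, 2, 6 }; // one pixel over five frames
                float[] bg = background(trace);
                float[] fg = new float[trace.length];
                for (int t = 0; t < trace.length; t++)
                        fg[t] = trace[t] - bg[t];
                System.out.println(Arrays.toString(bg)); // [5.0, 3.0, 2.5, 2.0, 6.0]
                System.out.println(Arrays.toString(fg)); // [0.0, 0.0, 1.5, 0.0, 0.0]
        }
}
```

Note that the first and last frames always lie on the hull, so the foreground there is forced to zero; this is the boundary behavior discussed further down.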

> I'm not sure if a boycott is the best solution to push people to open
> their code. Asking them directly why they made this choice, and arguing
> the case, may work better.

Please do not misrepresent my statements as bullying. This has nothing to
do with pushing or even a boycott.

It has to do with proper scientific research where you need to understand
the details of the algorithms you use, which often requires inspection of
the implementation details even if they might have been published in a
scientific article.

In the particular case of Hullkground, there was no good description of
the algorithm in any published paper (indeed, what is described in the
Chessel paper is still not available as a plugin at all). It took a
substantial amount of time and effort for me to reproduce what the plugin
does. Only then was I in a position to assess the suitability of the
plugin for the application at hand. This time and effort was spent only
due to the fact that unlike reagents and other materials and methods used
in the Gidon paper, the source code of the plugin was not made available
to other researchers.

It is my finding that the Hullkground algorithm may not be well suited for
the original poster's case. The background subtraction is non-uniform
(e.g. the estimated background of the first and the last frame is a
uniform 0-valued image, hardly what biologists would expect). The
estimated background is piece-wise linear in the time dimension, leading
to artifacts with many quantitative processing algorithms.

Under certain circumstances, however, it might be well suited:

- the frames of interest are substantially removed from the start and the
  end of the time-lapse
- the noise must be additive (i.e. no noise may diminish the signals of
  interest)
- the objects of interest must move quickly
- the background subtraction is only used to identify the locations of the
  objects of interest, but quantitative analysis is performed on the
  original pixel values.

To prevent other researchers from having to spend the same amount of time
as I had to, here is the source code I came up with (whose results agree
with those of the original Hullkground plugin except in the case of
5-dimensional images -- on which the original plugin produces incorrect
results, probably due to a faulty implementation) -- feel free to use
and/or extend it, in which case I would like to suggest citing the Gidon
paper and Schneider CA, Rasband WS, Eliceiri KW: NIH Image to ImageJ: 25
years of image analysis:

-- snipsnap --
import ij.CompositeImage;
import ij.IJ;
import ij.ImagePlus;
import ij.ImageStack;
import ij.WindowManager;

import ij.measure.Calibration;

import ij.plugin.filter.PlugInFilter;

import ij.process.ImageProcessor;
import ij.process.StackConverter;


/**
 * Open Source reimplementation of the Hullkground plugin
 *
 * The idea has little to do with alpha shapes. It is simply
 * estimating the background as the pixel-wise lower convex
 * hull over the temporal axis.
 *
 * Author: Johannes Schindelin
 */
public class My_Hull_Background implements PlugInFilter {
        private ImagePlus image;

        @Override
        public int setup(final String arg, final ImagePlus imp) {
                // Use the passed image; fall back to the current one, and abort
                // cleanly (DONE) instead of crashing on a missing image.
                image = imp != null ? imp : WindowManager.getCurrentImage();
                if (image == null) {
                        IJ.error("No image open");
                        return DONE;
                }
                if (image.getNFrames() < 3) {
                        IJ.error("Need at least 3 frames");
                        return DONE;
                }
                return DOES_8G | DOES_16 | DOES_32 | NO_CHANGES;
        }

        @Override
        public void run(final ImageProcessor ip) {
                final ImagePlus[] result = process(this.image);
                for (final ImagePlus image : result) {
                        image.show();
                }
        }

        public ImagePlus[] process(ImagePlus image) {
                if (image.getType() != ImagePlus.GRAY32) {
                        new StackConverter(image).convertToGray32();
                }

                final int width = image.getWidth();
                final int height = image.getHeight();
                final int nChannels = image.getNChannels();
                final int nSlices = image.getNSlices();
                final int nFrames = image.getNFrames();

                final ImageStack stack = image.getStack();
                final ImageStack result = new ImageStack(width, height, nChannels * nSlices * nFrames);
                final ImageStack backgroundResult = new ImageStack(width, height, nChannels * nSlices * nFrames);
                float[][] pixels = new float[nFrames][];
                float[][] resultPixels = new float[nFrames][];
                float[][] backgroundPixels = new float[nFrames][];
                int[] hull = new int[nFrames];
                for (int slice = 0; slice < nSlices; slice++) {
                        for (int channel = 0; channel < nChannels; channel++) {
                                IJ.showProgress(channel + nChannels * slice, nChannels * nSlices);
                                for (int frame = 0; frame < nFrames; frame++) {
                                        int index = image.getStackIndex(channel + 1, slice + 1, frame + 1);
                                        pixels[frame] = (float[])stack.getProcessor(index).getPixels();
                                        resultPixels[frame] = new float[width * height];
                                        result.setPixels(resultPixels[frame], index);
                                        backgroundPixels[frame] = new float[width * height];
                                        backgroundResult.setPixels(backgroundPixels[frame], index);
                                }

                                for (int y = 0; y < height; y++) {
                                        for (int x = 0; x < width; x++) {
                                                int index = x + width * y;
                                                calculateLowerHull(index, pixels, hull);

                                                int segment = 1;
                                                int frame0 = hull[0];
                                                float y0 = pixels[frame0][index];
                                                int frame1 = hull[1];
                                                float y1 = pixels[frame1][index];
                                                for (int frame = 0; frame < nFrames; frame++) {
                                                        if (frame > frame1) {
                                                                frame0 = frame1;
                                                                y0 = y1;
                                                                frame1 = hull[++segment];
                                                                y1 = pixels[frame1][index];
                                                        }
                                                        backgroundPixels[frame][index] = y0 + (y1 - y0) * (frame - frame0) / (frame1 - frame0);
                                                        resultPixels[frame][index] = pixels[frame][index] - backgroundPixels[frame][index];
                                                }
                                        }
                                }
                        }
                }

                final String title = image.getTitle();
                final ImagePlus resultImage = new ImagePlus("Foreground of " + title, result);
                resultImage.setDimensions(nChannels, nSlices, nFrames);
                final ImagePlus backgroundImage = new ImagePlus("Background of " + title, backgroundResult);
                backgroundImage.setDimensions(nChannels, nSlices, nFrames);

                final Calibration calibration = image.getCalibration();
                if (calibration != null) {
                        resultImage.setCalibration(calibration);
                        backgroundImage.setCalibration(calibration);
                }

                return new ImagePlus[] { new CompositeImage(resultImage), new CompositeImage(backgroundImage) };
        }

        /*
         * pixels is an array of frames
         * index corresponds to the (x,y) pixel position in the pixels
         * arrays
         * hull will contain frame numbers of the lower hull
         */
        private void calculateLowerHull(final int index, final float[][] pixels, final int[] hull) {
                int i = 0;
                hull[i++] = 0;
                hull[i++] = 1;
                for (int frame = 2; frame < pixels.length; frame++) {
                        // would the next segment angle down?
                        float y2 = pixels[frame][index];
                        while (i > 1) {
                                int frame0 = hull[i - 2];
                                float y0 = pixels[frame0][index];
                                int frame1 = hull[i - 1];
                                float y1 = pixels[frame1][index];

                                if ((y1 - y0) * (frame - frame1) + (frame1 - frame0) * (y1 - y2) < 0) {
                                        break;
                                }
                                i--;
                        }
                        hull[i++] = frame;
                }
        }
}


Re: Recovering the (static) background of a series of timelapse images

Alan Hewat
> So I spent an entire day working my way through the paper...

And you still had time for helpful comments about a range of other subjects :-)

I am not sure that your experience with this particular paper proves
your thesis that all scientific software must be open source. Indeed
you demonstrate that given a sufficient description of a scientific
method, it is possible to reproduce the result. You would say that a
sufficient description is no less than the source code, but your
experience disproves that. Indeed your re-implementation of the
algorithm is more valuable than picking through someone else's code -
and much more amusing.

> After a couple of really frustrating hours...

OK, science wasn't meant to be easy :-) What you discovered in this
particular case appears to be a rather simple algorithm - finding the
lowest intensity for each pixel over time and taking that as the
background. I suppose if the image is faint and noisy, you might need
to fit those points to some kind of 2D smoothing function, so it may
not be quite as trivial as it appears. And your test for reproducing
the algorithm may depend on comparison with a number of real data
sets, in particular noisy data.

I just don't see how you can categorically affirm that all useful
scientific software must be open source, when there are so many
examples that disprove it. Picking on a particular paper and pointing
out its faults is fine, but so what?

I suppose that the majority of ImageJ users don't really care if the
core source code is available or not. What is important is that it is
"open" to the extent that people can write extensions for things
unforeseen by the original authors. Most scientists (there are a few
exceptions :-) will not go through the code to check its validity -
their expertise lies elsewhere. They will look at the result and
decide if it is credible in the light of other results obtained by
other people with other methods.

Alan

On 18 March 2013 21:55, Johannes Schindelin <[hidden email]> wrote:

> [full quoted message and source code snipped; quoted in its entirety above]
>                 int i = 0;
>                 hull[i++] = 0;
>                 hull[i++] = 1;
>                 for (int frame = 2; frame < pixels.length; frame++) {
>                         // would the next segment angle down?
>                         float y2 = pixels[frame][index];
>                         while (i > 1) {
>                                 int frame0 = hull[i - 2];
>                                 float y0 = pixels[frame0][index];
>                                 int frame1 = hull[i - 1];
>                                 float y1 = pixels[frame1][index];
>
>                                 if ((y1 - y0) * (frame - frame1) + (frame1 - frame0) * (y1 - y2) < 0) {
>                                         break;
>                                 }
>                                 i--;
>                         }
>                         hull[i++] = frame;
>                 }
>         }
> }
______________________________________________
Dr Alan Hewat, NeutronOptics, Grenoble, FRANCE
<[hidden email]> +33.476.98.41.68
        http://www.NeutronOptics.com/hewat
______________________________________________

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html
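The per-pixel estimate computed by the code above can be reproduced outside ImageJ. The sketch below (class and method names are mine, not from the plugin) takes one pixel's intensity profile over time, finds its lower convex hull, and linearly interpolates between the hull vertices to obtain the background, mirroring the logic of calculateLowerHull plus the interpolation loop:

```java
// Illustrative, ImageJ-free sketch of the per-pixel background estimate
// used by the Hullkground reimplementation above: for one pixel's
// intensity profile over time, take the lower convex hull and
// linearly interpolate between its vertices.
public class LowerHullDemo {

    // Returns the frame indices of the lower convex hull of the series.
    // Like the plugin, this assumes at least three frames.
    static int[] lowerHull(float[] series) {
        int[] hull = new int[series.length];
        int n = 0;
        hull[n++] = 0;
        hull[n++] = 1;
        for (int t = 2; t < series.length; t++) {
            // Pop vertices while adding the new point keeps the hull convex from below.
            while (n > 1) {
                int t0 = hull[n - 2], t1 = hull[n - 1];
                float y0 = series[t0], y1 = series[t1], y2 = series[t];
                if ((y1 - y0) * (t - t1) + (t1 - t0) * (y1 - y2) < 0) break;
                n--;
            }
            hull[n++] = t;
        }
        return java.util.Arrays.copyOf(hull, n);
    }

    // Background = linear interpolation between consecutive hull vertices.
    static float[] background(float[] series) {
        int[] hull = lowerHull(series);
        float[] bg = new float[series.length];
        int seg = 1;
        int t0 = hull[0], t1 = hull[1];
        for (int t = 0; t < series.length; t++) {
            if (t > t1) {
                t0 = t1;
                t1 = hull[++seg];
            }
            bg[t] = series[t0] + (series[t1] - series[t0]) * (float) (t - t0) / (t1 - t0);
        }
        return bg;
    }

    public static void main(String[] args) {
        // A flat background of 10 with a transient spike at frame 2.
        float[] series = { 10, 10, 50, 10, 10 };
        for (float v : background(series)) System.out.println(v);
    }
}
```

For the series {10, 10, 50, 10, 10} the hull vertices are frames 0 and 4, so the estimated background is a flat 10 and the subtraction keeps only the spike.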

Re: Recovering the (static) background of a series of timelapse images

bob Woolery
Unless independent replication is given up as a lost cause, the methods section of a paper should make it possible to run the same program on the same raw data and observe the same (claimed) result, and then to collect fresh data that can be put through the same program with some confidence that the results will be comparable, whether they confirm, refute, or complicate the original paper's conclusions.
  When replication would require commercial (MATLAB) or specially written closed-source software, proper disclosure simply has not been attempted.
JMO,

bob Woolery, DC
stateoftheartchiro.com
326 deAnza dr
Vallejo, CA 94589
(707) 557-5471

-----Original Message-----
From: ImageJ Interest Group [mailto:[hidden email]] On Behalf Of Alan Hewat
Sent: Monday, March 18, 2013 5:30 PM
To: [hidden email]
Subject: Re: Recovering the (static) background of a series of timelapse images

> So I spent the complete day to work my way through the paper...

And you still had time for helpful comments about a range of other subjects :-)

I am not sure that your experience with this particular paper proves
your thesis that all scientific software must be open source. Indeed
you demonstrate that given a sufficient description of a scientific
method, it is possible to reproduce the result. You would say that a
sufficient description is no less than the source code, but your
experience disproves that. Indeed your re-implementation of the
algorithm is more valuable than picking through someone else's code -
and much more amusing.

> After a couple of really frustrating hours...

OK, science wasn't meant to be easy :-) What you discovered in this
particular case appears to be a rather simple algorithm - finding the
lowest intensity for each pixel over time and taking that as the
background. I suppose if the image is faint and noisy, you might need
to fit those points to some kind of 2D smoothing function, so it may
not be quite as trivial as it appears. And your test for reproducing
the algorithm may depend on comparison with a number of real data
sets, in particular noisy data.

I just don't see how you can categorically affirm that all useful
scientific software must be open source when there are so many
examples that disprove it. Picking on a particular paper and pointing
out its faults is fine, but so what?

I suppose that the majority of ImageJ users don't really care if the
core source code is available or not. What is important is that it is
"open" to the extent that people can write extensions for things
unforeseen by the original authors. Most scientists (there are a few
exceptions :-) will not go through the code to check its validity -
their expertise lies elsewhere. They will look at the result and
decide if it is credible in the light of other results obtained by
other people with other methods.

Alan
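The per-pixel "lowest intensity over time" idea mentioned above (and the min/median projection suggested earlier in the thread) can be sketched without ImageJ; frames are modelled here as flat float arrays of equal length, and all names are illustrative:

```java
// Minimal sketch of estimating a static background as the
// per-pixel minimum intensity over all frames.
public class MinProjection {

    static float[] minProjection(float[][] frames) {
        float[] bg = frames[0].clone();
        for (int f = 1; f < frames.length; f++)
            for (int i = 0; i < bg.length; i++)
                if (frames[f][i] < bg[i]) bg[i] = frames[f][i];
        return bg;
    }

    public static void main(String[] args) {
        // Two 2x2 frames: a bright object moves from pixel 0 to pixel 3,
        // so the minimum over time recovers the static values beneath it.
        float[][] frames = {
            { 90, 10, 20, 30 },
            { 10, 10, 20, 90 },
        };
        System.out.println(java.util.Arrays.toString(minProjection(frames)));
    }
}
```

This only works when every background pixel is uncovered in at least one frame, and, as noted later in the thread, a minimum is biased low on noisy data.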

On 18 March 2013 21:55, Johannes Schindelin <[hidden email]> wrote:

> Hi Perrine (and Theo),
>
> On Mon, 18 Mar 2013, Perrine Paul wrote:
>
>>  In any case, that does not prevent its use by TheoRK or anyone with
>>  a similar problem if it helps: it is at least reproducible (anyone can
>>  get the same binary) and not a magical black box (which is what I meant
>>  by saying the algorithm was published).
>
> The problem is that it *is* a black box, after all. It might, or it might
> not, implement the algorithm described in the paper. It might, or it might
> not, implement it correctly.
>
> One thing it is not: validated by other researchers.
>
> So I spent the complete day to work my way through the paper that is
> considered the reference that allows you to reimplement the algorithm:
>
> Computational geometry-based scale-space and modal image decomposition.
> Application to light video-microscopy imaging, Anatole Chessel, Bertrand
> Cinquin, Sabine Bardin, Jean Salamero, and Charles Kervrann.
>
> It talks quite a lot about the alpha parameter which the plugin does not
> even ask for.
>
> After a couple of really frustrating hours, I found that another paper's
> supplementary material has a quite short, but almost infinitely more
> informative description of the algorithm implemented by the Hullkground
> plugin:
>
> A Rab11A/Myosin Vb/Rab11-FIP2 Complex Frames Two Late Recycling Steps of
> Langerin from the ERC to the Plasma Membrane, Alexandre Gidon, Sabine
> Bardin, Bertrand Cinquin, Jerome Boulanger, François Waharte, Laurent
> Heliot, Henri de la Salle, Daniel Hanau, Charles Kervrann, Bruno Goud,
> Jean Salamero
>
> What remains of the "simplified version of the α shapes-based scale-space
> method" as it is described at
>
>         http://raweb.inria.fr/rapportsactivite/RA2011/serpico/uid25.html
>
> (and also pointing to the completely unhelpful first reference) is nothing
> else than the following:
>
>         for each pixel, extract a plot of values along the time axis,
>         use the lower convex hull as background,
>         subtract the background from the original image data to obtain the
> foreground
>
>> I'm not sure if boycott is the best solution to push people opening
>> their code. Asking them directly why they did this choice and
>> argumentation may work better.
>
> Please do not misrepresent my statements as bullying. This has nothing to
> do with pushing or even a boycott.
>
> It has to do with proper scientific research where you need to understand
> the details of the algorithms you use, which often requires inspection of
> the implementation details even if they might have been published in a
> scientific article.
>
> In the particular case of Hullkground, there was no good description of
> the algorithm in any published paper (indeed, what is described in the
> Chessel paper is still not available as plugin at all). It took a
> substantial amount of time and effort for me to reproduce what the plugin
> does. Only then was I in a position to assess the suitability of the
> plugin for the application at hand. This time and effort was spent only
> due to the fact that unlike reagents and other materials and methods used
> in the Gidon paper, the source code of the plugin was not made available
> to other researchers.
>
> It is my finding that the Hullkground algorithm may not be well suited for
> the original poster's case. The background subtraction is non-uniform
> (e.g. the first and the last frame's estimated background is a uniform
> 0-valued image, hardly what biologists would expect). The estimated
> background is piece-wise linear in the time dimension, leading to
> artifacts with many quantitative processing algorithms.
>
> Under certain circumstances, however, it might be well suited:
>
> - the frames of interest are substantially removed from the start and the
>   end of the time-lapse
> - the noise must be additive (i.e. no noise may diminish the signals of
>   interest)
> - the objects of interest must move quickly
> - the background subtraction is only used to identify the locations of the
>   objects of interest, but quantitative analysis is performed on the
>   original pixel values.
>
> To prevent other researchers from having to spend the same amount of time
> as I had to, here is the source code I came up with (whose results agree
> with the original Hullkground plugin except in case of 5-dimensional
> images -- on which the original plugin produces incorrect results,
> probably due to a faulty implementation) -- feel free to use and/or extend
> it, in which case I would like to suggest to cite the Gidon paper and
> Schneider CA, Rasband WS, Eliceiri KW: NIH Image to ImageJ: 25 years of
> image analysis:
>
> -- snipsnap --
> import ij.CompositeImage;
> import ij.IJ;
> import ij.ImagePlus;
> import ij.ImageStack;
> import ij.WindowManager;
>
> import ij.gui.GenericDialog;
>
> import ij.measure.Calibration;
>
> import ij.plugin.filter.PlugInFilter;
>
> import ij.process.ImageProcessor;
> import ij.process.StackConverter;
>
>
> /**
>  * Open Source reimplementation of the Hullkground plugin
>  *
>  * The idea has little to do with alpha shapes. It is simply
>  * estimating the background as the pixel-wise lower convex
>  * hull over the temporal axis.
>  *
>  * Author: Johannes Schindelin
>  */
> public class My_Hull_Background implements PlugInFilter {
>         private ImagePlus image;
>
>         @Override
>         public int setup(final String arg, final ImagePlus imp) {
>                 image = imp;
>                 if (image == null) {
>                         image = WindowManager.getCurrentImage();
>                 }
>                 if (image == null || image.getNFrames() < 3) {
>                         IJ.error("Need at least 3 frames");
>                         return DONE;
>                 }
>                 return DOES_8G | DOES_16 | DOES_32 | NO_CHANGES;
>         }
>
>         @Override
>         public void run(final ImageProcessor ip) {
>                 final ImagePlus[] result = process(this.image);
>                 for (final ImagePlus image : result) {
>                         image.show();
>                 }
>         }
>
>         public ImagePlus[] process(ImagePlus image) {
>                 if (image.getType() != ImagePlus.GRAY32) {
>                         new StackConverter(image).convertToGray32();
>                 }
>
>                 final int width = image.getWidth();
>                 final int height = image.getHeight();
>                 final int nChannels = image.getNChannels();
>                 final int nSlices = image.getNSlices();
>                 final int nFrames = image.getNFrames();
>
>                 final ImageStack stack = image.getStack();
>                 final ImageStack result = new ImageStack(width, height, nChannels * nSlices * nFrames);
>                 final ImageStack backgroundResult = new ImageStack(width, height, nChannels * nSlices * nFrames);
>                 float[][] pixels = new float[nFrames][];
>                 float[][] resultPixels = new float[nFrames][];
>                 float[][] backgroundPixels = new float[nFrames][];
>                 int[] hull = new int[nFrames];
>                 for (int slice = 0; slice < nSlices; slice++) {
>                         for (int channel = 0; channel < nChannels; channel++) {
>                                 IJ.showProgress(channel + nChannels * slice, nChannels * nSlices);
>                                 for (int frame = 0; frame < nFrames; frame++) {
>                                         int index = image.getStackIndex(channel + 1, slice + 1, frame + 1);
>                                         pixels[frame] = (float[])stack.getProcessor(index).getPixels();
>                                         resultPixels[frame] = new float[width * height];
>                                         result.setPixels(resultPixels[frame], index);
>                                         backgroundPixels[frame] = new float[width * height];
>                                         backgroundResult.setPixels(backgroundPixels[frame], index);
>                                 }
>
>                                 for (int y = 0; y < height; y++) {
>                                         for (int x = 0; x < width; x++) {
>                                                 int index = x + width * y;
>                                                 calculateLowerHull(index, pixels, hull);
>
>                                                 int segment = 1;
>                                                 int frame0 = hull[0];
>                                                 float y0 = pixels[frame0][index];
>                                                 int frame1 = hull[1];
>                                                 float y1 = pixels[frame1][index];
>                                                 for (int frame = 0; frame < nFrames; frame++) {
>                                                         if (frame > frame1) {
>                                                                 frame0 = frame1;
>                                                                 y0 = y1;
>                                                                 frame1 = hull[++segment];
>                                                                 y1 = pixels[frame1][index];
>                                                         }
>                                                         backgroundPixels[frame][index] = y0 + (y1 - y0) * (frame - frame0) / (frame1 - frame0);
>                                                         resultPixels[frame][index] = pixels[frame][index] - backgroundPixels[frame][index];
>                                                 }
>                                         }
>                                 }
>                         }
>                 }
>
>                 final String title = image.getTitle();
>                 final ImagePlus resultImage = new ImagePlus("Foreground of " + title, result);
>                 resultImage.setDimensions(nChannels, nSlices, nFrames);
>                 final ImagePlus backgroundImage = new ImagePlus("Background of " + title, backgroundResult);
>                 backgroundImage.setDimensions(nChannels, nSlices, nFrames);
>
>                 final Calibration calibration = image.getCalibration();
>                 if (calibration != null) {
>                         resultImage.setCalibration(calibration);
>                         backgroundImage.setCalibration(calibration);
>                 }
>
>                 return new ImagePlus[] { new CompositeImage(resultImage), new CompositeImage(backgroundImage) };
>         }
>
>         /*
>          * pixels is an array of frames
>          * index corresponds to the (x,y) pixel position in the pixels
>          * arrays
>          * hull will contain frame numbers of the lower hull
>          */
>         private void calculateLowerHull(final int index, final float[][] pixels, final int[] hull) {
>                 int i = 0;
>                 hull[i++] = 0;
>                 hull[i++] = 1;
>                 for (int frame = 2; frame < pixels.length; frame++) {
>                         // would the next segment angle down?
>                         float y2 = pixels[frame][index];
>                         while (i > 1) {
>                                 int frame0 = hull[i - 2];
>                                 float y0 = pixels[frame0][index];
>                                 int frame1 = hull[i - 1];
>                                 float y1 = pixels[frame1][index];
>
>                                 if ((y1 - y0) * (frame - frame1) + (frame1 - frame0) * (y1 - y2) < 0) {
>                                         break;
>                                 }
>                                 i--;
>                         }
>                         hull[i++] = frame;
>                 }
>         }
> }


Re: Recovering the (static) background of a series of timelapse images

Anatole Chessel
In reply to this post by TheoRK
Hi all,


    While I am happy and proud to be, indirectly, the cause of a discussion on open source in science (of which I am an unconditional defender), please allow me to comment on a couple of points:
    - the licence under which Hullkground was released is not of my choosing but INRIA's, my employer at the time. Had I any choice in the matter, the thing would have been GPLed from the start (and would not have a licence agreement to sign that is twice the size of the code). To be perfectly honest (and as I do not work for INRIA anymore), I toyed at some point with the idea of recoding it in my spare time to build a version that could be released (and, incidentally, that could actually be used by people), because I do not like closed-source (or close-ish, as I think the INRIA licence is still somewhat open) science code any more than you do.
    - I have not had time to look into the recoded version, but a few comments:
      - yes, the Hullkground plugin is quite simple, and is just the convex hull in time. It does have its issues, as all algorithms do, as pointed out by Johannes, but it can help in some cases. In particular it may indeed not be suited to more quantitative uses. If I may, a comparison of background decomposition algorithms for video microscopy that reached the same conclusion was performed in "Anatole Chessel, Thierry Pécot, Sabine Bardin, Charles Kervrann, Jean Salamero: Evaluation of Image Sequences Additive Decomposition Algorithms for Membrane Analysis in Fluorescence Video-Microscopy. ISBI 2009: 1099-1102".
      - The code corresponding to the SSVM paper, on actual alpha-shapes (Hullkground being the special case of alpha going to infinity), was not released or used much because it is not an ImageJ plugin but a C++ stand-alone built against CGAL (http://www.cgal.org/), which is a bit of a pain to work with and compile against, imho. And as you said, the problem of choosing alpha is not quite trivial in practice, which is why we stuck to the convex hull for practical applications. Can I still ask for the SSVM paper to be cited somewhere? Or maybe the 2009 ISBI paper above? (Anything with my name on it :) ) More generally, there is also the issue of where to publish practically minded code (as Hullkground is, unlike the alpha-scale-space it is based on): it is not theoretical enough for the image analysis community, but too small for biology methods journals. That is another issue altogether...

  I think I have talked enough for this time. Sorry for costing you a day, Johannes; if I can be of any help now, just tell me...

Anatole


Re: Recovering the (static) background of a series of timelapse images

Graeme Ball
In reply to this post by TheoRK
Hi,

This may be a bit late, but the question motivated me to revisit some old code. It seems as though you are trying to segment moving features in an image sequence with poor contrast? (Apologies if I have that wrong.) If so, identifying the moving foreground should enable segmentation.

I wrote a temporal median filter with soft thresholding as part of a project to track microtubule +ends a few years back, and I have spent the last day or two translating the MATLAB code into an ImageJ1 plugin.
It's a bit of a work in progress, but if anyone wants to try it out, you can find it here:
  http://www.micron.ox.ac.uk/microngroup/software/Temporal_plugins.jar
  https://github.com/graemeball/IJ_Temporal

Caveats:
1. there is a problem with hyperstack dimension ordering (I should be able to fix it, but for now split the channels)
2. the result is suitable for subsequent filtering and segmentation, but is no longer a real image
3. the code is rather rough, and does not support multithreading

Best Regards,

Graeme

--
---------------------------
Dr. Graeme Ball
Department of Biochemistry
University of Oxford
South Parks Road
Oxford OX1 3QU
Phone +44-1865-613-359
---------------------------

________________________________________
From: ImageJ Interest Group [[hidden email]] on behalf of TheoRK [[hidden email]]
Sent: 14 March 2013 17:39
To: [hidden email]
Subject: Recovering the (static) background of a series of timelapse images

Hi ImageJ List,

I am kind of desperate, since I do not even know the right words to search
with and it seems ImageJ does not seem to have this function by default:

I want to extract a background image from a series of timelapse images, to
substract the quite uneven but not moving background. I tried remove
background (the rolling ball one), I tried the ROI background substractor.
Both get rid of some of the noise, but not of the things that where
somewhere out of focus and are not moving, but messing up the picture. The
microscope has a shading correction (and I know ImageJ can do that too), but
I have to extract an image that only contains the non-moving parts first...

Can't be that I am the first person that tried that before?



--
View this message in context: http://imagej.1557.n6.nabble.com/Recovering-the-static-background-of-a-series-of-timelapse-images-tp5002169.html
Sent from the ImageJ mailing list archive at Nabble.com.

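As a rough, single-pixel illustration of the temporal median approach Graeme describes, here is an ImageJ-free sketch; the window size and the handling of the series ends are illustrative choices, not taken from his plugin:

```java
import java.util.Arrays;

// Temporal median background for one pixel's time series: each frame's
// background is the median of a centred temporal window, which suppresses
// brief transients while tracking slow background changes.
public class TemporalMedian {

    // Median over a window of +/- radius frames, clipped at the series ends.
    static float[] medianBackground(float[] series, int radius) {
        float[] bg = new float[series.length];
        for (int t = 0; t < series.length; t++) {
            int lo = Math.max(0, t - radius);
            int hi = Math.min(series.length - 1, t + radius);
            float[] win = Arrays.copyOfRange(series, lo, hi + 1);
            Arrays.sort(win);
            bg[t] = win[win.length / 2]; // upper median for even-sized windows
        }
        return bg;
    }

    public static void main(String[] args) {
        // Flat background of 5 with a brief spike at frame 2.
        float[] series = { 5, 5, 20, 5, 5, 5 };
        System.out.println(Arrays.toString(medianBackground(series, 2)));
    }
}
```

For the series {5, 5, 20, 5, 5, 5} with radius 2 the estimated background is a flat 5, so subtracting it isolates the transient; unlike the lower-hull estimate, the median is robust to noise dipping below the true background, at the cost of blurring events longer than the window.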