How does Multiview-Reconstruction compare to Zeiss?


How does Multiview-Reconstruction compare to Zeiss?

Mario Emmenlauer-3
Dear All,

I'm trying to automate the workflow from a Zeiss Lightsheet Z.1 to
the registered/fused/deconvolved volume. Previously we used the Zeiss
software, but I understand that it cannot easily be scripted and/or
distributed on a cluster?

What is the best replacement that is scriptable and cluster-runnable?
I've found http://fiji.sc/Multiview-Reconstruction (MR) and it looks
very promising, but how exactly does it compare to the Zeiss software?
I understand Zeiss does not require beads, but MR does? And our workflow
uses DualSide Fusion; is that supported with MR? Finally, how do they
compare in terms of the quality of the result: is one or the other
software "better" in any aspect by a reasonable margin, or are they
on par?

Thanks for feedback, and all the best,

   Mario




--
Mario Emmenlauer BioDataAnalysis             Mobil: +49-(0)151-68108489
Balanstrasse 43                    mailto: mario.emmenlauer * unibas.ch
D-81669 München                          http://www.biodataanalysis.de/


Assemble xyt overlap stacks

Knecht, David
I have a series of xyt stacks collected in Micro-Manager at overlapping xy positions.  I want to assemble them into a single large stack with all positions.  What is the best plugin to do that?  Thanks- Dave

Dr. David Knecht
Professor of Molecular and Cell Biology
Core Microscopy Facility Director
University of Connecticut
Storrs, CT 06269
860-486-2200


Re: Assemble xyt overlap stacks

Saalfeld, Stephan
If I understand you correctly, t is not overlapping, i.e., you would
like to create a time series that shows when different spots were
imaged?

You could do this in TrakEM2.  Save your images as a series of TIF
files, then generate an import.txt file with one row per frame in the format:

<frame_file_name> <x_position> <y_position> <t_position>

e.g.

frame_x000y000t000.tif 0 0 0
frame_x000y000t001.tif 0 0 1
frame_x000y000t002.tif 0 0 2

...

frame_x001y000t000.tif 1000 0 100
frame_x001y000t001.tif 1000 0 101
frame_x001y000t002.tif 1000 0 102

...

frame_x04y003t000.tif 4000 3000 2300
frame_x04y003t001.tif 4000 3000 2301
frame_x04y003t002.tif 4000 3000 2302

...


Excel or OpenOffice Calc are great tools to generate such import files.
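
A few lines of a script work just as well; here is a minimal sketch in
Python (the tile spacing, the tile/frame counts and the file-name pattern
are placeholders that would need to be adapted to your data, and the
layer index here simply equals the frame number):

# Minimal sketch: write a TrakEM2 import.txt with one row per frame.
# TILE_W/TILE_H, the counts and the naming scheme are placeholders only.
TILE_W, TILE_H = 1000, 1000   # tile spacing in pixels (x, y)
N_X, N_Y, N_T = 5, 4, 100     # tiles in x, tiles in y, frames in t

with open("import.txt", "w") as out:
    for ty in range(N_Y):
        for tx in range(N_X):
            for t in range(N_T):
                name = "frame_x%03dy%03dt%03d.tif" % (tx, ty, t)
                # columns: file_name x_position y_position t_position (layer)
                out.write("%s %d %d %d\n" % (name, tx * TILE_W, ty * TILE_H, t))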

If I got your question wrong and t is the same for all stacks, use the
Stitching plugin:

http://fiji.sc/Image_Stitching


Best,
Stephan



Re: Assemble xyt overlap stacks

Knecht, David
The stacks were all collected at similar enough times that they are essentially synchronous.  I don't want to track any spots, I just want a single stack to analyze further in which I have registered the images using the overlap, as you would do in Align Images.   Dave



Dr. David Knecht
Professor of Molecular and Cell Biology
Core Microscopy Facility Director
University of Connecticut
Storrs, CT 06269
860-486-2200


Re: Assemble xyt overlap stacks

Saalfeld, Stephan
Hi David,

OK, so t is equal for all frames at the same stack index.  There are two
options:

1. You can use the Stitching plugin and either automatically align and
stitch the stacks or directly render them using the coordinates that you
provide (see the sketch of a coordinate file below these two options).
The wiki explains how to do this.  This requires that each individual
stack has no spatial jitter, i.e. your microscope found the exact same
spot at each frame.

2. If your microscope isn't that precise and the stacks jitter, you
could import the stacks into TrakEM2 as explained in the earlier mail
(but with the correct frame index).  Then you could use TrakEM2's
multi-layer mosaic alignment to get everything montaged and aligned
jointly, in the x,y plane and across time.
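
For the coordinate-based variant of option 1, the Stitching plugin can
read tile positions from a text file.  Here is a minimal sketch of
generating such a file in Python; the "dim = 3" / "filename; ; (x, y, z)"
layout is how I remember the plugin's "positions from file" format, and
the grid size, spacing and file names are placeholders, so please check
against the wiki page:

# Minimal sketch: write a TileConfiguration.txt for the Stitching plugin
# ("positions from file" mode).  All numbers and names are placeholders.
TILE_W, TILE_H = 1000, 1000   # nominal tile spacing in pixels
N_X, N_Y = 5, 4               # tile positions in x and y

with open("TileConfiguration.txt", "w") as out:
    out.write("# Define the number of dimensions we are working on\n")
    out.write("dim = 3\n\n")
    out.write("# Define the image coordinates\n")
    for ty in range(N_Y):
        for tx in range(N_X):
            # each stack is placed at its nominal (x, y) offset, third axis 0
            out.write("stack_x%02dy%02d.tif; ; (%.1f, %.1f, 0.0)\n"
                      % (tx, ty, tx * TILE_W, ty * TILE_H))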

Best,
Stephan





Re: How does Multiview-Reconstruction compare to Zeiss?

Pavel Tomancak
In reply to this post by Mario Emmenlauer-3
Dear Mario,

> I'm trying to automate the workflow from a Zeiss Lightsheet Z.1 to
> the registered/fused/deconvolved volume. Previously we used the Zeiss
> software, but I understand that this can not easily be scripted and/or
> distributed on a cluster?
>
> What is the best replacement that is scriptable and cluster-runnable?

In addition to what you already found, there is also this web page

http://fiji.sc/SPIM_Registration_on_cluster

which details how to run Fiji's SPIMage processing pipelines on a cluster (registration, fusion, deconvolution and HDF5 saving).
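
The basic idea is simply one headless Fiji job per timepoint; a minimal
sketch of that loop in Python is below (the Fiji path, the processing
script and its parameter are placeholders, the real invocation and the
per-step scripts are described on the wiki page):

# Minimal sketch of the per-timepoint parallelisation: one headless Fiji
# job per timepoint.  FIJI, SCRIPT and the parameter passing are
# placeholders - see the SPIM_Registration_on_cluster page for the real
# scripts and the cluster submission commands (bsub/qsub/sbatch).
import subprocess

FIJI = "/path/to/Fiji.app/ImageJ-linux64"   # placeholder path
SCRIPT = "/path/to/process_timepoint.bsh"   # placeholder per-timepoint script
TIMEPOINTS = range(1, 241)                  # placeholder timepoint range

for tp in TIMEPOINTS:
    # on a real cluster this call would be wrapped in a job submission
    cmd = [FIJI, "--headless", SCRIPT, "timepoint=%d" % tp]
    subprocess.call(cmd)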

> I've found http://fiji.sc/Multiview-Reconstruction (MR) and it looks
> very promising, but how does it exactly compare to the Zeiss software?

We worked very closely with Zeiss when developing the Fiji plugins. The Zeiss software is to a large degree based on our published research:

Preibisch S., Saalfeld S., Schindelin J., Tomancak P. (2010) Software for bead-based registration of selective plane illumination microscopy data. Nature Methods 7:418-419

In ZEN it has been implemented independently by Zeiss developers, and since it is not published and the source code is not available, I cannot say how similar it is to our implementation when it comes to details. The general approach is similar. As far as I know, the Zeiss software does not use global optimisation.
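
Roughly speaking, the global optimisation means solving for the transforms of all views simultaneously by minimising the bead correspondence error over all pairs of views, something like

  \min_{T_1,\dots,T_n} \sum_{i<j} \sum_{(p,q) \in C_{ij}} || T_i p - T_j q ||^2

where T_i is the (affine) transform of view i and C_{ij} is the set of matched bead positions between views i and j. This is only a sketch of the idea; the details are in the paper above.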

> I understand Zeiss does not require beads, MR does?

The Zeiss software can do the reconstruction with and without beads. Similarly, the Fiji plugins can use either beads or segmented features in the sample (e.g. nuclei) to achieve the registration. Sometimes it is necessary to use both, first bead-based and then nuclei-based registration. For an example, look here:

http://openspim.org/TriboliumExtraembryonicMembranes

> And our workflow
> uses DualSide Fusion, is that supported with MR?

The DualSide Fusion is done internally in ZEN. We simply start our pipelines with the fused data (i.e. left and right lightsheet merged). It is important to save the data from ZEN as one file per view per time point. This is currently the only dataset layout that the Fiji Bio-Formats importer can reliably open. Efforts to make it more robust are underway.

> Finally, how do they
> compare in terms of quality of the result, is one or the other software
> "better" in any aspect by a reasonable margin, or do they compare on
> par?

OK, this is very hard to answer. We are academic researchers and of course we believe that our approach is the best ;-). We have not done rigorous benchmarks against the Zeiss software. Anecdotally, on some datasets Fiji works better, on others ZEN does. You will have to try it for yourself.

Here are some additional resources that you may find useful.

A book chapter describing the SPIMage processing pipeline formally, tutorial style (download it from here http://www.mpi-cbg.de/nc/research/research-groups/pavel-tomancak/papers.html)
Schmied C., Stamataki E., Tomancak P. (2014) Open-source solutions for SPIMage processing. Methods Cell Biol., 123, pp. 505-529

Multi-view deconvolution paper and software (this is, I think, unrelated to the Zeiss deconvolution approach - an alternative).
Preibisch S., Amat F., Stamataki E., Sarov M., Myers E., Tomancak P. (2014) Efficient Bayesian-based multiview deconvolution. Nature Methods 11:645–648 (preprint: http://arxiv.org/abs/1308.0730)

http://fiji.sc/Multi-View_Deconvolution

visualisation solution for multi-view SPIM data - BigDataViewer (to be published soon)
http://fiji.sc/BigDataViewer

and finally lots of useful information can be found on the OpenSPIM wiki, particularly in the section dealing with the EMBO course on light sheet microscopy that we organised in Dresden this summer

http://openspim.org/EMBO_practical_course_Light_sheet_microscopy

Don't hesitate to contact us if you have further questions.

All the best

Pavel



Re: How does Multiview-Reconstruction compare to Zeiss?

Mario Emmenlauer-3
Dear Pavel,

thanks a lot for the detailed and helpful answer! More below:

On 05.11.2014 11:07, Pavel Tomancak wrote:

>> I'm trying to automate the workflow from a Zeiss Lightsheet Z.1 to
>> the registered/fused/deconvolved volume. Previously we used the Zeiss
>> software, but I understand that this can not easily be scripted and/or
>> distributed on a cluster?
>>
>> What is the best replacement that is scriptable and cluster-runnable?
>
> In addition to what you already found, there is also this web page
>
> http://fiji.sc/SPIM_Registration_on_cluster
>
> which details how to run Fiji's SPIMage processing pipelines on a cluster (registration, fusion, deconvolution and HDF5 saving).

I will check this out, thanks!


>> I've found http://fiji.sc/Multiview-Reconstruction (MR) and it looks
>> very promising, but how does it exactly compare to the Zeiss software?
>
> We worked very closely with Zeiss when developing the Fiji plugins. The Zeiss software is to a large degree based on our published research:
>
> Preibisch S., Saalfeld S., Schindelin J., Tomancak P. (2010) Software for bead-based registration of selective plane illumination microscopy data. Nature Methods 7:418-419
>
> In ZEN it has been implemented by Zeiss developers independently and since it is not published and the source code is not available I cannot say how similar it is to our implementation when it comes to details. The general approach is similar. As far as I know, Zeiss software does not use global optimisation.
>
>> I understand Zeiss does not require beads, MR does?
>
> Zeiss software can do the reconstruction with and without beads. Similarly the Fiji plugins can either use beads or segmentation in the sample (e.g. nuclei) to achieve the registration. Sometimes it is necessary to use both, first bead-based followed by nuclei based registration. For example look here
>
> http://openspim.org/TriboliumExtraembryonicMembranes

Thanks, this is very enlightening and helpful. I will check
this in more detail and see how we can profit from segmentation
versus beads...


>> And our workflow
>> uses DualSide Fusion, is that supported with MR?
>
> The DualSide Fusion is done internally in ZEN. We simply start our pipelines with the fused data (i.e. left and right lightsheet merged). It is important to save the data from ZEN as one file per view per time point. This is currently the only dataset that Fiji Bioformat importer can reliably open. Efforts to make it more robust are underway.

My biologists report that the DualSide Fusion with the Zeiss Zen
Software is sometimes done while recording, but this only works for
small Z-stacks for them, because for big stacks it takes the computer
too long and imaging gets delayed. So I understand that this is a
bottleneck for us - did you get this solved somehow?


>> Finally, how do they
>> compare in terms of quality of the result, is one or the other software
>> "better" in any aspect by a reasonable margin, or do they compare on
>> par?
>
> Ok, this is very hard to answer. We are academic researchers and of course we believe that our approach is the best ;-). We have not done rigorous benchmarks against Zeiss software. Anecdotally, on some datasets Fiji works better on others ZEN. You will have to try it for yourself.
>
> Here are some additional resources that you may find useful.
>
> A book chapter describing the SPIMage processing pipeline formally, tutorial style (download it from here http://www.mpi-cbg.de/nc/research/research-groups/pavel-tomancak/papers.html)
> Schmied C., Stamataki E., Tomancak P. (2014) Open-source solutions for SPIMage processing. Methods Cell Biol., 123, pp. 505-529
>
> Multi-view deconvolution paper and software (this is, I think, unrelated to the Zeiss deconvolution approach - an alternative).
> Preibisch S., Amat F., Stamataki E., Sarov M., Myers E., Tomancak P. (2013) Efficient bayesian multi-view deconvolution Nature Methods, 7, 418–419 http://arxiv.org/abs/1308.0730
>
> http://fiji.sc/Multi-View_Deconvolution
>
> visualisation solution for multi-view SPIM data - BigDataViewer (to be published soon)
> http://fiji.sc/BigDataViewer
>
> and finally lots of useful information can be found on the OpenSPIM wiki, particularly in the section dealing with the EMBO course on light sheet microscopy that we organised in Dresden this summer
>
> http://openspim.org/EMBO_practical_course_Light_sheet_microscopy
>
> Don't hesitate to contact us if you have further questions.

I've already read a bit about OpenSPIM, it looks very interesting! Do
I understand correctly that OpenSPIM does not develop or host software
for registration/fusion/deconvolution itself, and that the software
recommended at OpenSPIM is basically the same set of Fiji modules you
mention above? That makes perfect sense, but I'm trying to make sure
I'm not overlooking any (reasonably good) software. The only other
software for fusion I could find is "Spatially-Variant Lucy-Richardson
Deconvolution for Multiview Fusion of Microscopical 3D Images" by Maja
Temerinac-Ott et al., see
http://lmb.informatik.uni-freiburg.de/Publications/2011/BRT11/

Is there any other (commercial or non-commercial) option you would be
aware of?

All the best and thanks a lot,

    Mario



--
Mario Emmenlauer BioDataAnalysis             Mobil: +49-(0)151-68108489
Balanstrasse 43                    mailto: mario.emmenlauer * unibas.ch
D-81669 München                          http://www.biodataanalysis.de/


Re: How does Multiview-Reconstruction compare to Zeiss?

Pavel Tomancak
>>> And our workflow
>>> uses DualSide Fusion, is that supported with MR?
>>
>> The DualSide Fusion is done internally in ZEN. We simply start our pipelines with the fused data (i.e. left and right lightsheet merged). It is important to save the data from ZEN as one file per view per time point. This is currently the only dataset that Fiji Bioformat importer can reliably open. Efforts to make it more robust are underway.
>
> My biologists report that the DualSide Fusion with the Zeiss Zen
> Software is sometimes done while recording, but this only works for
> small Z-stacks for them, because for big stacks it takes the computer
> too long and imaging gets delayed. So I understand that this is a
> bottleneck for us - did you get this solved somehow?

I don't think we have seen this problem. I am cc'ing Christopher Schmied, who has more hands-on experience.

> I've read already a bit on OpenSPIM, it looks very interesting! Do
> I understand correctly that OpenSPIM does not develop or host software
> for registration/fusion/deconvolution itself, the software recommended
> at OpenSPIM is basically the same Fiji modules you mention also above?
> It makes perfect sense, but I'm trying to make sure I'm not overlooking
> any (reasonably good) software.

Exactly, the SPIMage processing software in Fiji applies to data from both the Lightsheet Z.1 and OpenSPIM. There is no fundamental difference, just different data-format wrangling issues.

> The only other software for fusion I
> could find is "Spatially-Variant Lucy-Richardson Deconvolution for
> Multiview Fusion of Microscopical 3D Images" by Maja Temerinac-Ott et
> al, see http://lmb.informatik.uni-freiburg.de/Publications/2011/BRT11/

The problem is that, as far as I know, this is not a software package but a paper. It is very unlikely that this approach would scale to large images. But feel free to contact Maja, I am sure she has some code.

>
> Is there any other (commercial or non-commercial) option you would be
> aware of?

I don't know of any solution that would be usable as downloadable software. There are several papers, for example

http://www.ncbi.nlm.nih.gov/pubmed/22072386

http://www.ncbi.nlm.nih.gov/pubmed/17339847 (this deconvolution method has been reimplemented by Stephan Preibisch in Java as part of the deconvolution paper I mentioned before; I don't know if he released that code, he is currently on vacation)

http://www.ncbi.nlm.nih.gov/pubmed/19547131

All the best

Pavel



Re: How does Multiview-Reconstruction compare to Zeiss?

Stephan Janosch
In reply to this post by Mario Emmenlauer-3
Hey Mario!


> My biologists report that the DualSide Fusion with the Zeiss Zen
> Software is sometimes done while recording, but this only works for
> small Z-stacks for them, because for big stacks it takes the computer
> too long and imaging gets delayed. So I understand that this is a
> bottleneck for us - did you get this solved somehow?

In addition to Pavel's words, I want to add a few details here:

We image C. elegans. We do dual-side illumination, which is not really
needed with our samples, but sometimes the right side looks much better,
or vice versa. I personally see the other illumination side as a
"backup". We have not yet understood why we sometimes get different
imaging quality.

We also tried the online fusion with the ZEN software (2014 Black). It
worked without problems, but the benefit was hardly visible: sometimes
the results looked better, sometimes slightly worse.

After a few test sessions, we decided to do dual-side illumination, but
not to use the online fusion feature of the ZEN software, because our
samples don't really benefit from it.

Best,
Stephan


--
Stephan Janosch
Software Engineer - TransgeneOme Database
https://transgeneome.mpi-cbg.de

Max Planck Institute of Molecular Cell Biology and Genetics
Pfotenhauerstr. 108
01307 Dresden
Germany

Room: 205
Phone: +49 351 210 2709
Email: [hidden email]
Web: www.mpi-cbg.de
Twitter: https://twitter.com/TransgeneOme


Re: How does Multiview-Reconstruction compare to Zeiss?

Stephan Preibisch
In reply to this post by Mario Emmenlauer-3
Hi Mario,

first of all, thanks Pavel and Stephan for replying already!

I am just back from vacation and wanted to add my thoughts here. My multi-view reconstruction software allows registration based on beads and/or sample features (spotty things within the sample such as nuclei, but they can be any kind of bright or dark spots). Right now there is no “classical” intensity-correlation method, because it is very slow when affine transformation models need to be estimated to get it right. But the plan is to add it at some point; the software framework supports it.

Regarding dual-sided illumination: from my experience at the EMBO course, it is not advisable to do online dual-side fusion. For most samples, the aberration introduced by the specimen itself is significant (I saw around 5-20 pixels), and you will introduce blur if you do online dual-sided fusion. Ideally, treat the two illumination sides as views that need to be registered like any other views. I am in the process of writing a how-to on doing this right, but it will take some more time.

There is an old version (Plugins > SPIM Registration) and a new version (Plugins > Multiview Reconstruction; add it by activating the BigDataViewer update site in Fiji). The old one is completely cluster-processable, Pavel sent the link. However, that software is outdated by now, as it only supports bead-based registration. The new version is much more flexible and powerful, and it is integrated with the BigDataViewer. Cluster processing already works for the new one, but only for fusing/deconvolving the data. The remaining problem is that the new version is based on an XML file describing the dataset (the same as the BigDataViewer), so when this XML file is edited by segmenting features or writing the affine matrices, some additional software is required to merge the results from different timepoints when they are processed in parallel. This will be available most likely in December. That’s why it already works for fusing, as there the XML is only read, not written.

The registration quality using beads should be somewhat comparable for my old plugin, my new plugin, and the Zeiss software, as they are all based on the same algorithm, just implemented differently (http://www.nature.com/nmeth/journal/v7/n6/full/nmeth0610-418.html). On a side note, my new plugin also supports CUDA processing. However, the new plugin allows an additional registration using affine models based on sample features, i.e. it is capable of somewhat correcting for sample-induced aberrations (this can be done in a second step after the bead registration, or using sample features only). Therefore it does produce better results than my old plugin (and, I think, also the Zeiss software), as they do not support that second registration step.

If you have more questions I am happy to answer.

Have a great day,
Stephan

---

Dr. Stephan Preibisch
HFSP Fellow
Robert H. Singer / Eugene Myers lab

Albert Einstein College of Medicine / HHMI Janelia Farm / MPI-CBG

email: [hidden email] / [hidden email] / [hidden email]
web: http://www.preibisch.net/




Re: How does Multiview-Reconstruction compare to Zeiss?

Mario Emmenlauer-3
Hi Stephan,

thanks for the additional input! A few questions below:

On 10.11.2014 15:59, Stephan Preibisch wrote:
> I am just back from vacation and just wanted to add my thoughts here. My multi-view reconstruction software allows the registration based on beads or/and sample features (spotty things within the sample like nuclei, but can be any kinds of bright or dark spots). Right now, there is no “classical” intensity correlation method as it is very slow as affine transformation models need to be estimated to get it right. But the plan is to add it at some point, the software framework supports it.
>
> Regarding the dual sided illumination. From my experience at the EMBO course, it is not advisable to do online dual side fusion. For most samples, the aberration introduced by the specimen itself is significant (I saw around 5-20 pixels) and you will introduce blur if you do online dual sided fusion. Ideally treat them as views that need to be registered as any other views. I am in the process of writing a howto do this right, but it will take more time.

So is the Zeiss ZEN software doing "online dual-side fusion"? Then it
would be highly preferable to follow your guide, once it is written,
instead of continuing to use the ZEN dual-side fusion...


> There is an old (plugins > SPIM Registration) and a new version (plugins > Multiview Reconstruction, add it by activating the BigDataViewer update site in Fiji). The old one is completely cluster processable, Pavel sent the link. However, the software itself is outdated by now as it only supports bead-based registration. The new version is much more flexible, powerful, and integrated into the BigDataViewer. Cluster processing already works for the new one, but only for fusing/deconvolving the data. The remaining problem here is that the new version is based on an XML file describing the dataset (same as the BigDataViewer), so when this XML file is edited by segmenting features or writing the affine matrices, some additional software is required to merge the results from different timepoints when processed in paralell. This will be available most likely in December. That’s why it works for fusing, as the XML is only read, not written.

December would be a good timeline for us, please keep me posted, and
thanks for your nice work!


> The registration quality using beads should be somewhat comparable for my old plugin, new plugin, and the zeiss software as they are all based on the algorithm, just implemented differently (http://www.nature.com/nmeth/journal/v7/n6/full/nmeth0610-418.html <http://www.nature.com/nmeth/journal/v7/n6/full/nmeth0610-418.html>). On a side note, my new plugin also supports CUDA processing. However, the new plugin allows additional registration using affine models based on sample features, i.e. the new plugin is capable of somewhat correcting for sample-induced aberrations (can be done in a second step after the beads registration or only using sample features). Therefore it does produce better results than my old plugin (and I think also the Zeiss software) as they do not support that second registration step.

How much does CUDA speed up the full fusion process with all steps,
start to end (including bead segmentation, bead registration, fine
registration and deconvolution/fusion)?

I have never tried this myself, so if you can post some rough numbers
and some specs from your test machine I would be happy. In particular, I
would be curious whether it is better to do everything on one powerful
CUDA-enabled, high-memory machine, or to employ hundreds of cluster
nodes? We currently aim for the cluster, unless you advise that one
CUDA machine might outperform many cluster nodes (also because it avoids
the network transfer of the files to the cluster)?

All the best and thanks,

   Mario




--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting annoying in email?

Mario Emmenlauer BioDataAnalysis             Mobil: +49-(0)151-68108489
Balanstrasse 43                    mailto: mario.emmenlauer * unibas.ch
D-81669 München                          http://www.biodataanalysis.de/


Re: How does Multiview-Reconstruction compare to Zeiss?

Stephan Preibisch
Hi Mario,

> Hi Stephan,
>
> thanks for the additional input! A few questions below:
>
> On 10.11.2014 15:59, Stephan Preibisch wrote:
>> I am just back from vacation and just wanted to add my thoughts here. My multi-view reconstruction software allows the registration based on beads or/and sample features (spotty things within the sample like nuclei, but can be any kinds of bright or dark spots). Right now, there is no “classical” intensity correlation method as it is very slow as affine transformation models need to be estimated to get it right. But the plan is to add it at some point, the software framework supports it.
>>
>> Regarding the dual sided illumination. From my experience at the EMBO course, it is not advisable to do online dual side fusion. For most samples, the aberration introduced by the specimen itself is significant (I saw around 5-20 pixels) and you will introduce blur if you do online dual sided fusion. Ideally treat them as views that need to be registered as any other views. I am in the process of writing a howto do this right, but it will take more time.
>
> So is the Zeiss Zen software doing "online dual side fusion"? Then it
> would be highly preferable to follow your newly-written guide instead
> of continuing to use the Zeiss Zen dual side fusion…
>

Whether you do the Zeiss online fusion or not is the choice of the person doing the acquisition. To test whether online fusion is OK, I would acquire the two illumination directions separately, overlay them as two channels and carefully check (in XY, XZ and YZ) whether direct fusion without registration is a good idea or whether it should be registered. It depends on the sample and the magnification, i.e. how much the sample refracts the light and how large this is relative to the magnification used. And of course on how important the highest quality is for the analysis you intend to do: an error of 1 px may be acceptable when counting cells, but not when looking for diffraction-limited spots.
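
If you want a rough number in addition to the visual check, something like the following sketch estimates the overall translational offset between the two illumination sides via phase correlation (it assumes tifffile and a recent scikit-image are available; the file names are placeholders, and a single global translation is of course only a crude proxy for the sample-induced aberration):

# Rough sketch: estimate the pixel offset between the left- and
# right-illuminated stacks of the same view.  File names are placeholders,
# and one global translation only approximates the real aberration.
import tifffile
from skimage.registration import phase_cross_correlation

left = tifffile.imread("view0_illum_left.tif")    # placeholder file name
right = tifffile.imread("view0_illum_right.tif")  # placeholder file name

# shift is returned in (z, y, x) order for 3D input
shift, error, diffphase = phase_cross_correlation(left, right)
print("estimated offset (z, y, x) in pixels:", shift)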

>
>> There is an old (plugins > SPIM Registration) and a new version (plugins > Multiview Reconstruction, add it by activating the BigDataViewer update site in Fiji). The old one is completely cluster processable, Pavel sent the link. However, the software itself is outdated by now as it only supports bead-based registration. The new version is much more flexible, powerful, and integrated into the BigDataViewer. Cluster processing already works for the new one, but only for fusing/deconvolving the data. The remaining problem here is that the new version is based on an XML file describing the dataset (same as the BigDataViewer), so when this XML file is edited by segmenting features or writing the affine matrices, some additional software is required to merge the results from different timepoints when processed in paralell. This will be available most likely in December. That’s why it works for fusing, as the XML is only read, not written.
>
> December would be a good timeline for us, please keep me posted, and
> thanks for your nice work!

Cool, we will try to keep this timeline … it is a little bit unpredictable at the moment. But I am optimistic. Please just keep pushing us :)

>
>> The registration quality using beads should be somewhat comparable for my old plugin, new plugin, and the zeiss software as they are all based on the algorithm, just implemented differently (http://www.nature.com/nmeth/journal/v7/n6/full/nmeth0610-418.html). On a side note, my new plugin also supports CUDA processing. However, the new plugin allows additional registration using affine models based on sample features, i.e. the new plugin is capable of somewhat correcting for sample-induced aberrations (can be done in a second step after the beads registration or only using sample features). Therefore it does produce better results than my old plugin (and I think also the Zeiss software) as they do not support that second registration step.
>
> How much does the CUDA speed up the process of the full fusion with all
> steps, start to end (including beads segmentation, bead registration,
> fine registration and deconvolution/fusion)?
>
> I have never tried this myself, so if you can post some rough numbers
> and some specs from your test machine I would be happy. Especially, I would
> be curious if its better to do everything on one powerful CUDA-enabled high-
> memory-machine, or better to employ hundreds of cluster nodes? We currently
> aim for the cluster, unless you advise that one CUDA-machine might out-
> perform many cluster nodes (also because it avoids network transfer of the
> files to the cluster)?

The CUDA speedup is usually just a factor of 3x or 4x (for finding beads or doing the multi-view deconvolution). It is higher when looking for larger structures in the image using the Difference-of-Gaussian detector. At the same time it does save you a lot of RAM, so it is very helpful when computing and testing on a workstation, as it makes things more interactive. However, for larger-scale computations I would go for the cluster rather than a GPU solution - ideally, of course, combine both :)

Of course, GPU processing does not speed up other things that take a lot of time, like opening and saving files; depending on the dataset, this can be more than 50% of the computation time. So if you can afford computing on SSDs (they are pretty cheap right now), this can help too. It also means that when computing on a cluster, it is important to have fast parallel access to the data as well, otherwise you will not gain a lot.

Have a great day,
Stephan
