bfconverter nd2 to tif

bfconverter nd2 to tif

Greg
Hi,

I am trying to convert (huge) nd2 files to the native ImageJ TIFF format. I am using the Bio-Formats command-line tool bfconvert (http://www.openmicroscopy.org/site/support/bio-formats5.0/users/comlinetools/conversion.html), which behaves very nicely with respect to memory usage.

I tried both

bfconvert -bigtiff input.nd2 output.tif

and

bfconvert -bigtiff input.nd2 output.tiff

but I do not think that the outputs are in a native ImageJ format. First, when I open one of the produced output files in Fiji, the Bio-Formats Input Option Dialog opens, which should not be the case for a 'pure' tif, right?! And second, the behaviour (i.e. the lagginess) of Fiji is exactly the same as when working directly on nd2 files.

An additional problem is that the output is no longer a hyperstack, but I can fix that later.

So how do I convert nd2 files to 'native ImageJ tif' on the command line?

Greets,
Greg


Re: bfconverter nd2 to tif

ctrueden
Hi Greg,

> I tried both
>
> bfconvert -bigtiff input.nd2 output.tif
>
> and
>
> bfconvert -bigtiff input.nd2 output.tiff
>
> but I do not think that the outputs are in a native ImageJ format.

Correct. The bfconvert command line tool writes to BigTIFF format, not
ImageJ's "raw" format for large TIFFs. The two are very different. The only
tool that can read the large "raw" TIFFs is ImageJ1. There are currently no
plans for Bio-Formats to support this pseudo-TIFF format.

> when I open one of the produced output files in Fiji, the Bio-Formats
> Input Option Dialog opens, which should not be the case when having a
> 'pure' tif, right ?!

To clarify: I would define a "pure" TIFF as one that is less than 4GB in
size, with 32-bit offsets, according to the official specification. For
files over 4GB, your options are:

A) BigTIFF, which is a "pure" 64-bit TIFF file, and the accepted worldwide
standard; or
B) ImageJ's "raw" TIFF, which is not actually a TIFF, but rather raw pixels
preceded by a minimal TIFF header.
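Incidentally, you can check which flavour bfconvert actually wrote from the first four bytes of the file: after the byte-order mark ("II" or "MM"), classic TIFF stores version 42 and BigTIFF stores version 43. A minimal sketch (the helper name is mine, not part of Bio-Formats):

```python
import struct

def tiff_flavor(header):
    """Classify a file by its first 4 bytes: classic TIFF (version 42),
    BigTIFF (version 43), or not a TIFF at all."""
    order = header[:2]
    if order == b"II":
        fmt = "<H"   # little-endian ("Intel")
    elif order == b"MM":
        fmt = ">H"   # big-endian ("Motorola")
    else:
        return "not a TIFF"
    version = struct.unpack(fmt, header[2:4])[0]
    return {42: "classic TIFF", 43: "BigTIFF"}.get(version, "unknown")
```

Feeding it the first four bytes of a bfconvert -bigtiff output should report "BigTIFF".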

> And second also the behaviour (aka lagginess) of Fiji is exactly like
> when working directly on nd2 files.

Can you give more details? Is the delay when first opening the file? And
does it happen even when "Use virtual stack" is checked? If so, then the problem
is that Bio-Formats parses all TIFF IFDs upfront, which can incur a
significant delay for large files with many planes. There are certainly
things that can be done on the Bio-Formats side to improve the performance
there; I suggest starting a discussion on the ome-users mailing list [1].

Regards,
Curtis

[1] http://lists.openmicroscopy.org.uk/mailman/listinfo/ome-users/


On Tue, Mar 24, 2015 at 9:02 AM, Greg <[hidden email]> wrote:

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

Re: bfconverter nd2 to tif

Greg
Hi Curtis,

OK, thanks for the clarification about the TIFF formats. So my target format would definitely be ImageJ's 'raw' TIFF, as all operations seem to be much faster with that format. I still see no way of doing the conversion without loading the whole stack into memory. And by 'lagginess' I mean especially the performance after loading the file, for example switching to the next frame. So basically I see no performance difference between BigTIFF and the proprietary nd2 format, and it is essentially the same whether I open them as a virtual stack or not. So yes, I might start a thread on the OME site you pointed me to.

The hyperstack problem turned out to be a bit more serious: to turn the bfconvert output back into a hyperstack, I see no other way than loading the whole stack into memory, since the hyperstack information is lost in the BigTIFF format.

So instead of doing nice image analysis with Jython in Fiji, I am now more or less lost in a jungle of formats and memory problems. I guess that is the fate of the live-cell image analyst :P

Best,
Greg

Re: bfconverter nd2 to tif

ctrueden
Hi Greg,

> So my target format would definitely be the ImageJs 'raw' TIFF, as all
> operations seem to be much faster with that format.

I understand that your goal is to maximize performance. But please realize
that once the pixels have been read into memory, it does not matter how the
data was stored on disk. There are actually two major cases here:

A) Using a virtual stack. In that case, the file format _could_ matter
because it might be faster to read a plane on demand from a TIFF than from
an ND2. I haven't benchmarked this though, so I don't know.

B) Using swap space as virtual RAM. In this case, ImageJ will read all the
planes upfront, and then start caching them back out to disk as your
virtual RAM usage exceeds your actual amount of RAM. I expect that when you
do things this way, the data is really slow to load, right? For this reason
and others, I recommend using the virtual stack feature rather than this
swap space approach.

In other words: even if you were able to write out the data as an ImageJ1
raw pseudo-TIFF, I doubt it would magically solve all your problems.
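The difference between the two cases can be illustrated with a toy sketch (plain Python, nothing ImageJ-specific; the class names are made up): a virtual stack reads one plane on demand, while the eager approach materialises every plane upfront and then relies on swap once RAM runs out.

```python
class VirtualStack(object):
    """Toy model of a virtual stack: a plane is read only when accessed."""
    def __init__(self, n_planes, read_plane):
        self.n_planes = n_planes
        self.read_plane = read_plane   # called once per plane access

    def get_plane(self, index):
        return self.read_plane(index)  # nothing is cached in memory

class EagerStack(object):
    """Toy model of the 'load everything' approach."""
    def __init__(self, n_planes, read_plane):
        # all planes live in memory (or swap) from the start
        self.planes = [read_plane(i) for i in range(n_planes)]

    def get_plane(self, index):
        return self.planes[index]
```

With EagerStack, every plane is read before you can do anything; with VirtualStack, only the planes you actually look at are ever read.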

> with 'lagginess' I mean especially the performance after loading the
> file, so switching to next frame for example.

With the Bio-Formats "Use virtual stack" option for your ND2 data, how long
does it take to switch to a new frame? It is supposed to take a fraction of
a second—or at most, on the order of a second or two for very large image
planes.

How large are your image planes? More than 1GPix each? If so, that would
explain the lagginess.

> I see no other way as loading the whole Stack into memory

There is another way: the "Specify range for each series" checkbox. With
that, you can open only a subset of your image planes at a time, perform
your plane-wise analysis, write out the results, rinse and repeat. That way
you can keep your processing within the RAM you have available.
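The bookkeeping for that loop looks roughly like this (a plain-Python sketch of the pattern only; load_range stands in for opening a subset of planes, e.g. via the "Specify range for each series" checkbox, and process_plane is your analysis):

```python
def process_in_chunks(n_planes, load_range, process_plane, chunk=50):
    """Process a stack a chunk of planes at a time, so that only one
    chunk is ever held in memory."""
    results = []
    for start in range(0, n_planes, chunk):
        stop = min(start + chunk, n_planes)
        planes = load_range(start, stop)   # open only this subset
        results.extend(process_plane(p) for p in planes)
        del planes                         # let the chunk be reclaimed
    return results
```

The peak memory use is then bounded by the chunk size rather than by the size of the whole stack.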

Another solution is simply to buy more RAM. It is pretty easy to get a
machine with more than 128GB of RAM these days, which accommodates very
large datasets.

> the Hyperstack information is lost in BigTIFF format

Try exporting to a .ome.tiff rather than a plain .tiff (e.g. bfconvert
-bigtiff input.nd2 output.ome.tiff; bfconvert chooses the writer from the
file extension). The vanilla TIFF format (BigTIFF or otherwise) does not
support more than three dimensions. But OME-TIFF does.

> instead of doing nice image analysis with jython in Fiji, I am now
> more or less lost in a jungle of formats and memory problems, I guess
> that is the fate of the life cell image analyst

Sorry, I know it is frustrating. We definitely want to provide better
support for big image processing, TIFF or otherwise -- it is one of
ImageJ2's major goals. But it is not a trivial thing. To that end, I filed
an issue to track the various "better support for big TIFF files" scenarios
people have raised:

    https://github.com/imagej/imagej/issues/117

Regards,
Curtis



Re: bfconverter nd2 to tif

Greg
Hi Curtis,

thanks for your detailed answer! I think you and all the people behind ImageJ/Fiji and OME are doing wonderful things to keep stuff doable and open for everyone. I am just annoyed by proprietary formats which let you do almost nothing (the Nikon viewer is good for looking at holiday snapshots but nothing else...)!

I more or less jumped into a live-cell imaging project, and I am having a hard time explaining to the experimentalists why things like the swap-space approach work so badly. But with your explanations I feel on safer ground now.

So yes, we have slices of around 3500x3500 pixels, with three channels and up to 750 frames. One slice is around 25 MB, and one frame is therefore around 75 MB. That makes around 55 GB in total. Maybe that is just too much and we have to rethink our experimental design. Right now the nd2 file format seems to be completely broken on at least one machine, so conversion to a more reliable format seems to be the only way to keep things straight. I can still open an nd2 file on my machine in virtual hyperstack mode, and frame switching takes about 1 second. I converted the files to ome.tiff as you advised; it takes about 30 seconds to open them in virtual hyperstack mode, and switching takes the same time as for the nd2 format.
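In numbers, that works out as follows (assuming 16-bit pixels, which fits the per-slice size; the bit depth is not stated anywhere above):

```python
# Back-of-the-envelope check of the dataset size quoted above.
width, height = 3500, 3500
bytes_per_pixel = 2          # 16-bit pixels -- an assumption
channels, frames = 3, 750

slice_mb = width * height * bytes_per_pixel / 1e6  # one channel, one frame
frame_mb = slice_mb * channels                     # all three channels
total_gb = frame_mb * frames / 1e3                 # whole time series

# slice_mb == 24.5  ("around 25 MB")
# frame_mb == 73.5  ("around 75 MB")
# total_gb == 55.125 ("around 55 GB in total")
```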

Right now I am stuck, because my Jython plugins no longer run when I have an nd2 or an ome.tiff file open in virtual hyperstack mode. I tried starting Fiji from the terminal to catch some errors, but there was no output. When I restart my plugins a few times, they finally all come through and open at once, and I get a Java heap error; very weird. When I monitor the RAM, Fiji uses around 1.3 GB once the virtual hyperstack has opened the first slice. Then I fire up a very basic plugin (GoTo_Coordinate.py):

from ij import IJ, ImagePlus
from java.awt import Color
from ij.gui import GenericDialog, Overlay, Roi

Color1 = Color(255, 5, 20, 255)  # marker colour

def getOptions():
  gd = GenericDialog("Spatiotemporal Cell Coordinates")
  gd.addNumericField("Frame", 1, 0)          # 0 decimal places
  gd.addNumericField("X-Coordinate", 500, 0)
  gd.addNumericField("Y-Coordinate", 500, 0)
  gd.addNumericField("Zoom Level", 150, 0)
  gd.addCheckbox("Mark Coordinates", True)
  gd.showDialog()
  if gd.wasCanceled():
    return None

  # Read out the options
  frame = gd.getNextNumber()
  xcoord = gd.getNextNumber()
  ycoord = gd.getNextNumber()
  zoom = gd.getNextNumber()
  mark = gd.getNextBoolean()
  return int(frame), int(xcoord), int(ycoord), float(zoom), mark

options = getOptions()
if options is not None:
  frame, xcoord, ycoord, zoom, mark = options
  orig = IJ.getImage()

  # works also when fewer than 3 channels are present
  orig.setPosition(3, 1, frame)
  IJ.run(orig, "Set... ", "zoom=" + str(zoom) + " x=" + str(xcoord) + " y=" + str(ycoord))

  if mark:
    # mark the position with a small 3x3 ROI
    roi = Roi(xcoord - 2, ycoord - 2, 3, 3)
    roi.setStrokeColor(Color1)
    roi.setStrokeWidth(3)
    orig.setRoi(roi)

RAM usage jumps to 13 GB and beyond, so maybe Fiji loads the whole stack in the background? For the little example plugin above, this happens before(!) I even see the getOptions dialog.

Best,
Greg