Processing huge files


Processing huge files

Greg
Hi,

I want to convert huge .nd2 files (around 60 GB each) to TIFF, since all operations are faster on the TIFF format.
The conversion itself is quite painful. Judging from this thread:

http://imagej.1557.x6.nabble.com/Virtual-Memory-in-Headless-Mode-td5011730.html#a5011733

the bug does not seem to be fixed yet, as I still get a Java heap space error when trying to do the conversion headless. So the conversion has to be done with the Fiji GUI open.

Nevertheless, the problem will be the same even once headless mode works: using around 40 GB of virtual memory slows my PC down considerably. Right now I can open the huge file slice by slice via a VirtualStack, but I cannot write it out slice by slice. That means I first have to load all slices into memory (i.e., a regular Stack) before writing them to disk as a multi-page TIFF.

So my question is: is it somehow possible to append a single slice to a disk-resident file (virtual stack)?
A workaround might be to write out a single TIFF for every slice and then somehow concatenate the TIFFs, but I have no idea how exactly this can be done.
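For what it's worth, writing out the individual slices themselves would presumably look roughly like the Jython sketch below (assuming the virtual stack is already open as the active image in Fiji; the output directory and naming scheme are placeholders). It is the concatenation into one multi-page TIFF that I don't know how to do.

    from ij import IJ, ImagePlus
    from ij.io import FileSaver

    imp = IJ.getImage()               # the virtual stack currently open in Fiji
    stack = imp.getStack()
    out_dir = "/path/to/slices/"      # placeholder output directory

    for i in range(1, stack.getSize() + 1):
        ip = stack.getProcessor(i)    # only this slice is loaded into memory
        # ... per-slice processing would go here ...
        slice_imp = ImagePlus("slice_%04d" % i, ip)
        FileSaver(slice_imp).saveAsTiff(out_dir + "slice_%04d.tif" % i)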

Greets,
Greg

Re: Processing huge files

ctrueden
Hi Greg,

> this bug seems not to be fixed, as I still get a Java heap memory
> error when trying to do the conversion headless.

Unfortunately, the fix has not yet been uploaded to the ImageJ update site.
More testing and investigation are needed before doing so; it may be a few
more weeks.

For those interested, progress can be followed at:
https://github.com/imagej/imagej-launcher/issues/16

> So right now I can open the huge file slice by slice via VirtualStack,
> but I can not write them slice by slice.

Why not? This is supposed to work. What specifically happens if you use
File > Save As > TIFF...? Or Plugins > Bio-Formats > Bio-Formats Exporter?
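For reference, a minimal Jython equivalent of the Save As command would be something like the following (the output path is a placeholder); it should write the open virtual stack slice by slice rather than loading everything first:

    from ij import IJ

    imp = IJ.getImage()                            # the open (virtual) stack
    IJ.saveAs(imp, "Tiff", "/path/to/output.tif")  # same as File > Save As > Tiff...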

Regards,
Curtis


Re: Processing huge files

Greg
Hi Curtis,

thanks for your answer, and sorry for the delay. Of course writing out slice by slice works if I write every slice as a new file. But what I want is to write the stack out slice by slice to one big file. In principle, to save RAM I want to do the following:

1. open one slice from the virtual stack,

2. do some processing,

3. append that slice to a disk-resident stack,

so that only one slice has to be in memory at a time, but in the end I have a processed, disk-resident copy of the huge input virtual stack.

Best,
Greg


Re: Processing huge files

ctrueden
Hi Greg,

> what I want is to write as Stack out slice by slice to one big file.

Here are a couple of suggestions to try.

Firstly, have you tried the "bfconvert" command line tool [1]?

It would allow you to convert your ND2 files to (Big-)TIFF without needing
ImageJ at all. It supports BigTIFF via the "-bigtiff" flag.
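For example, an invocation would look roughly like this (file names are placeholders):

    bfconvert -bigtiff input.nd2 output.ome.tif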

Secondly, you can use the Bio-Formats API to write out the data plane by
plane [2] to a single large file (TIFF or otherwise). You would need to
write Java code, or use a non-macro scripting language that can access the
Java API (e.g., Jython or Groovy) [3].
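As a rough, untested Jython sketch of that plane-by-plane approach (file paths are placeholders; it assumes a single series and that the output needs BigTIFF because it exceeds the 4 GB TIFF limit):

    from loci.formats import ImageReader, MetadataTools
    from loci.formats.out import TiffWriter

    in_path = "/path/to/input.nd2"     # placeholder
    out_path = "/path/to/output.tif"   # placeholder

    reader = ImageReader()
    meta = MetadataTools.createOMEXMLMetadata()
    reader.setMetadataStore(meta)      # populate OME-XML metadata while parsing
    reader.setId(in_path)

    writer = TiffWriter()
    writer.setMetadataRetrieve(meta)   # reuse dimensions/pixel type from the reader
    writer.setBigTiff(True)            # output is larger than the 4 GB TIFF limit
    writer.setId(out_path)

    try:
        for i in range(reader.getImageCount()):
            plane = reader.openBytes(i)   # only one plane in memory at a time
            # ... process the raw bytes of this plane here if desired ...
            writer.saveBytes(i, plane)
    finally:
        reader.close()
        writer.close()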

Regards,
Curtis

P.S. Some day ImageJ2 will be far enough along that the paradigm will
shift: "opening" the data will always be virtual, with ImageJ not needing
to read all planes in advance; processing the data will page planes in and
out of RAM as needed transparently; and saving/export will work as usual,
with very little RAM overhead since it is also done fully plane-by-plane.
We are getting close, but not quite there yet...

[1]
http://openmicroscopy.org/site/support/bio-formats/users/comlinetools/conversion.html
[2]
http://openmicroscopy.org/site/support/bio-formats/developers/export.html
[3] http://imagej.net/Scripting



Re: Processing huge files

Greg
Hi Curtis,

yes, thanks, I will try the bfconvert tool; that sounds perfect!
Also, since I already use Jython, I will have a look at the Bio-Formats export API.

Thanks and Greets,
Greg