
bigstitcher's People

Contributors

ctrueden, hoerldavid, imagejan, mzouink, stephanpreibisch, tpietzsch


bigstitcher's Issues

Why is BigStitcher version stuck at 0.2.10 in Fiji?

I've seen an older thread mentioning problems with the update site not offering the latest version of this plugin (currently at 0.3.10). That was back in 2017-18. What is going on? Is there another update site that we can use?

Auto-color different angles in MultiView Mode

Hello,

One feature from the Multiview Reconstruction plugin that I miss the most when using BigStitcher is the auto-coloring of different angles. When two (or more) angles are selected in the Multiview Reconstruction application they get different colors, but this does not happen in the MultiView Mode of BigStitcher. And if I set the colors manually, they are not persistent (e.g. the colors are lost when clicking on a different view). Auto-coloring is quite handy for checking registration quality. Any chance this might be brought back in BigStitcher?

Cheers,
Bruno

Deconvolution with BigStitcher

Hello,

BigStitcher has been amazing for our cleared tissue imaging. I now am interested in exploring the deconvolution capabilities. I was wondering:

  1. Our data is single view (not multi-view). Is it still possible to use BigStitcher for the deconvolution?
  2. Is it possible to define the PSF from an external file/z-stack, instead of identifying interest points within the dataset itself?
  3. Are there any extra steps/flags necessary to activate the GPU acceleration?

Thanks!
Adam

Limiting CPU usage

Hello,
We are running BigStitcher on the Administrator account of our lab server (running Windows Server 2016). In Fiji, I have limited the resources to use only 50% of the server's CPU and RAM. When I run the fusion step in BigStitcher, I see that the RAM is limited; however, the CPU does not seem to be limited and it always uses all cores on the server. Is this a bug, or is there another way to explicitly limit how much CPU BigStitcher uses?
Thanks!
Adam
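
For illustration, a minimal sketch of one thing to try, assuming BigStitcher respects ImageJ's thread preference; it may instead size its own thread pools from Runtime.getRuntime().availableProcessors(), in which case this setting has no effect on the fusion step.

import ij.Prefs;

public class LimitThreads {
    public static void main(String[] args) {
        // Use only half of the available cores for ImageJ's parallel operations
        // (the same setting as Edit > Options > Memory & Threads...).
        int half = Math.max(1, Runtime.getRuntime().availableProcessors() / 2);
        Prefs.setThreads(half);
        System.out.println("ImageJ thread preference set to " + half);
    }
}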

Responsiveness Decreases with Number of Image Stacks

BigStitcher is a great and truly enabling program. Thank you for taking the time and effort to build and maintain such a program.

I am currently stitching a 2-color data set with 1009 image tiles, each consisting of 2048x2048x119 voxels. I have noticed that some of the features don't seem to scale well as the number of image stacks increases, and I was hoping to see if there is anything I could do to improve performance. For example, the ImageJ log window does not always update or provide text, and thus I do not know how much progress the program has made. I have seen this with smaller data sets, but this one is particularly problematic.

Additionally, I have noticed particularly low CPU usage during the pairwise stitching on these data. I've been hovering around 10-15% CPU usage for 48 hours now (Xeon Gold 5120 CPU). In this particular case, the xml and .h5 files are located on a mapped network drive via a 10G ethernet connection (where I have achieved faster read/write speeds than with my HD and SSDs).

Any advice would be greatly appreciated.

Thank you,
Kevin Dean

xy stage drift

Hi,

I'm currently trying out BigStitcher on image stacks acquired by light sheet. In my current dataset however, there seems to be significant drift in xy along each stack which is giving suboptimal stitching results. Does BigStitcher treat slices individually to correct this type of drift or are transformations optimized based on the entire stacks?

Thanks,
Bill

Fusing with down-sampling (bug?)

I'm not sure if it is a bug or there is an underlying reason why this should be the case, but I have noticed something strange when I fuse with down-sampling. For my test, I have been saving a single HDF5 / XML file for BigStitcher in two ways:

  1. At full resolution.
  2. With 2x down-sampling in the HDF5 file itself, i.e. resolution level 0 is actually 2x down-sampled already.

Then I have been fusing these two files in BigStitcher, the first with 2x down-sampling, the second with 1x down-sampling (since I already 2x down-sampled the data in the input HDF5 itself). What is interesting is that the fusing times are the following:

  1. ~5.5 hours
  2. ~2.0 hours

Is there a reason the second case is almost twice as fast? In the end, the fused datasets are the same size. And in the first case, the HDF5 file contains a pyramid of resolutions, including resolution level 1 which is 2x down-sampled, so it shouldn't have to do any additional computation, should it?

Thanks!
Adam

Color visualization in BDV and BigStitcher (?)

This is less of an issue and more of an inquiry/feature request. We are interested in color-mapping our two-color channel datasets and I was wondering how difficult it would be for this to be implemented in BDV/BigStitcher. I am not sure which color visualization model is currently used in BDV (additive, multiplicative?) but we would like to calculate the visualized RGB values based on a Beer-Lambert model as:

R = 255 * exp(-A1*I1/M1) * exp(-B1*I2/M2)
G = 255 * exp(-A2*I1/M1) * exp(-B2*I2/M2)
B = 255 * exp(-A3*I1/M1) * exp(-B3*I2/M2)

where (A1,A2,A3) are sRGB values of dye A and (B1,B2,B3) are sRGB values of dye B, and M1/M2 are normalizing scalars for images I1 and I2. This converts the two grayscale channels into RGB images which mimic brightfield microscopy.

Based upon this publication: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0159337.

Thanks!
Adam
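
For reference, a small self-contained sketch of the mapping described above; this is not existing BDV/BigStitcher code, and the dye coefficients and normalizers in main are made-up example values.

public class BeerLambertMapping {

    // Convert two grayscale intensities i1, i2 into a packed RGB value following
    // R = 255*exp(-A1*i1/M1)*exp(-B1*i2/M2), and likewise for G and B.
    public static int toRGB(double i1, double i2,
                            double[] dyeA, double[] dyeB, double m1, double m2) {
        int[] rgb = new int[3];
        for (int c = 0; c < 3; c++) {
            double v = 255.0 * Math.exp(-dyeA[c] * i1 / m1) * Math.exp(-dyeB[c] * i2 / m2);
            rgb[c] = (int) Math.round(Math.min(255.0, Math.max(0.0, v)));
        }
        return (rgb[0] << 16) | (rgb[1] << 8) | rgb[2];
    }

    public static void main(String[] args) {
        double[] dyeA = { 0.65, 0.70, 0.29 };   // example coefficients for dye A
        double[] dyeB = { 0.07, 0.99, 0.11 };   // example coefficients for dye B
        System.out.printf("0x%06X%n", toRGB(0.8, 0.2, dyeA, dyeB, 1.0, 1.0));
    }
}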

Increase of memory during the optimization step

Hi,

I'm using BigStitcher to stitch 480 tiles weighing 290 GB in total (~600 MB per tile). Each tile is a multi-channel volume of 738x738x86x3 voxels (x, y, z, c). The memory available for ImageJ is 100 GB. Here are the steps of the stitching:

  • define new dataset: virtual images, check stack sizes, resave as HDF5 (subsampling factors {4 4 2}, HDF5 chunk sizes {16 16 16})
  • read tile locations from file
  • pairwise shifts calculation: repeat for each channel
  • remove links based on r values (<0.4): repeat for each channel
  • optimization: two rounds using metadata to align unconnected tiles (channel: treat individually, tiles: compare)

All steps run well except for the optimization: the memory increases over time and reaches the memory limit. I'm not sure whether 100 GB of memory is simply not enough for this stitching, or whether I have misconfigured something in one of the steps.

Best regards,
Son

Define dataset in batch processing mode

Hi,
Thanks for your great work on BigStitcher. I'm trying to switch to "batch processing" mode. However, it seems that the option "Move Tiles to Grid (Interactive)" in the step "Define Dataset" doesn't work. In addition, the option "Read locations from file" is not available in "batch processing" mode.

java.io.FileNotFoundException on Windows after stitching

I just stitched a nice dataset but cannot save under Windows.

See movie: https://cloud.mpi-cbg.de/index.php/s/1oPnAOcxXGsLfNG

mpicbg.spim.data.SpimDataIOException: java.io.FileNotFoundException: \fileserver.mpi-cbg.de\SarovHFSP\2018-09-04 Andre SD2\NRDE3nip2-1\dataset.xml (The system cannot find the path specified)
	at mpicbg.spim.data.generic.XmlIoAbstractSpimData.save(XmlIoAbstractSpimData.java:109)
	at net.preibisch.mvrecon.fiji.spimdata.XmlIoSpimData2.save(XmlIoSpimData2.java:131)
	at net.preibisch.mvrecon.fiji.spimdata.XmlIoSpimData2.save(XmlIoSpimData2.java:52)
	at net.preibisch.stitcher.gui.StitchingExplorerPanel.saveXML(StitchingExplorerPanel.java:1093)
	at net.preibisch.stitcher.gui.StitchingExplorerPanel$4.actionPerformed(StitchingExplorerPanel.java:530)
	at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2022)
	at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2348)
	at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:402)
	at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:259)
	at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:252)
	at java.awt.Component.processMouseEvent(Component.java:6539)
	at javax.swing.JComponent.processMouseEvent(JComponent.java:3324)
	at java.awt.Component.processEvent(Component.java:6304)
	at java.awt.Container.processEvent(Container.java:2239)
	at java.awt.Component.dispatchEventImpl(Component.java:4889)
	at java.awt.Container.dispatchEventImpl(Container.java:2297)
	at java.awt.Component.dispatchEvent(Component.java:4711)
	at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4904)
	at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4535)
	at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4476)
	at java.awt.Container.dispatchEventImpl(Container.java:2283)
	at java.awt.Window.dispatchEventImpl(Window.java:2746)
	at java.awt.Component.dispatchEvent(Component.java:4711)
	at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:760)
	at java.awt.EventQueue.access$500(EventQueue.java:97)
	at java.awt.EventQueue$3.run(EventQueue.java:709)
	at java.awt.EventQueue$3.run(EventQueue.java:703)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:84)
	at java.awt.EventQueue$4.run(EventQueue.java:733)
	at java.awt.EventQueue$4.run(EventQueue.java:731)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
	at java.awt.EventQueue.dispatchEvent(EventQueue.java:730)
	at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:205)
	at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
	at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
	at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: java.io.FileNotFoundException: \fileserver.mpi-cbg.de\SarovHFSP\2018-09-04 Andre SD2\NRDE3nip2-1\dataset.xml (The system cannot find the path specified)
	at java.io.FileOutputStream.open0(Native Method)
	at java.io.FileOutputStream.open(FileOutputStream.java:270)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:101)
	at mpicbg.spim.data.generic.XmlIoAbstractSpimData.save(XmlIoAbstractSpimData.java:105)
	... 40 more

Separate downsampling for XY and Z in fusion

In our large microscopy grids at LOCI, we often want to downsample the XY tiles, but there are relatively few slices in Z. It would be very helpful to have the option to downsample at different resolutions.

How feasible is it to separate XY downsampling from Z downsampling during the fusion step?

I've started parsing through the code logic in DownsampleTools/FusionTools, and it looks like there are unused scaling transforms for the image and for the bounding box that could do the trick. It would just need to be added to the code/GUI as an option. Am I right about the difficulty here, or were these functions intended for something else?

We may try to do this fix ourselves, but it would be good to know if it is as simple as it looks. I will probably further investigate myself in a week, once I get back from a trip.
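
As a point of reference, the kind of anisotropic scaling the request amounts to can be written as an imglib2 AffineTransform3D; whether the existing scaling hooks in DownsampleTools/FusionTools would accept such a transform is exactly the open question above, so this is only an illustration.

import net.imglib2.realtransform.AffineTransform3D;

public class AnisotropicDownsampling {

    // Row-major 3x4 affine that scales each axis independently,
    // e.g. downsampleTransform(4, 4, 1) for 4x in XY and no downsampling in Z.
    public static AffineTransform3D downsampleTransform(double dsX, double dsY, double dsZ) {
        AffineTransform3D t = new AffineTransform3D();
        t.set(
            1.0 / dsX, 0, 0, 0,
            0, 1.0 / dsY, 0, 0,
            0, 0, 1.0 / dsZ, 0 );
        return t;
    }
}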

NullPointerException when cancelling macro-recordable dialog

When choosing the option Move Tile to Grid (Macro-scriptable) in the Define Metadata for Views dialog, then cancelling the next dialog (Move Tiles of Angle 0), clicking OK in the dialog after that leads to the following exception:

(Fiji Is Just) ImageJ 2.0.0-rc-61/1.51n; Java 1.8.0_66 [64-bit]; Mac OS X 10.12.5; 41MB of 3764MB (1%)
 
java.lang.NullPointerException
	at spim.fiji.datasetmanager.grid.RegularTranformHelpers.generateRegularGrid(RegularTranformHelpers.java:153)
	at spim.fiji.datasetmanager.grid.RegularTranformHelpers.applyToSpimDataSingleTP(RegularTranformHelpers.java:285)
	at spim.fiji.datasetmanager.grid.RegularTranformHelpers.applyToSpimData(RegularTranformHelpers.java:239)
	at spim.fiji.datasetmanager.FileListDatasetDefinition.createDataset(FileListDatasetDefinition.java:1066)
	at spim.fiji.plugin.Define_Multi_View_Dataset.defineDataset(Define_Multi_View_Dataset.java:111)
	at spim.fiji.plugin.queryXML.LoadParseQueryXML.queryXML(LoadParseQueryXML.java:99)
	at spim.fiji.plugin.queryXML.LoadParseQueryXML.queryXML(LoadParseQueryXML.java:86)
	at spim.fiji.plugin.Stitching_Explorer.run(Stitching_Explorer.java:41)
	at ij.IJ.runUserPlugIn(IJ.java:217)
	at ij.IJ.runPlugIn(IJ.java:181)
	at ij.Executer.runCommand(Executer.java:137)
	at ij.Executer.run(Executer.java:66)
	at java.lang.Thread.run(Thread.java:745)

Blank slices at top/bottom of fused image

I've encountered an issue where fusing a grid of tiled microscopy images results in blank images at the top/bottom of the stack, which either amounts to 2 extra slices (e.g. original has 11 slices, fused has 13) or 1 extra slice and 1 dropped slice from the image (e.g. original has 3, fused has 4 but top and bottom are blank).

This also occurs if you do not preserve anisotropy - the same 3-slice image will have 41 total slices (correct), but the top and bottom slices will still be blank.

My guess is that it is taking the slices at different positions from the original image. E.g., if the Z position is 15/35/55 in the original image, it might be taking slices at 0/20/40/60. Here is a sample dataset displaying what I talked about above when fused.

2 angles detected with tiles LZ.1 data

Hi David and Stephan,

thanks a lot for releasing your software! Great plugin!

I produce tiled LZ.1 data. I set this up with the small Zeiss application for creating the position lists and then load this list into the LZ.1 multiview setup. There are only tiles in there, acquired in a meandering order (left > right; right > left). I do not specify any angles. However, BigStitcher shows me 2 angles.

I load the .czi with BigStitcher > Define a new Dataset: Zeiss Lightsheet Z.1 Dataset (LOCI Bioformats)
Then I get this:

[screenshot of the dataset properties]

I guess the problem is with Bio-Formats. I can also post a bug report there, but I don't really understand what goes wrong.

As a quick fix I am able to manipulate the XML to circumvent this problem, which clearly shows that there is only one view in the data. Is there any way to avoid this problem in the first place?

Here is the log from BigStitcher:
Log.txt

I can send the metadata and the position list of the microscope.

Cheers,
Christopher

Align poorly linked tiles to nearest neighbors?

Thanks for your excellent stitcher! I had a question about tile alignment with weak links. I noticed that edge (especially corner) tiles that are mostly empty may get fairly misaligned in the z-axis, even after filtering out links with low correlation. Here is an example after phase correlation, filtering out correlation < 0.4, and global optimization with one round, shown along the y-axis:

[screenshot]

I had similar results with the two round optimization option as well. Is there a way to re-filter shifts that went beyond a given displacement? Or to fall back to alignment with the nearest neighbors when correlation is low?

Thanks!
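
BigStitcher does not expose such a re-filter directly, but as a sketch of the requested behaviour, a post-hoc check could compare each tile's optimized position against its metadata (stage) position and fall back to the metadata when the displacement is implausibly large. Positions and the threshold below are hypothetical example values.

public class DisplacementFilter {

    // Keep the optimized position only if it stays within maxDisplacement of the stage position.
    public static double[] filter(double[] stagePos, double[] optimizedPos, double maxDisplacement) {
        double sum = 0;
        for (int d = 0; d < stagePos.length; d++) {
            double diff = optimizedPos[d] - stagePos[d];
            sum += diff * diff;
        }
        return Math.sqrt(sum) > maxDisplacement ? stagePos : optimizedPos;
    }

    public static void main(String[] args) {
        double[] stage = { 0, 0, 0 };
        double[] optimized = { 3, -2, 40 };               // badly misaligned in z
        double[] kept = filter(stage, optimized, 15.0);   // falls back to the stage position
        System.out.println(java.util.Arrays.toString(kept));
    }
}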

Iterative Global Opt crashes sometimes

Optimizing...
Shuffling took 0 ms
First apply took 1 ms
resetting savedFilteringAndGrouping
Exception in thread "Thread-335" java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: mpicbg.models.NotEnoughDataPointsException: 0 data points are not enough to estimate a 3d translation model, at least 1 data points required.
at mpicbg.models.TileConfiguration.optimize(TileConfiguration.java:358)
at mpicbg.models.TileConfiguration.optimize(TileConfiguration.java:310)
at mpicbg.models.TileConfiguration.optimize(TileConfiguration.java:328)
at net.preibisch.mvrecon.process.interestpointregistration.global.GlobalOptIterative.compute(GlobalOptIterative.java:98)
at net.preibisch.mvrecon.process.interestpointregistration.global.GlobalOptTwoRound.compute(GlobalOptTwoRound.java:111)
at net.preibisch.stitcher.algorithm.globalopt.GlobalOptStitcher.processGlobalOptimization(GlobalOptStitcher.java:171)
at net.preibisch.stitcher.algorithm.globalopt.ExecuteGlobalOpt.run(ExecuteGlobalOpt.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: mpicbg.models.NotEnoughDataPointsException: 0 data points are not enough to estimate a 3d translation model, at least 1 data points required.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at mpicbg.models.TileUtil.optimizeConcurrently(TileUtil.java:219)
at mpicbg.models.TileConfiguration.optimizeSilentlyConcurrent(TileConfiguration.java:288)
at mpicbg.models.TileConfiguration.optimize(TileConfiguration.java:354)
... 7 more
Caused by: java.lang.RuntimeException: mpicbg.models.NotEnoughDataPointsException: 0 data points are not enough to estimate a 3d translation model, at least 1 data points required.
at mpicbg.models.TileUtil$3.run(TileUtil.java:206)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: mpicbg.models.NotEnoughDataPointsException: 0 data points are not enough to estimate a 3d translation model, at least 1 data points required.
at mpicbg.models.TranslationModel3D.fit(TranslationModel3D.java:89)
at mpicbg.models.Tile.fitModel(Tile.java:315)
at mpicbg.models.TileUtil$3.run(TileUtil.java:200)
... 5 more

Manual lateral correction taken into account for stitching

Hello,

I did some manual correction of the tile positions in the "Stitching Explorer" window (two tile positions were completely swapped). After that, I launched the "Stitch dataset" command. Could you confirm that the manual corrections I made will be (at least roughly) taken into account by the stitching process?

Thank you !

Stitching Time lapse movie: unexpected results in Calculate Pairwise Shifts

Hello,

The calculation of pairwise shifts for a time lapse movie seems to show inconsistent behaviour.

I have a movie (12 tiles per time point, each tile a z-stack; 70 time points) and stitch it in two ways, in both cases processing each time point individually.

(1) Processing the whole movie at once, with 'each time point individually' checked.
This does not work. Many correlations are calculated, but they seem to be calculated between random tile pairs. The interactive link previewer misses many links.
Settings used:
BigStitcher GUI: Select all tiles -> Calculate Pairwise Shifts ... -> Phase Correlation (expert) -> check "show expert grouping options"
-> Select Views to Process: TimePoints: all, Tiles: all
-> Select how to Process views: TimePoints: individually, Tiles: compare
(The same issue happens when going through batch processing)

(2) Scripting and processing single time points at a time
This works. But in my understanding this should produce the same results as (1).
Settings used:
record this action:

  • BigStitcher Batch processing -> Calculate Pairwise Shifts
  • Process tile: All tiles, process timepoint: single time points [select from list]
  • Grouping: as above

then loop over time points with a python script.

Fusion - downsampling - resulting pixel size

Hi :)

Just to clarify: when I perform the fusion of the stitched data using "Fuse image(s)" and apply a downsampling of 4, then as far as I can tell x and y get downsampled by that factor, but z only by a factor of 2. Is this correct?

It would be really really really great to write the correct xyz pixel size into the resulting .tif file.

Thank you!
Christopher
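
Until the fusion writes the voxel size itself, one post-hoc workaround is to set the calibration of the fused TIFF from ImageJ. The voxel sizes, downsampling factors and file path below are placeholder examples; the per-axis factors should be whatever the fusion actually applied.

import ij.IJ;
import ij.ImagePlus;
import ij.measure.Calibration;

public class FixFusedCalibration {
    public static void main(String[] args) {
        double origXY = 0.65, origZ = 2.0;   // original voxel size in microns (example values)
        double dsXY = 4.0, dsZ = 2.0;        // downsampling actually applied per axis (example values)

        ImagePlus fused = IJ.openImage("fused_tp_0_ch_0.tif");   // example path
        Calibration cal = fused.getCalibration();
        cal.setUnit("micron");
        cal.pixelWidth = origXY * dsXY;
        cal.pixelHeight = origXY * dsXY;
        cal.pixelDepth = origZ * dsZ;
        IJ.save(fused, "fused_tp_0_ch_0_calibrated.tif");
    }
}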

set filename for fusion results

We would like the possibility to set the filename of the fusion output. Right now we always end up with fused_tp_0_ch_0.tif(.zip).

That's not cool having dozens of files with the same name.

Defining a dataset throws an exception

Hi guys,

First off, thanks for writing BigStitcher! I've used it often already and I'm quite happy with it. Unfortunately, after updating today, I noticed that defining a new dataset from a CZI file (either through the BigStitcher or the MultiviewReconstruction plugin panels) throws the following exception:

(Fiji Is Just) ImageJ 2.0.0-rc-61/1.51n; Java 1.8.0_66 [64-bit]; Windows Server 2012 R2 6.3; 135MB of 392869MB (<1%)

java.lang.NullPointerException
at spim.fiji.spimdata.SpimDataTools.getFilteredViewDescriptions(SpimDataTools.java:50)
at spim.fiji.spimdata.SpimDataTools.getFilteredViewDescriptions(SpimDataTools.java:43)
at spim.fiji.spimdata.explorer.FilteredAndGroupedTableModel.elements(FilteredAndGroupedTableModel.java:155)
at spim.fiji.spimdata.explorer.FilteredAndGroupedTableModel.<init>(FilteredAndGroupedTableModel.java:147)
at spim.fiji.spimdata.explorer.ViewSetupExplorerPanel.initComponent(ViewSetupExplorerPanel.java:154)
at spim.fiji.spimdata.explorer.ViewSetupExplorerPanel.<init>(ViewSetupExplorerPanel.java:118)
at spim.fiji.spimdata.explorer.ViewSetupExplorerPanel.<init>(ViewSetupExplorerPanel.java:147)
at spim.fiji.spimdata.explorer.ViewSetupExplorer.<init>(ViewSetupExplorer.java:18)
at spim.fiji.plugin.Data_Explorer.run(Data_Explorer.java:69)
at ij.IJ.runUserPlugIn(IJ.java:217)
at ij.IJ.runPlugIn(IJ.java:181)
at ij.Executer.runCommand(Executer.java:137)
at ij.Executer.run(Executer.java:66)
at java.lang.Thread.run(Thread.java:745)

Any ideas what might have changed in the latest iteration of the code?

Best,

More Log feedback during long processing steps

As a new user of this great plugin, I'd like to suggest enhancing the feedback given to the user during lengthy processing steps, to clarify whether processing is ongoing or complete. For example:

  1. At the end of 'Refine with ICP', print something like "Done". Currently the Log output is limited to the transformation model parameters. The inexperienced user is left wondering whether the command is done and he/she can proceed to the next step.
  2. During image fusion it would be a great improvement to give some feedback on what is going on, again in the Log. Depending on the data being fused this step can take a very long time, during which no progress bar is shown and the Log just says "Approximate pixel size of fused image...". Again, one wonders if the plugin is stuck or if processing is ongoing but will take just minutes, hours, days...
  3. During and at the end of 'Detect interest points' it is unclear if processing is still occurring.

BigStitcher incompatible with other Fiji plugins due to dependency version skew

Currently BigStitcher depends on (and is shipped by its update site together with) SPIM_Registration-6.5.2.jar which breaks plugins like Descriptor-based registration that depend on SPIM_Registration-5.0.16.

This is confusing for users trying out BigStitcher. It should at least be mentioned in the documentation on the wiki.

Also, if you search http://maven.imagej.net for SPIM_Registration, the latest version seems to be 5.0.18-SNAPSHOT, since all the newer development seems to happen on a topic branch only. Are there any plans to change this any time soon?
Also, what are the plans for integrating BigStitcher into a regular Fiji installation? Is that something that will likely happen within the next months (e.g. at the December Dresden hackathon?), or should we rather keep suggesting that people use a separate Fiji installation if they want to try BigStitcher?

Thanks again for the awesome plugin development! 😄

Dialogs look different in batch mode than in assistant

When you want to record macros via Batch Mode, the dialogs look different than in the assistant. That's confusing.

Maybe provide a skeleton macro which does the same as the assistant. I might be able to provide a macro soon.

But for me it's not clear if the macro does the same as the assistant.

Fusing using 2D tiles into 3D

This feature request is closely related to where #35 left off. @hoerldavid, thanks again for helping us with these issues and answering my questions.

We are developing a new acquisition plugin at LOCI to reduce the time/memory it takes to do LSM on uneven samples, where a regular 3D grid would be filled with many blank/noisy tiles. Our current workflow involves making a position list for each X, Y, and Z position, scanning them at low resolution/SNR, and deleting positions that have too low signal. We then scan the resulting grid, saving each position into individual 2D tiles.

BigStitcher is capable of reading the positions/metadata correctly, but it interprets the whole acquisition as being 2D and so can only stitch single planes. If you try to stitch all images you only get one plane, but you can manually select images from a single plane to get others out. I have a small example dataset for what we're trying to do.

There are a couple workarounds we can try, but we thought you should know that there is a good use case for this feature.

views as "group" in .czi and the autoloader

Hi David and Stephan,

a user brought me a Zeiss Lightsheet Z.1 multiview dataset where each view is contained in a different group. The rationale is to do a light-sheet alignment for each view, which is not possible when using only one group in Zen Black.

This deviates from what I usually do with multiview datasets, so I wondered whether the autoloader can handle it.
There is an option to assign the groups to angles instead of tiles, so I guess it is in principle possible to process this, or at least you have thought about it and would like to make it available:

[screenshot]

But when I want to define the .xml I get errors:

[screenshots of the error messages]

Would it be possible to make that a working feature?

Cheers,
Christopher

PhaseCorrelation feature request for minOverlap: specify maximalShift[ ]

I was wondering whether it would be possible (and would make sense) to have an option to specify the minOverlap parameter of the PhaseCorrelation2.getShift method rather as a

long[] maxShift = new long[numDimensions]

What I have in mind is to let the user limit not the overlap area, but rather the maximal shift in the different dimensions, because in some applications one knows, for example, that the shift can be much larger in the x direction than in the y direction. In addition, in terms of usability I find it easier to specify a maximal shift as a number of pixels in a specific direction than to think in terms of areas.
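
As a sketch of what such an option could look like on the caller's side, a per-dimension bound would simply be checked against each candidate shift before it is accepted; this is a standalone illustration, not part of the current PhaseCorrelation2 API.

public class MaxShiftFilter {

    // Accept a candidate shift only if its magnitude in every dimension stays within maxShift.
    public static boolean withinMaxShift(long[] shift, long[] maxShift) {
        for (int d = 0; d < shift.length; d++)
            if (Math.abs(shift[d]) > maxShift[d])
                return false;
        return true;
    }

    public static void main(String[] args) {
        long[] maxShift = { 200, 50, 10 };   // e.g. a much larger tolerance in x than in y or z
        System.out.println(withinMaxShift(new long[] { 150, 20, 5 }, maxShift));   // true
        System.out.println(withinMaxShift(new long[] { 150, 80, 5 }, maxShift));   // false
    }
}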

Intensity based weights in fusion

Hi All,

We're trying to image very large samples, and so often we have parts of our image stacks that are significantly dimmer than others.

Right now, during fusion, there is a pixel averaging step that leads to ugly dark stripes on our fused volumes. Is there a way, during the fusion step, to give the weights based on intensity? Even something like a max projection would probably work, or ideally some adjustable parameter.

Alternatively, is there an efficient way to normalize large amounts of images from many different stacks?

Thanks for the fantastic plugin!
Alon
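
To make the request concrete, here is a toy sketch of the two blending schemes mentioned, an intensity-weighted average versus a maximum, applied to the values that overlap at one output pixel; this is not how the BigStitcher fusion weights are currently computed.

public class IntensityWeightedBlend {

    // Weighted average with each value acting as its own weight: sum(v*v) / sum(v).
    public static double weightedBlend(double[] values) {
        double num = 0, den = 0;
        for (double v : values) { num += v * v; den += v; }
        return den == 0 ? 0 : num / den;
    }

    public static double maxBlend(double[] values) {
        double max = 0;
        for (double v : values) max = Math.max(max, v);
        return max;
    }

    public static void main(String[] args) {
        double[] overlap = { 900, 120 };                 // bright tile vs dim tile at the same position
        System.out.println(weightedBlend(overlap));      // ~808, dominated by the bright tile
        System.out.println(maxBlend(overlap));           // 900
    }
}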

Resaving to HDF5 does not work on Windows 7

Exception in thread "Thread-10" java.lang.NullPointerException
	at mpicbg.spim.data.XmlHelpers.pathElement(XmlHelpers.java:321)
	at mpicbg.spim.data.generic.XmlIoAbstractSpimData.toXml(XmlIoAbstractSpimData.java:183)
	at net.preibisch.mvrecon.fiji.spimdata.XmlIoSpimData2.toXml(XmlIoSpimData2.java:218)
	at net.preibisch.mvrecon.fiji.spimdata.XmlIoSpimData2.toXml(XmlIoSpimData2.java:52)
	at mpicbg.spim.data.generic.XmlIoAbstractSpimData.save(XmlIoAbstractSpimData.java:101)
	at net.preibisch.mvrecon.fiji.spimdata.XmlIoSpimData2.save(XmlIoSpimData2.java:131)
	at net.preibisch.mvrecon.fiji.spimdata.XmlIoSpimData2.save(XmlIoSpimData2.java:52)
	at net.preibisch.stitcher.gui.StitchingExplorerPanel.saveXML(StitchingExplorerPanel.java:1093)
	at net.preibisch.mvrecon.fiji.spimdata.explorer.popup.ResavePopup$MyActionListener$1.run(ResavePopup.java:212)
	at java.lang.Thread.run(Thread.java:748)

Repro: Create a Dataset from TIFs on Windows 7 and resave it to HDF5.

Tiff stack read error

I tried to open a TIFF stack in the BigStitcher plugin.
But when I try to scroll through it in the BigDataViewer, I am only able to view the first tile; I cannot scroll through to the further z-sections.

Fusion of 2D images is blank

I have recently tried stitching some 2D grids using BigStitcher and I found out that they load into the dataset and can be visualized using the BigDataViewer, but that any fusion of them comes up blank. Images taken in 3D on the same system show up fine in fusions.

I have not been able to find a fusion setting which shows the 2D grid; I am maybe missing something, but if not, this could be a bug.

I have uploaded three separate examples of this bug. All of them are low intensity, so viewing them requires changing the minimum brightness and color to 0 for the BigDataViewer, or adjusting the Window/Level for the fused image.

The "Small 2D and 3D Dataset" folder holds a minimum example of this problem. One 2x2 3D grid and one 2x2 2D grid.

NoSuchElementException with automatic loader when one file has no number

Happens on Windows and Linux.

Dataset: https://cloud.mpi-cbg.de/index.php/s/gfv7HhpXhAQlaTK

Console:

Including file \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh1.tif in dataset.
Including file \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh2.tif in dataset.
Including file \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh3.tif in dataset.
Including file \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh4.tif in dataset.
FluoviewReader initializing \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh.tif
Reading IFDs
Populating metadata
Populating OME metadata
FluoviewReader initializing \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh1.tif
Reading IFDs
Populating metadata
Populating OME metadata
FluoviewReader initializing \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh2.tif
Reading IFDs
Populating metadata
Populating OME metadata
FluoviewReader initializing \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh3.tif
Reading IFDs
Populating metadata
Populating OME metadata
FluoviewReader initializing \\fileserver.mpi-cbg.de\SarovHFSP\2018-07-19_steph\nip2-gfp_his72-mCh_his58_mCh_TRG_1668\Worm5_L3\nip2-gfp_his72-mCh_his58-mCh4.tif
Reading IFDs
Populating metadata
Populating OME metadata
VS: 0
Error: more than one View: ch0 a0 ti0 tp0 i0
VS: 0
Error: more than one View: ch1 a0 ti0 tp0 i0
VS: 0
Error: more than one View: ch2 a0 ti0 tp0 i0
true
[ERROR] Module threw exception
java.util.NoSuchElementException
	at java.util.HashMap$HashIterator.nextNode(HashMap.java:1444)
	at java.util.HashMap$KeyIterator.next(HashMap.java:1466)
	at net.preibisch.mvrecon.fiji.datasetmanager.FileListDatasetDefinition.createDataset(FileListDatasetDefinition.java:952)
	at net.preibisch.mvrecon.fiji.plugin.Define_Multi_View_Dataset.defineDataset(Define_Multi_View_Dataset.java:141)
	at net.preibisch.mvrecon.fiji.plugin.queryXML.LoadParseQueryXML.queryXML(LoadParseQueryXML.java:122)
	at net.preibisch.mvrecon.fiji.plugin.queryXML.LoadParseQueryXML.queryXML(LoadParseQueryXML.java:109)
	at net.preibisch.stitcher.plugin.BigStitcher.run(BigStitcher.java:74)
	at org.scijava.command.CommandModule.run(CommandModule.java:199)
	at org.scijava.module.ModuleRunner.run(ModuleRunner.java:168)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:127)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:66)
	at org.scijava.thread.DefaultThreadService$3.call(DefaultThreadService.java:238)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

[Screenshots win_bug_1_1 to win_bug_1_4: Fiji and the files; proper files found; metadata looks good; and crash.]

error loading 16-bit grayscale tiles (754 tiles, 3 channels per tile, 3 z-slices per channel)

I have 754 (26*29) tiles in 16-bit, grayscale format.

Tile 557:
Image_XY01_00557_Z001_CH1.tif
Image_XY01_00557_Z001_CH2.tif
Image_XY01_00557_Z001_CH4.tif
Image_XY01_00557_Z002_CH1.tif
Image_XY01_00557_Z002_CH2.tif
Image_XY01_00557_Z002_CH4.tif
Image_XY01_00557_Z003_CH1.tif
Image_XY01_00557_Z003_CH2.tif
Image_XY01_00557_Z003_CH4.tif

Tile 558:
Image_XY01_00558_Z001_CH1.tif
Image_XY01_00558_Z001_CH2.tif
Image_XY01_00558_Z001_CH4.tif
Image_XY01_00558_Z002_CH1.tif
Image_XY01_00558_Z002_CH2.tif
Image_XY01_00558_Z002_CH4.tif
Image_XY01_00558_Z003_CH1.tif
Image_XY01_00558_Z003_CH2.tif
Image_XY01_00558_Z003_CH4.tif

etc. for 754 tiles.

The formula version would be: Image_XY01_{iiiii}_Z{zzz}_CH{c}.tif

iiiii - tilenum
zzz - z-slice num
c - channel num

They are ordered in Snake Format {Right & Down}. I had previously been using the old Grid/Stitch macro in ImageJ, which worked fine. What I did was, for a given z-level, create a multi-channel image from the channels (*_Multi.tif = *CH1.tif + *CH2.tif + *CH4.tif) and then run a stitch for each z-level. Starting with Z001 I generated TileInfo.registered.txt and then fed that into the stitching for Z002, Z003, Z004, Z005. I encountered a bug in the code (mainly listed here) and after speaking with Stephan, he suggested BigStitcher, which leads me to my current situation: I can't load the data in.

Since there appears to be no way to set the z-slice, I just set the formula for that to be a time point. See as follows.

[screenshot of the filename formula dialog]

In Loading Options, I set it to "Load Raw Data", but when I proceed I get a "File X could not be opened: java.lang.ArrayIndexOutOfBoundsException. Stopping" error.

I've tried a few variations, such as just doing a single slice (Z001) and loading the tiles, but no luck. Is there something incorrect in my workflow? Is there a formula and setting that allows me to load 16-bit grayscale images for each channel at different z-slices? Thanks in advance, and sorry if I missed a memo.

find maxima for 1D phaseCorrelation does not return

Calling the PhaseCorrelation2Util.getPCMMaxima does not return for me when using a 1D PCM input. I guess the issue could be that the FourNeighborhoodExtrema.findMaxMT function is not made for 1D inputs, where there are no 4 neighbours?
https://github.com/PreibischLab/BigStitcher/blob/master/src/main/java/net/imglib2/algorithm/phasecorrelation/PhaseCorrelation2Util.java#L245

Here is the code that I used for testing:
https://github.com/tischi/fiji-plugin-imageRegistration/blob/master/src/main/java/de/embl/cba/registration/tests/TestPhaseCorrelation1D.java#L63
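
If the 4-neighbourhood search is indeed the culprit, a trivial workaround for purely 1D inputs is a two-neighbour scan, sketched below on plain arrays rather than the imglib2 types the real method operates on.

import java.util.ArrayList;
import java.util.List;

public class Maxima1D {

    // Return the indices of all local maxima of a 1D phase-correlation matrix.
    public static List<Integer> findLocalMaxima(double[] pcm) {
        List<Integer> maxima = new ArrayList<>();
        for (int i = 1; i < pcm.length - 1; i++)
            if (pcm[i] > pcm[i - 1] && pcm[i] >= pcm[i + 1])
                maxima.add(i);
        return maxima;
    }

    public static void main(String[] args) {
        double[] pcm = { 0.1, 0.4, 0.2, 0.05, 0.8, 0.3 };
        System.out.println(findLocalMaxima(pcm));   // [1, 4]
    }
}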

Automation Question

Hi - I am attempting to automate the BigStitcher plugin by getting window handles on each of the screens, setting the values I need, and clicking OK from code. I have been able to do this until I get to the screen titled "Regular Grid Options", which allows the user to choose tile configurations.

What is the class for this window? Is there an existing way I can hook into it to set its values from code?

Thank you.

All Views fusion only uses selected images

I've found some behavior in the fusion code that may be a bug or just a misunderstanding on my part. Selecting a single (or group of) tiles in the Stitching Explorer and then doing an Advanced Fusion using the "All Views" bounding box outputs an image that covers the bounding box volume, but which only has values in the selected tile.

I understand this may be intentional to deal with different channels/illuminations/etc, as in Quick Fusion, but the behavior took me by surprise. If it is intentional, could we move the warning box in the Quick Fusion section of the documentation to the top of the Fusion page and make it clear that it applies to both types of fusion? It may be a good idea to add a warning to the Advanced Fusion GUI as well.

I've attached an image of the setup below. This fusion was done in the March 6th 0.3.1 snapshot of multiview-reconstruction and the 0.3.0 version of BigStitcher.

[screenshot of the fusion setup]

importing Luxendo files

I have some datasets from a Luxendo MuVi microscope. They are in the following format:

  • each tile and channel is a separate folder with a name following the pattern stack{x}channel{y}
  • each folder contains a .h5 file and a .txt file with some metadata
  • no .xml file is provided

Is there a way to import this dataset into BigStitcher without conversion to tiff?

File loading - allow tiles to be read from filename by axis subscript

Thank you guys for this very impressive work. It's invaluable.

I'm dealing with a fairly large set of stacks. The current acquisition setup generates filenames with the pattern ...X3_Y2..., where the numbers correspond to a specific tile's subscripts. Currently, when loading the data, only the tile "index" can be used, and using both subscripts as "tiles" results in a loading error.

Would it be possible to implement a {subx}, {suby}... option in the loading that automatically reads subscripts instead of indexes?

Thanks again!
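
Until a {subx}/{suby} pattern exists, one workaround is to derive the linear tile index from the subscripts in a small preprocessing step (for example while renaming files or generating a tile-configuration file). The filename and grid width below are hypothetical.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubscriptToIndex {

    private static final Pattern SUBSCRIPTS = Pattern.compile("X(\\d+)_Y(\\d+)");

    // Recover a row-major linear tile index from X/Y subscripts in the filename,
    // so the existing index-based loading can be used. Adjust for snake order if needed.
    public static int tileIndex(String filename, int tilesPerRow) {
        Matcher m = SUBSCRIPTS.matcher(filename);
        if (!m.find())
            throw new IllegalArgumentException("no X/Y subscripts in " + filename);
        int x = Integer.parseInt(m.group(1));
        int y = Integer.parseInt(m.group(2));
        return y * tilesPerRow + x;
    }

    public static void main(String[] args) {
        System.out.println(tileIndex("stack_X3_Y2_ch0.tif", 5));   // 13
    }
}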

Recording macros for automating BigStitcher

Hi,
I have been trying to automate BigStitcher as described in the ImageJ wiki: https://imagej.net/BigStitcher_Headless.
However, when I try to record my workflow only the first command for defining the data set gets recorded. No other command (e.g. calculating shift, doing global optimization and saving) gets displayed in the recorder.
Both BigStitcher and Fiji are up to date.
I would greatly appreciate your help with this problem.

Thanks,
Dominik

Fusion to 2D TIFF series

Hello,

BigStitcher is absolutely amazing and is producing beautiful results with our data. If possible, my one request would be an option to fuse results to a 2D TIFF series rather than multipage TIFF.

Thanks!
Adam

CMD+a creates an exception on Mac

latest updates installed:

Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
	at net.preibisch.stitcher.gui.bdv.BDVFlyThrough.addCurrentViewerTransform(BDVFlyThrough.java:31)
	at net.preibisch.stitcher.gui.StitchingExplorerPanel$12.keyPressed(StitchingExplorerPanel.java:1147)
	at java.awt.AWTEventMulticaster.keyPressed(AWTEventMulticaster.java:250)
	at java.awt.Component.processKeyEvent(Component.java:6497)
	at javax.swing.JComponent.processKeyEvent(JComponent.java:2832)
	at java.awt.Component.processEvent(Component.java:6316)
	at java.awt.Container.processEvent(Container.java:2239)
	at java.awt.Component.dispatchEventImpl(Component.java:4889)
	at java.awt.Container.dispatchEventImpl(Container.java:2297)
	at java.awt.Component.dispatchEvent(Component.java:4711)
	at java.awt.KeyboardFocusManager.redispatchEvent(KeyboardFocusManager.java:1954)
	at java.awt.DefaultKeyboardFocusManager.dispatchKeyEvent(DefaultKeyboardFocusManager.java:835)
	at java.awt.DefaultKeyboardFocusManager.preDispatchKeyEvent(DefaultKeyboardFocusManager.java:1103)
	at java.awt.DefaultKeyboardFocusManager.typeAheadAssertions(DefaultKeyboardFocusManager.java:974)
	at java.awt.DefaultKeyboardFocusManager.dispatchEvent(DefaultKeyboardFocusManager.java:800)
	at java.awt.Component.dispatchEventImpl(Component.java:4760)
	at java.awt.Container.dispatchEventImpl(Container.java:2297)
	at java.awt.Window.dispatchEventImpl(Window.java:2746)
	at java.awt.Component.dispatchEvent(Component.java:4711)
	at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:760)
	at java.awt.EventQueue.access$500(EventQueue.java:97)
	at java.awt.EventQueue$3.run(EventQueue.java:709)
	at java.awt.EventQueue$3.run(EventQueue.java:703)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:84)
	at java.awt.EventQueue$4.run(EventQueue.java:733)
	at java.awt.EventQueue$4.run(EventQueue.java:731)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
	at java.awt.EventQueue.dispatchEvent(EventQueue.java:730)
	at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:205)
	at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
	at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
	at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)

Repro: create a new dataset with BigStitcher on macOS 10.11 and simply press CMD+A.

loading Micromanager2 data

I acquire images using Micromanager2. The data format is 1 channel, many tiles, filenames in name_Pos**.ome.tif format. There are two issues:

  1. minor: for correct loading, one has to choose Series=Tiles, Filename pattern=ignore, which is confusing
  2. less minor: the autoloader reads the metadata wrongly. In my data, X is decreasing and this is not recognised; see the screenshots referenced below - the first is from the autoloader, the second is the result of defining the grid manually. In this dataset, defining the grid manually is not such a big deal, but in general this means that you have to remember whether it was 3x4 or maybe 4x3 tiles, how much overlap, etc.
    I can provide this dataset via Google Drive or something if this helps.
    [screenshots attached: 2_pattern, 3_metadata, 4_manual]

Problem turning on RegistrationExplorer

If I turn on RegistrationExplorer, then switch to Stitching mode and back to Multiview mode, I cannot turn on Registration Explorer again. There are no error messages--it just doesn't launch. Closing and re-opening BigStitcher does not help. I have to close and re-open ImageJ before I can re-launch RegistrationExplorer. I am able to turn RegistrationExplorer on and off repeatedly if I do so without leaving Multiview mode.

Not parsing channels

The latest version of BigStitcher has stopped parsing my imaging channels, although it still parses angles. It reads everything as one channel. My data were acquired with MicroManager in Fiji.

Calculate Pairwise Shifts Crash when Canceled

This problem occurs when I try to cancel out of "Calculate Pairwise Shifts"
https://www.dropbox.com/s/r35a574a8mn0q2y/cancel_stitching.tif?dl=0

Also, I am wondering two things:

  1. Is there an "Ignore Zeros" option in BigStitcher?
  2. How can I simply obtain the equivalent of TileConfiguration.txt from BigStitcher? All I want is the registered coordinates of the tiles. Instead it generates a very complex XML, and the number of entries in it is much greater than the number of tiles, so we don't know which ones to use (see the sketch after this list).
    Thanks.
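
For the second question, a rough sketch of how the registered translations could be pulled out of dataset.xml with plain Java, assuming the usual SpimData layout (one ViewRegistration element per view, each containing ViewTransform elements whose affine child holds a row-major 3x4 matrix, listed outermost first). The composed translation is printed per view; element and attribute names are taken from typical BigStitcher XML files and should be checked against your own dataset.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class DumpTilePositions {

    // Multiply two row-major 3x4 affines (implicit bottom row 0 0 0 1): result = a * b.
    static double[] concat(double[] a, double[] b) {
        double[] r = new double[12];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 4; col++) {
                double s = (col == 3) ? a[row * 4 + 3] : 0;
                for (int k = 0; k < 3; k++)
                    s += a[row * 4 + k] * b[k * 4 + col];
                r[row * 4 + col] = s;
            }
        return r;
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new File("dataset.xml"));   // example path
        NodeList regs = doc.getElementsByTagName("ViewRegistration");
        for (int i = 0; i < regs.getLength(); i++) {
            Element reg = (Element) regs.item(i);
            NodeList affines = reg.getElementsByTagName("affine");
            double[] model = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0 };   // identity
            for (int j = 0; j < affines.getLength(); j++) {
                String[] v = affines.item(j).getTextContent().trim().split("\\s+");
                double[] m = new double[12];
                for (int k = 0; k < 12; k++) m[k] = Double.parseDouble(v[k]);
                model = concat(model, m);
            }
            // The translation is the last column of the composed matrix.
            System.out.println("setup " + reg.getAttribute("setup") + " tp " + reg.getAttribute("timepoint")
                    + "  x=" + model[3] + "  y=" + model[7] + "  z=" + model[11]);
        }
    }
}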
