educelab / volume-cartographer
Volumetric processing toolkit and C++ libraries for the recovery and restoration of damaged cultural materials
License: GNU General Public License v3.0
Layer generation with vc_layers or vc_layers_from_ppm uses the LineGenerator class to compute a vector of intensities for each pixel in the PPM, then saves each slice along those accumulated vectors as a separate image. Presumably, one could reconstruct the original 3D coordinates corresponding to each voxel in the layer stack like so:
pt = ppm[y, x, :3]
n = ppm[y, x, 3:6]
offset = layer_idx - num_layers // 2
pos = pt + offset * n
However, this calculation is only accurate if the neighborhood radius is an integer. Consider layer stacks generated with a radius of 1.2 and 1.5 (assuming bidirectional sampling):
from math import floor

interval = 1.0
for r in [1.2, 1.5]:
    # num_layers = floor((r - -r) / interval) + 1
    num_layers = floor(r + (r / interval)) + 1
    # offsets = min_r + idx * interval
    print([-r + idx * interval for idx in range(num_layers)])
    # r=1.2 -> [-1.2, -0.2, 0.8]
    # r=1.5 -> [-1.5, -0.5, 0.5, 1.5]
With these values, the algorithm above for reconstructing the original position would compute the wrong voxel:
for num_layers in [3, 4]:
    print([i - num_layers // 2 for i in range(num_layers)])
    # num_layers=3 -> [-1, 0, 1]
    # num_layers=4 -> [-2, -1, 0, 1]
The reconstruction code above will thus be off by a constant offset whenever the radius has a fractional part.
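A small sketch makes the mismatch concrete. This is hypothetical code assuming the bidirectional sampling model described above (it is not taken from LineGenerator itself): it compares the offsets actually sampled for a fractional radius against the integer-centered offsets the naive reconstruction assumes.

```python
from math import floor

def actual_offsets(r, interval=1.0):
    """Offsets sampled by bidirectional sampling, per the model above."""
    num_layers = floor(2 * r / interval) + 1
    return [-r + i * interval for i in range(num_layers)]

def naive_offsets(num_layers):
    """Offsets assumed by the integer-centered reconstruction above."""
    return [i - num_layers // 2 for i in range(num_layers)]

for r in (1.2, 1.5):
    actual = actual_offsets(r)
    naive = naive_offsets(len(actual))
    # The per-layer error is a constant shift along the normal.
    print(r, [round(a - n, 6) for a, n in zip(actual, naive)])
```

Note the error is the same for every layer of a given stack, which is why simple bookkeeping (recording the radius or the minimum offset) would fully resolve it.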
Unfortunately, I don't think the 3D positions in a layer stack can be exactly reconstructed without also recording the radius value used when generating the layers. As this is sometimes auto-determined from the voxel size and estimated layer thickness, it's not guaranteed to be an integer value.
I'm not sure that this is exactly a bug, per se, but it is something that should be documented and improved. I do think it would be helpful if the surface voxel were always exactly recorded when doing bidirectional sampling, but that would likely mean adjusting the requested radius in order to maintain a uniform sampling interval. Maybe that's fine? It seems fraught when the sampling interval is also not an integer, though.
Anyway, I'm willing to take thoughts or suggestions on this topic. Since the error is so small and easily solved with some bookkeeping, I don't think this is a critical issue, and we have some time to think about implications.
Pull request #62 removed the ability to import multiple volumes with a single vc_packager command, so that it could change the volume information inputs from stdin to program arguments.
Reimplement this functionality using program arguments.
I've followed Tutorial 3 in the Vesuvius Challenge. However, when I use the segmentation tool in the following step, the application crashes; here's a short video clip.
Not sure if it's relevant, but this is how I installed it 👉 #5
And my device information:
macOS: Ventura 13.2.1
CPU: 2.3 GHz dual Intel Core i5
GPU: Intel Iris Plus Graphics 640
By the way, thanks to the people behind the scenes for their hard work over the years to make this challenge possible 🙏
Can't wait to see what happens in the next few years 🚀
Need to be able to zoom out further for big slices.
I recommend updating this function as well.
Added bonuses:
- Compatible with backwards segmentation
- Does not delete segmentation results above the current segmentation job
- Displays the target segmentation layer when the segmentation job finishes
void CWindow::onSegmentationFinished(Segmenter::PointSet ps)
{
    setWidgetsEnabled(true);
    worker_progress_updater_.stop();
    worker_progress_.close();

    // Concatenate the two parts to form the complete point cloud.
    // Find the starting location in fMasterCloud.
    int i;
    for (i = 0; i < fMasterCloud.height(); i++) {
        auto masterRowI = fMasterCloud.getRow(i);
        if (ps[0][2] <= masterRowI[fUpperPart.width() - 1][2]) {
            break;
        }
    }

    // Remove the duplicated rows and insert ps in their stead.
    // If i is at the end, there are no duplicates; just append.
    fUpperPart = fMasterCloud.copyRows(0, i);
    fUpperPart.append(ps);

    // Skip any remaining rows of fMasterCloud already covered by ps.
    for (; i < fMasterCloud.height(); i++) {
        auto masterRowI = fMasterCloud.getRow(i);
        if (ps[ps.size() - 1][2] < masterRowI[fUpperPart.width() - 1][2]) {
            break;
        }
    }

    // Add the remaining rows.
    if (i < fMasterCloud.height()) {
        fUpperPart.append(fMasterCloud.copyRows(i, fMasterCloud.height()));
    }

    fMasterCloud = fUpperPart;
    statusBar->showMessage(tr("Segmentation complete"));
    fVpkgChanged = true;

    // Set the display to the target layer.
    fPathOnSliceIndex = fSegParams.targetIndex;
    CleanupSegmentation();
    UpdateView();
}
Originally posted by @schillij95 in #25 (comment)
I was testing some code to read ppm files. This code would read the ppm file, use the xyz information in the file to create a mesh in xyz space, and use the normal information in the file to create lines in xyz space extending from the mesh. After hiding most of the mesh and most of the normals (in order to create a picture I could make sense of), I got this:
I noticed there were a number of discontinuities, which seemed to be located at the boundaries of the triangles from the original obj mesh that was used to create the ppm file.
I dug into the VC code, and in PPMGenerator.cpp came across the PhongNormal function, which contains the following lines:
return cv::normalize(
(1 - nUVW[0] - nUVW[1]) * nA + nUVW[1] * nB + nUVW[2] * nC);
This formula seems odd to me; I would have expected:
return cv::normalize(
    nUVW[0] * nA + nUVW[1] * nB + nUVW[2] * nC);
I'm not currently in a position to compile and run the VC code, so I cannot test whether my version would solve the discontinuous-normal problem. But in any case, I believe the discontinuous normals could be the cause of the "sharkbite" problem that was previously reported (bug #33).
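For reference, standard barycentric (Phong) interpolation weights each vertex normal by its own barycentric coordinate, then renormalizes. A minimal sketch in Python (hypothetical illustration, not the project's C++; `phong_normal` and its arguments are names I've chosen here):

```python
import numpy as np

def phong_normal(uvw, nA, nB, nC):
    """Interpolate a surface normal at barycentric coords (u, v, w).

    With u + v + w = 1, weighting each vertex normal by its own
    coordinate is equivalent to (1 - v - w) * nA + v * nB + w * nC,
    which is what the expected formula above computes.
    """
    u, v, w = uvw
    n = u * nA + v * nB + w * nC
    return n / np.linalg.norm(n)
```

At a shared vertex (e.g. `uvw = (1, 0, 0)`) this returns exactly that vertex's normal, so the interpolated field is continuous across triangle boundaries, which is the property the buggy formula appears to break.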
PPMs are often generated from surfaces which have associated volume IDs (i.e. from VC Segmentations, as opposed to some random input mesh). However, this information is lost in the PPM encoding. As a consequence, the vc_*_from_ppm utilities generally require the --volume flag in order to function correctly.
Add the associated volume metadata to the PPM header. The setter and getter will be easy to add to the PerPixelMap class. Changing the PointSetIO interface to support extra metadata fields in the header is more complicated, but not impossible (maybe something with std::optional or std::map or whatever). Updating usages will also require some attention to detail. But generally this is fairly easy.
We could make the --volume flag required rather than defaulting to the first volume. But that makes single-volume .volpkg files more annoying to work with and ultimately just makes the user work harder.
It appears that GCC 13 started enforcing stricter adherence to the C++ standard, and we now must include cstdint in order to have access to the builtin fixed-width integer types. VC suffers from this issue in both its own codebase:
In file included from /run/media/xxxx/work/vesuvius/volume-cartographer/build/core/Version.cpp:1:
/run/media/xxxx/work/vesuvius/volume-cartographer/core/include/vc/core/Version.hpp:19:35: error: ‘uint32_t’ does not name a type
19 | static auto VersionMajor() -> uint32_t;
| ^~~~~~~~
/run/media/xxxx/work/vesuvius/volume-cartographer/core/include/vc/core/Version.hpp:6:1: note: ‘uint32_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
5 | #include <string>
+++ |+#include <cstdint>
as well as in its smgl in-source dependency.
In file included from /run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/src/Uuid.cpp:1:
/run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/include/smgl/Uuid.hpp:20:18: error: ‘uint8_t’ does not name a type
20 | using Byte = uint8_t;
| ^~~~~~~
/run/media/xxxx/work/vesuvius/volume-cartographer/build/_deps/smgl-src/smgl/include/smgl/Uuid.hpp:7:1: note: ‘uint8_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
6 | #include <string>
+++ |+#include <cstdint>
7 |
HEAD
Built from source
I tried to render an 88 keV Scroll 2 segment. This is the segmenter team's first time rendering 88 keV scroll data. The output by vc_render looks as expected.
But the layers (0.tif-64.tif) appear masked in the wrong dimension.
Note: when copying the images here, I rotated each 90 degrees clockwise.
These images were generated using the following command:
export SLICE=20230801194757 \
  && cd /Scroll2.volpkg/paths/${SLICE} \
  && nice vc_convert_pointset -i pointset.vcps -o "${SLICE}_points.obj" \
  && nice vc_render --mesh-resample-smoothing 3 -v ../../ -s "${SLICE}" -o "${SLICE}.obj" --output-ppm "${SLICE}.ppm" \
  && mkdir -p layers \
  && nice vc_layers_from_ppm -v ../../ -p "${SLICE}.ppm" --output-dir layers/ -r 32 -f tif --cache-memory-limit 50G \
  && vc_area ../.. ${SLICE} | grep cm | awk '{print $2}' | tee area_cm2.txt \
  && echo 'david' > author.txt
volume-cartographer 2.25.0 (Untracked)
Docker
[2023-08-10 20:00:16.260] [volcart] [info] Loading file...
[2023-08-10 20:00:16.298] [volcart] [info] Loaded PointSet with 849849 points
[2023-08-10 20:00:16.359] [volcart] [info] Writing to OBJ...
[2023-08-10 20:00:18.136] [volcart] [info] File written: 20230801194757_points.obj
[2023-08-10 20:00:18.225] [volcart] [info] Loading VolumePkg: Scroll2.volpkg
[2023-08-10 20:00:18.229] [volcart] [info] Created new Render graph in VolPkg: 20230810200018
[2023-08-10 20:00:32.409] [volcart] [info] ACVD: Input: 849849 verts, 1696000 faces
[2023-08-10 20:00:33.614] [volcart] [info] ACVD: Performing isotropic mesh resampling...
[2023-08-10 20:00:37.736] [volcart] [info] ACVD: Computing quadrics optimization...
[2023-08-10 20:00:38.190] [volcart] [info] ACVD: Output: 16211 verts, 31822 faces
[2023-08-10 20:00:38.465] [volcart] [info] Solving ABF++
[2023-08-10 20:00:43.526] [volcart] [info] ABF++ Iterations: 3 || Final norm: 0.00049774
[2023-08-10 20:00:43.526] [volcart] [info] Solving LSCM
[2023-08-10 20:00:44.356] [volcart] [info] L2 Norm: 1.0005, LInf Norm: 1.4752
Requested to load slice... (1-1000)
Volume Cache :: Capacity: 221 || Size: 50GB
Loading PPM...
Generating layers...
Requested to load slice... (1-1000)
Writing layers...
3.14556
This looks like it may be a meshing bug: there are visual artefacts throughout every segment we have done so far, which will prevent correct training of ink detection algorithms. Image posted in the Discord.
This has been isolated as an artefact of moving the segmentation line too far on a given segment.
This still needs to be carefully debugged/understood, so we can prevent it from happening.
There is a smoothing function to mitigate this to some extent, but we need to know what it is doing - Laplacian will damage our data.
All segmentations.
all versions
both source and docker builds for segmentation, only docker builds for running vc_render
No response
Some segments made with VC fail to render, without a clear way to fix them. There are a couple of different error messages that appear, and I will attempt to collate them here; however, the most common is:
[volcart] [error] Resolved edge pair already paired. Edge (17246, 17214) is not 2-manifold.
It is unclear what exactly produces the errors; we think it may be one point placed on top of another.
any version
None
[volcart] [error] Resolved edge pair already paired. Edge (17246, 17214) is not 2-manifold.
When running vc_packager for my very first time, as instructed in the segmentation tutorial, against the campfire zip file, I could not get it to work. I would consistently get an error stating:
Found 477 files which did not match the initial slice:
<LONG LIST OF FILES HERE>
ERROR: Slices in slice directory do not have matching properties (width/height/depth)
After a lot of banging my head against the wall, I realized it was because macOS, on a non-HFS-formatted drive (such as the ExFAT external disk I was using, or a Samba share), writes a metadata file for each actual file it performs IO on. For example, for the file /campfire/rec/0168.tif that was extracted from the zip file, there was a corresponding metadata file /campfire/rec/._0168.tif.
It seems vc_packager tries to parse these just as it would other tif files (understandably so, though this is less than desirable).
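One way to make the slice scan robust to these AppleDouble sidecars would be to filter them out before validating slice properties. A minimal sketch (hypothetical helper, not vc_packager's actual C++ code):

```python
from pathlib import Path

def slice_files(slice_dir):
    """Collect .tif slices, skipping macOS AppleDouble sidecar files.

    On non-HFS volumes (ExFAT, SMB shares), macOS writes a ._<name>
    metadata file next to each real file; these are not valid TIFFs
    and should be excluded before checking width/height/depth.
    """
    return sorted(
        p for p in Path(slice_dir).glob("*.tif")
        if not p.name.startswith("._")
    )
```

The same prefix check would be a one-line filter in the packager's directory enumeration.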
vc_packager -s ./campfire/rec -v ./campfire.volpkg -u 104 -n campfire
2.26.0-rc.3 (according to homebrew as vc_version doesn't exist in my install)
Homebrew
No response
I've followed Tutorial 3 in the Vesuvius Challenge and installed Volume Cartographer with the following command on Mac:
brew install --no-quarantine educelab/casks/volume-cartographer
This command ran successfully and I can see Volume Cartographer appear in my Applications folder. However, when I try to run
vc_render --help
the following error occurs:
dyld[4064]: Library not loaded: /usr/local/opt/libtiff/lib/libtiff.5.dylib
Referenced from: <9298E93C-EB87-3AFD-9366-F05D82C15B87> /usr/local/Caskroom/volume-cartographer/2.24.0/bin/vc_render
Reason: tried: '/usr/local/opt/libtiff/lib/libtiff.5.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/local/opt/libtiff/lib/libtiff.5.dylib' (no such file), '/usr/local/opt/libtiff/lib/libtiff.5.dylib' (no such file), '/usr/local/lib/libtiff.5.dylib' (no such file), '/usr/lib/libtiff.5.dylib' (no such file, not in dyld cache)
[1] 4064 abort vc_render --help
It seems I need to install libtiff to run this app, so I installed it manually with the command below, and after that vc_render --help works.
brew install libtiff
Not sure if this is an issue. If not, or if I'm doing something wrong, please let me know and I will close this later. Thanks!
Since the volume is lazy-loaded at runtime, missing slices can cause a number of unexpected crashes across our various programs. We need to review the current behavior on image load and its effect on the apps. We might also want to add a verification util or something.
Originally posted by csparker247 October 31, 2023
We would like to auto-orient the UV map with respect to at least the volume (i.e. no frame of reference except the volume shape), but also landmarks in the volume (e.g. top, bottom, front, back, etc.).
The appropriate place to fix UV map auto-orientation for general objects (i.e. without any frame of reference) is in the FlatteningAlgorithm::orient_uvs_ method, so that it's available to all subclasses. That's the base class for all UV algorithms, and all subclasses would need to call it inside their own compute() functions. The current implementation (which tried to align the z-axis to the v-axis) is not used because it was unreliable in practice. At the moment, the UV map orientation is determined by the two pinned edges in the OpenABF LSCM code, which is functionally arbitrary.
Auto-orientation of the UV map with respect to landmarks should be implemented as a separate function/class that takes a UV map and landmarks as input and outputs an updated UV map. We really want to auto-orient the text in a readable direction. Once a single piece of text is found and oriented, all other text should be auto-orientable as a result.
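The z-to-v alignment idea can be sketched as a least-squares fit: model z as a linear function of (u, v), then rotate the UVs so the fitted z-gradient points along +v. This is a hypothetical numpy illustration of the approach, not the FlatteningAlgorithm implementation, and `orient_uvs_to_z` is a name chosen here:

```python
import numpy as np

def orient_uvs_to_z(uv, xyz):
    """Rotate a UV map so the volume z-axis roughly aligns with +v.

    uv:  (N, 2) per-vertex UV coordinates
    xyz: (N, 3) per-vertex volume coordinates
    """
    # Fit z ~= a*u + b*v + c over all vertices.
    A = np.column_stack([uv, np.ones(len(uv))])
    (a, b, _), *_ = np.linalg.lstsq(A, xyz[:, 2], rcond=None)
    # (a, b) is the direction in UV space along which z increases.
    theta = np.arctan2(b, a)
    rot = np.pi / 2 - theta  # rotate that direction onto +v
    c, s = np.cos(rot), np.sin(rot)
    R = np.array([[c, -s], [s, c]])
    return uv @ R.T
```

A global linear fit like this is robust to local noise, which may address the unreliability of the earlier per-axis approach, though landmark-based orientation would still be needed to resolve the remaining flip ambiguity.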
#51 fixes a bug in PointSetIO that checks for file write failure. We need to review the other file writing classes to make sure they do the same checks.
It's inconvenient to keep a list of vc_render params for each separate segment, to rotate / flip the segment into an orientation in which text is readable.
It would be nice if you could specify params in the JSON file for each segment and have vc_render use those params as defaults. That way you could easily run vc_render on all segments (e.g. for f in paths/; do vc_render …) and have all the outputs be oriented properly.
Keeping a separate list of flags for vc_render for each segment. Or a separate file that contains the flags or so.
Not super high priority, just something nice to consider.
One last low-priority thing would be to make the Enter key work as 'start' in the Ending Slice box, the way Enter works in the choose-slice box.
Originally posted by @hariseldon137 in #28 (comment)
Related to the above request, the behavior of tabbing through the segmentation options can be improved. For example, hitting tab while focused on the Ending Slice entry box does not transfer focus to the Start button.
There are some leftover requests from #14 that haven't been addressed yet. These didn't really make sense with the keyboard shortcuts, so moving them here:
Scrolling in the image area is non-trivial to fix as there's default behavior defined by Qt/the OS that we have to consider. For example, I use an Apple Magic Mouse with an XY track pad that has no difficulty scrolling in the image area as is. We want to support the above while not breaking these default behaviors.
From Hari_Seldon:
- L/R arrows for previous/next (Shift modifier for x10)
- +/- for zoom in/out (no Ctrl)
- [/] for decrease/increase impact range
- remap Alt to Ctrl for scrolling the X axis
- a/d for previous/next (Shift modifier for jumping by 10)
- w/s for zoom in/out
- q/e for decrease/increase impact range
- Ctrl + mouse for X-axis panning
vc_render non-deterministically segfaults on Apple Silicon devices. To consistently reproduce, try to default-texture any segmentation:
vc_render -i Testing.volpkg -s local-reslice-particle-sim
My debugging thus far shows that this command segfaults in graph::CalculateNumVertsNode::compute() because the mesh_ pointer is NULL. I haven't determined why this is.
The error does not occur in Debug builds (at least I haven't had it happen yet), but does occur in Release and RelWithDebInfo builds. I have not been able to reproduce the issue at all on Apple Intel devices.
If you encounter this issue, try running your command again as the issue seems to be non-deterministic. If you find a command that consistently reproduces this issue, please let us know.
Add a settings window that allows the user to set up their own keyboard shortcuts.
Follow up to #14.
Users are reporting that VC ordered point sets have duplicate rows for a given slice number.
First thought is that this is caused by pointset merging after editing, but it could be caused by the algorithms themselves.
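A quick way to confirm reports of duplicate rows would be to scan an ordered point set and count rows per slice index. This is a hypothetical sketch that only loosely mirrors the structure of VC's OrderedPointSet (each row is a list of (x, y, z) points, with the z of the first point taken as the row's slice index):

```python
from collections import Counter

def duplicate_slice_rows(rows):
    """Return slice indices that appear in more than one row."""
    counts = Counter(round(row[0][2]) for row in rows)
    return sorted(z for z, n in counts.items() if n > 1)
```

Running this over point sets produced before and after editing/merging would help isolate whether the merge step or the segmentation algorithms introduce the duplicates.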
vc_area currently only supports segmentations stored inside a .volpkg file. However, we mostly want to know the surface area of the meshes that are output by vc_render.
vc_area should be extended to add support for mesh files. This should be a fairly trivial addition. We could also consider dropping support for segmentations entirely, since measuring their surface area is highly dependent on the meshing, and we currently don't allow any control over meshing in vc_area.
There are a couple of ways that I would like Docker workflow triggers to be improved:
- When a build-docker label is added to a PR, build the Docker image for the current branch, upload it (maybe? just want it for testing), and remove the label.