Comments (5)
I think the problem is that your dataset is pretty large. In ICP, distances between every pair of points need to be computed, so you end up with tensors of O(N^2) elements. In your case, that is more than 2^31 elements, which can lead to overflow issues when indexing those tensors. I have run into this before; unfortunately it appears to be inherent to the framework, so there is nothing I can do to fix it in this package.
But what you can do is subsample your arrays, e.g. take only every 10th point. That should still be sufficient to align the datasets and will also run faster.
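The subsampling suggested above can be done with plain numpy slicing; a minimal sketch, where the array name and point count are illustrative, not from the original thread:

```python
import numpy as np

# Hypothetical point cloud that is too large for an O(N^2)
# pairwise-distance tensor.
points = np.random.rand(100_000, 3)

# Keep only every 10th point before running ICP.
subsampled = points[::10]
print(subsampled.shape)  # (10000, 3)
```

With a tenth of the points, the pairwise-distance tensor shrinks by a factor of 100.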
from icp.
Hello @raphael2692
Without access to the data, it is hard to judge what the reason for the NaNs is. If you are dealing with a mesh, you might have to reshape the data so that the first dimension represents the number of data points and the second the features, e.g. the x, y, and z coordinates. Your arrays must therefore have shape N x 3.
You should also check whether the example notebook runs in your environment, to rule out a dependency issue.
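The reshape to N x 3 described above can be sketched like this; the array here is a made-up stand-in for the mesh data:

```python
import numpy as np

# Hypothetical flattened coordinate array; ICP expects shape (N, 3),
# one row per point with x, y, z columns.
flat = np.arange(12, dtype=np.float64)

points = flat.reshape(-1, 3)  # -1 infers N from the array size
assert points.ndim == 2 and points.shape[1] == 3
print(points.shape)  # (4, 3)
```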
The example runs fine. The input shape is the following:
template_mesh = pv.PolyData("template_norm.obj")
print(template_mesh.points.shape)
print(template_mesh.points)
>>>(28672, 3)
>>>pyvista_ndarray([[ 435.51758, 68.87077, 492.01025],
[ 453.29355, 75.78145, 492.64285],
[ 449.05862, 71.8935 , 490.66092],
...,
[ 601.823 , 1450.805 , 335.7489 ],
[ 557.8587 , 1453.4105 , 323.3078 ],
[1094.9977 , 1333.2723 , 105.14885]], dtype=float32)
The same goes for the other input, scene_norm.obj.
Just in case you want to know: I downsampled the mesh via Poisson-disk sampling to 1000 vertices, but I still got the NaNs. I did manage to get a result by normalizing the input arrays via sklearn, although I am not sure how to interpret the result (the real scale difference from scene to template is a factor of 4).
>>>Scale: [0.99877591 0.99877591 0.99877591]
>>>Offset: [0.00057436 0.00074789 0.00100556]
I see, so maybe the high variance in your data caused numerical instabilities. Thank you, I might incorporate normalization in a future version of the package.
If you normalize both datasets independently, the scale difference will disappear automatically. To preserve the real scale difference, you have to scale both datasets with the same mean and standard deviation.
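The shared normalization described above can be sketched as follows; the arrays and the 4x factor are illustrative stand-ins for the data in this thread:

```python
import numpy as np

# Illustrative stand-ins for the two point clouds.
template = np.random.rand(1000, 3) * 100.0
scene = template * 4.0  # scene is 4x larger, as reported above

# Normalize BOTH clouds with the template's mean and std, so the
# true relative scale (here, 4x) is preserved after normalization.
mean = template.mean(axis=0)
std = template.std(axis=0)

template_n = (template - mean) / std
scene_n = (scene - mean) / std
```

Normalizing each cloud with its own statistics instead would map both to roughly unit scale, which is why the estimated scale came out near 1.0.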