Comments (11)

ErlerPhilipp commented on June 12, 2024

Actually, I think we can skip the transform node altogether for now. The important part of the editor is to specify a bounding box for the reconstruction. Everything else is not strictly necessary.
A transform for the final output would be nice, so that users can e.g. scale the result to meters and center it on a specific part. But that's optional.

ErlerPhilipp commented on June 12, 2024

Example calls:
Camera Init
aliceVision_cameraInit --sensorDatabase "C:\Users\pherl\Desktop\Meshroom-2023.2.0\aliceVision\share\aliceVision\cameraSensors.db" --lensCorrectionProfileInfo "${ALICEVISION_LENS_PROFILE_INFO}" --lensCorrectionProfileSearchIgnoreCameraModel True --defaultFieldOfView 45.0 --groupCameraFallback folder --allowedCameraModels pinhole,radial1,radial3,brown,fisheye4,fisheye1,3deanamorphic4,3deradial4,3declassicld --rawColorInterpretation LibRawWhiteBalancing --viewIdMethod metadata --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/CameraInit/b90727abc0f92c40c074360d60d8f9fe21728f5c/cameraInit.sfm" --allowSingleView 1 --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/CameraInit/b90727abc0f92c40c074360d60d8f9fe21728f5c/viewpoints.sfm"

Feature Extraction
aliceVision_featureExtraction --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/CameraInit/b90727abc0f92c40c074360d60d8f9fe21728f5c/cameraInit.sfm" --masksFolder "" --describerTypes dspsift --describerPreset normal --describerQuality normal --contrastFiltering GridSort --gridFiltering True --workingColorSpace sRGB --forceCpuExtraction True --maxThreads 0 --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/FeatureExtraction/7bb71d547c48f1543cdb1cc05fe6d5908fa1d5e2" --rangeStart 0 --rangeSize 40

Image Matching
aliceVision_imageMatching --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/CameraInit/b90727abc0f92c40c074360d60d8f9fe21728f5c/cameraInit.sfm" --featuresFolders "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/FeatureExtraction/7bb71d547c48f1543cdb1cc05fe6d5908fa1d5e2" --method SequentialAndVocabularyTree --tree "C:\Users\pherl\Desktop\Meshroom-2023.2.0\aliceVision\share\aliceVision\vlfeat_K80L3.SIFT.tree" --weights "" --minNbImages 0 --maxDescriptors 500 --nbMatches 40 --nbNeighbors 5 --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/ImageMatching/065c74f91d227855884634da377e3e3826ca767a/imageMatches.txt"

Feature Matching
aliceVision_featureMatching --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/CameraInit/b90727abc0f92c40c074360d60d8f9fe21728f5c/cameraInit.sfm" --featuresFolders "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/FeatureExtraction/7bb71d547c48f1543cdb1cc05fe6d5908fa1d5e2" --imagePairsList "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/ImageMatching/065c74f91d227855884634da377e3e3826ca767a/imageMatches.txt" --describerTypes dspsift --photometricMatchingMethod ANN_L2 --geometricEstimator acransac --geometricFilterType fundamental_matrix --distanceRatio 0.8 --maxIteration 2048 --geometricError 0.0 --knownPosesGeometricErrorMax 5.0 --minRequired2DMotion -1.0 --maxMatches 0 --savePutativeMatches False --crossMatching False --guidedMatching False --matchFromKnownCameraPoses False --exportDebugFiles False --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/FeatureMatching/a768effb45b2427e6223536e39a8fe5b34b77ec1" --rangeStart 0 --rangeSize 20

Structure from Motion
aliceVision_incrementalSfM --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/CameraInit/b90727abc0f92c40c074360d60d8f9fe21728f5c/cameraInit.sfm" --featuresFolders "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/FeatureExtraction/7bb71d547c48f1543cdb1cc05fe6d5908fa1d5e2" --matchesFolders "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/FeatureMatching/a768effb45b2427e6223536e39a8fe5b34b77ec1" --describerTypes dspsift --localizerEstimator acransac --observationConstraint Scale --localizerEstimatorMaxIterations 4096 --localizerEstimatorError 0.0 --lockScenePreviouslyReconstructed False --useLocalBA True --localBAGraphDistance 1 --nbFirstUnstableCameras 30 --maxImagesPerGroup 30 --bundleAdjustmentMaxOutliers 50 --maxNumberOfMatches 0 --minNumberOfMatches 0 --minInputTrackLength 2 --minNumberOfObservationsForTriangulation 2 --minAngleForTriangulation 3.0 --minAngleForLandmark 2.0 --maxReprojectionError 4.0 --minAngleInitialPair 5.0 --maxAngleInitialPair 40.0 --useOnlyMatchesFromInputFolder False --useRigConstraint True --rigMinNbCamerasForCalibration 20 --lockAllIntrinsics False --minNbCamerasToRefinePrincipalPoint 3 --filterTrackForks False --computeStructureColor True --initialPairA "" --initialPairB "" --interFileExtension .abc --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/StructureFromMotion/79d690639eb4ec5db3c82d212117b127b6432147/sfm.abc" --outputViewsAndPoses "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/StructureFromMotion/79d690639eb4ec5db3c82d212117b127b6432147/cameras.sfm" --extraInfoFolder "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/StructureFromMotion/79d690639eb4ec5db3c82d212117b127b6432147"

SfMTransform
aliceVision_sfmTransform --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/StructureFromMotion/79d690639eb4ec5db3c82d212117b127b6432147/sfm.abc" --method manual --manualTransform 0.0,0.0,0.0,0.0,0.0,0.0,1.0 --landmarksDescriberTypes sift,dspsift,akaze --scale 1.0 --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/SfMTransform/9879d91b1963063be49cef1e1cd8c7183890b06e/sfm.abc" --outputViewsAndPoses "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/SfMTransform/9879d91b1963063be49cef1e1cd8c7183890b06e/cameras.sfm"

Prepare Dense Scene
aliceVision_prepareDenseScene --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/SfMTransform/9879d91b1963063be49cef1e1cd8c7183890b06e/sfm.abc" --outputFileType exr --saveMetadata True --saveMatricesTxtFiles False --evCorrection False --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/PrepareDenseScene/61b60df8ca44abab29cdae4493b249bbbc36c54f" --rangeStart 0 --rangeSize 40

ExportColoredPointCloud
aliceVision_exportColoredPointCloud --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/SfMTransform/9879d91b1963063be49cef1e1cd8c7183890b06e/sfm.abc" --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/ExportColoredPointCloud/9196ffaa386983c0dff76b765e7461e7259d6f9f/pointCloud.abc"

ConvertSfMFormat
aliceVision_convertSfMFormat --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/SfMTransform/9879d91b1963063be49cef1e1cd8c7183890b06e/sfm.abc" --describerTypes dspsift --views True --intrinsics True --extrinsics True --structure True --observations True --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/ConvertSfMFormat/e09417617ad2cae5fb7217876efb4993908a465f/sfm.ply"

Depth Map
aliceVision_depthMapEstimation --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/SfMTransform/9879d91b1963063be49cef1e1cd8c7183890b06e/sfm.abc" --imagesFolder "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/PrepareDenseScene/61b60df8ca44abab29cdae4493b249bbbc36c54f" --downscale 4 --minViewAngle 2.0 --maxViewAngle 70.0 --tileBufferWidth 1024 --tileBufferHeight 1024 --tilePadding 64 --autoAdjustSmallImage True --chooseTCamsPerTile True --maxTCams 10 --sgmScale 2 --sgmStepXY 2 --sgmStepZ -1 --sgmMaxTCamsPerTile 4 --sgmWSH 4 --sgmUseSfmSeeds True --sgmSeedsRangeInflate 0.2 --sgmDepthThicknessInflate 0.0 --sgmMaxSimilarity 1.0 --sgmGammaC 5.5 --sgmGammaP 8.0 --sgmP1 10.0 --sgmP2Weighting 100.0 --sgmMaxDepths 1500 --sgmFilteringAxes "YX" --sgmDepthListPerTile True --sgmUseConsistentScale False --refineEnabled True --refineScale 1 --refineStepXY 1 --refineMaxTCamsPerTile 4 --refineSubsampling 10 --refineHalfNbDepths 15 --refineWSH 3 --refineSigma 15.0 --refineGammaC 15.5 --refineGammaP 8.0 --refineInterpolateMiddleDepth False --refineUseConsistentScale False --colorOptimizationEnabled True --colorOptimizationNbIterations 100 --sgmUseCustomPatchPattern False --refineUseCustomPatchPattern False --exportIntermediateDepthSimMaps False --exportIntermediateNormalMaps False --exportIntermediateVolumes False --exportIntermediateCrossVolumes False --exportIntermediateTopographicCutVolumes False --exportIntermediateVolume9pCsv False --exportTilePattern False --nbGPUs 0 --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/DepthMap/e9b865d6643dd0ef5972d8c3494b1874d0a91547" --rangeStart 0 --rangeSize 3

Depth Map Filter
aliceVision_depthMapFiltering --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/SfMTransform/9879d91b1963063be49cef1e1cd8c7183890b06e/sfm.abc" --depthMapsFolder "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/DepthMap/e9b865d6643dd0ef5972d8c3494b1874d0a91547" --minViewAngle 2.0 --maxViewAngle 70.0 --nNearestCams 10 --minNumOfConsistentCams 3 --minNumOfConsistentCamsWithLowSimilarity 4 --pixToleranceFactor 2.0 --pixSizeBall 0 --pixSizeBallWithLowSimilarity 0 --computeNormalMaps False --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/DepthMapFilter/fad1cb48e702e4fa2c92cf297b7618f58a23a356" --rangeStart 0 --rangeSize 10

Meshing
aliceVision_meshing --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/SfMTransform/9879d91b1963063be49cef1e1cd8c7183890b06e/sfm.abc" --depthMapsFolder "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/DepthMapFilter/fad1cb48e702e4fa2c92cf297b7618f58a23a356" --estimateSpaceFromSfM True --estimateSpaceMinObservations 3 --estimateSpaceMinObservationAngle 10.0 --maxInputPoints 50000000 --maxPoints 5000000 --maxPointsPerVoxel 1000000 --minStep 2 --partitioning singleBlock --repartition multiResolution --angleFactor 15.0 --simFactor 15.0 --pixSizeMarginInitCoef 2.0 --pixSizeMarginFinalCoef 4.0 --voteMarginFactor 4.0 --contributeMarginFactor 2.0 --simGaussianSizeInit 10.0 --simGaussianSize 10.0 --minAngleThreshold 1.0 --refineFuse True --helperPointsGridSize 10 --nPixelSizeBehind 4.0 --fullWeight 1.0 --voteFilteringForWeaklySupportedSurfaces True --addLandmarksToTheDensePointCloud False --invertTetrahedronBasedOnNeighborsNbIterations 10 --minSolidAngleRatio 0.2 --nbSolidAngleFilteringIterations 2 --colorizeOutput False --maxNbConnectedHelperPoints 50 --saveRawDensePointCloud False --exportDebugTetrahedralization False --seed 0 --verboseLevel info --outputMesh "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/Meshing/405e88218b3f4ac0b78047c8950c51f7d53755f9/mesh.obj" --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/Meshing/405e88218b3f4ac0b78047c8950c51f7d53755f9/densePointCloud.abc"

Mesh Filtering
aliceVision_meshFiltering --inputMesh "e:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/Meshing/405e88218b3f4ac0b78047c8950c51f7d53755f9/mesh.obj" --keepLargestMeshOnly False --smoothingSubset all --smoothingBoundariesNeighbours 0 --smoothingIterations 5 --smoothingLambda 1.0 --filteringSubset all --filteringIterations 1 --filterLargeTrianglesFactor 60.0 --filterTrianglesRatio 0.0 --verboseLevel info --outputMesh "e:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/MeshFiltering/c6b9c2e5ebed34188f4b60c4a3843fc7259fada8/mesh.obj"

MeshDecimate
aliceVision_meshDecimate --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/MeshFiltering/c6b9c2e5ebed34188f4b60c4a3843fc7259fada8/mesh.obj" --simplificationFactor 0.5 --nbVertices 0 --minVertices 0 --maxVertices 100000 --flipNormals False --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/MeshDecimate/5f47d58c696da73476793955bacf1dbf587660ba/mesh.obj"

Texturing
aliceVision_texturing --input "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/Meshing/405e88218b3f4ac0b78047c8950c51f7d53755f9/densePointCloud.abc" --imagesFolder "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/PrepareDenseScene/61b60df8ca44abab29cdae4493b249bbbc36c54f" --inputMesh "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/MeshDecimate/5f47d58c696da73476793955bacf1dbf587660ba/mesh.obj" --inputRefMesh "" --textureSide 2048 --downscale 8 --outputMeshFileType obj --colorMappingFileType jpg --unwrapMethod Basic --useUDIM True --fillHoles False --padding 5 --multiBandDownscale 4 --multiBandNbContrib 5 4 3 2 --useScore True --bestScoreThreshold 0.1 --angleHardThreshold 90.0 --workingColorSpace sRGB --outputColorSpace AUTO --correctEV False --forceVisibleByAllVertices False --flipNormals False --visibilityRemappingMethod PullPush --subdivisionTargetRatio 0.8 --verboseLevel info --output "E:/GoogleDrive/Pix2Model/elephants meshroom/MeshroomCache/Texturing/cece8ca9f920246842eb74fb65d1e09427964384"

See elephants test scene for details.

joschi1212 commented on June 12, 2024

Instead of using the AliceVision library directly, we can use Meshroom's command-line feature:
https://meshroom-manual.readthedocs.io/en/latest/feature-documentation/cmd/photogrammetry.html

Configure a pipeline in Meshroom and save it as e.g. default_pipeline.mg.

Official site

If you use the application from the official site (https://alicevision.org/#meshroom), the executable is inside the root folder.
A simple shell script to start the pipeline looks like this:

#!/bin/sh

TIME=$(date +%s)

/home/joschi/Downloads/Meshroom-2023.2.0-av3.1.0-centos7-cuda11.3.1/meshroom_batch -i ../meshroom_test/test_images -p ./default_pipeline.mg --cache /home/joschi/Downloads/meshroom_test/$TIME

Use an absolute path for the cache folder, otherwise it messes things up. The pipeline should start and run through without trouble. The output of each node is in the corresponding cache folder.

Inside Docker

If you want to start it from inside Docker with this Dockerfile (92e8449), you need to build the image with sudo docker build -t meshroom -f Dockerfile.meshroom . from inside the Pix2Model root folder.

Then run the container with docker run -it --runtime=nvidia meshroom. Test your setup by running nvcc --version and nvidia-smi inside the container. If the output looks good, you did a great job!

The executable is in /opt/Meshroom_bundle.

Upload a folder with your images and pipeline file to the container, e.g. /opt/Test_data.
meshroom_batch searches for a config file (config.ocio) in the wrong place; rename the folder AliceVision_install to AliceVision_bundle to solve the issue.
A simple shell script to start the pipeline looks like this:

#!/bin/sh
TIME=$(date +%s)
./../Meshroom_bundle/meshroom_batch -i ./test_images -p ./default_pipeline.mg --cache /opt/Test_data/$TIME

Use an absolute path for the cache folder, otherwise it messes things up. The pipeline should start and run through without trouble. The output of each node is in the corresponding cache folder.

ErlerPhilipp commented on June 12, 2024

If I understand this correctly, we need to adapt this config for the editor, right?
Meaning that we need to store the bounding box for meshing, which we get from the editor, in the config.

Also, do we need separate parts for before and after the editor?

joschi1212 commented on June 12, 2024

Right, currently I am working on separating the pipeline into two steps: the SfM part and the reconstruction part. I attach a little test environment in Python in case someone wants to test it.
test_meshroom.zip

Currently I am struggling with the reconstruction part: the provided input seems wrong and the reconstruction fails. I also attach an error and a log file.

error.txt
log.txt

When this is fixed I will look into providing a custom bounding box for the reconstruction.

ErlerPhilipp commented on June 12, 2024

Are you sure that it runs through? If I see this right, there are only 34 images. There is a chance that the registration failed completely and SfM produced no results.
If the folder structure is a problem, you could try the Publish node. You could probably copy the files from [guid]/sfm etc. to a fixed folder, as sketched below.
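
A minimal sketch of that copy step, assuming the cache layout from the calls above (the node list and the target folder are hypothetical):

#!/usr/bin/env python3
# Sketch: copy node outputs out of the hashed cache folders into a fixed location.
# The cache layout (MeshroomCache/<NodeType>/<uid>/...) matches the calls above;
# the node list and target folder are hypothetical.
import glob
import shutil

CACHE = "/opt/Test_data/run1/MeshroomCache"  # assumed cache path

for node in ("StructureFromMotion", "PrepareDenseScene"):
    # Each node writes into MeshroomCache/<NodeType>/<uid>.
    for uid_dir in glob.glob(f"{CACHE}/{node}/*"):
        shutil.copytree(uid_dir, f"/opt/Test_data/fixed/{node}", dirs_exist_ok=True)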

joschi1212 commented on June 12, 2024

The images were fine. It seems that meshroom_batch is not meant to be used on incomplete pipelines, meaning two separate pipelines that are split like this won't work:
[Screenshots of the two split pipelines]

But we can use a single complete pipeline and specify which node should be executed with the parameter --toNode. It will also execute all of the node's dependencies, and with the correct input parameters it will skip already executed nodes (sketched below).

With this it is very easy to separate the pipeline into two steps. Here is the working Python script:
meshroom_test.zip
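
For reference, a minimal sketch of what such a two-step driver can look like (the real version is in the zip above; the executable path, cache path and node names are assumptions based on the examples in this thread):

#!/usr/bin/env python3
# Sketch of a two-step meshroom_batch driver; paths and node names are assumptions.
import subprocess

MESHROOM_BATCH = "/opt/Meshroom_bundle/meshroom_batch"
IMAGES = "./test_images"
PIPELINE = "./default_pipeline.mg"
CACHE = "/opt/Test_data/run1"  # absolute path, see the note on the cache folder above

def run_to_node(node):
    # Runs the pipeline up to (and including) the given node; nodes that were
    # already computed into the same cache are skipped.
    subprocess.run([MESHROOM_BATCH, "-i", IMAGES, "-p", PIPELINE,
                    "--cache", CACHE, "--toNode", node], check=True)

# Step 1: SfM part, producing the point cloud shown in the editor.
run_to_node("StructureFromMotion")

# ... the user adjusts the bounding box in the editor ...

# Step 2: reconstruction part; the cached SfM nodes are skipped.
run_to_node("Texturing")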

joschi1212 commented on June 12, 2024

Another problem I encountered is with the sfmTransform node. The transformation is applied to the point cloud, but the reconstruction then seems to use an inversely transformed version of the point cloud. There is an issue that says this bug is fixed in the new version, but it seems to persist: alicevision/Meshroom#1994

This bug will make transforming the point cloud a little bit trickier...

ErlerPhilipp commented on June 12, 2024

So the 2nd cameraInit does use the outputs from the 1st part? Or do you need to copy data manually?

For the bug, I see 2 options:

  1. Make a fork and pull request for this bug. Unless they provide docker images with nightly updates, we would need to pull from the fork and compile ourselves.
  2. Fix the sfmTransform outputs. This would require messing with the SfMData files, which could be done by a small script, e.g. in Python; see the sketch below.
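
A minimal sketch of option 2, assuming a pure translation and the JSON flavor of the SfMData format (an .abc output would first need a pass through aliceVision_convertSfMFormat); the field names poses/pose/transform/center and structure/X should be verified against an actual file:

#!/usr/bin/env python3
# Sketch: re-apply a translation to a JSON SfMData file. Field names are
# assumptions based on the AliceVision JSON format; verify before use.
import json

def translate_sfm(path_in, path_out, t):
    with open(path_in) as f:
        sfm = json.load(f)
    # Camera positions: pose centers are stored as three strings.
    for pose in sfm.get("poses", []):
        center = pose["pose"]["transform"]["center"]
        for i in range(3):
            center[i] = str(float(center[i]) + t[i])
    # Landmark positions of the sparse point cloud.
    for landmark in sfm.get("structure", []):
        X = landmark["X"]
        for i in range(3):
            X[i] = str(float(X[i]) + t[i])
    with open(path_out, "w") as f:
        json.dump(sfm, f, indent=4)

translate_sfm("cameras.sfm", "cameras_fixed.sfm", (0.0, 0.0, 1.0))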

Am I missing anything?

joschi1212 commented on June 12, 2024

The problem is that the second cameraInit won't take the input from the first part. The provided screenshots are from the setup where it doesn't work. Sorry for the confusion.
The solution is to re-use one complete pipeline and define which nodes should be executed, much like in Meshroom where you right-click a node and click "Compute". Meshroom will execute the selected node and all its dependencies, also skipping nodes that were previously computed, shown here:
[Screenshot: Meshroom node graph with previously computed nodes shown in green]
If we execute Texturing, it will skip every green node and start with the PrepareDenseScene node. It also recognizes if a node has changed. That means that if we change the bounding box of the Meshing node, it will automatically recompute only the last three nodes: Meshing, MeshFiltering and Texturing. So a custom bounding box is also working now; see the sketch below.
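
A minimal sketch of that step, assuming .mg files are plain JSON and that the Meshing node exposes useBoundingBox / boundingBox attributes as in recent Meshroom versions (attribute names and values should be checked against an actual .mg file):

#!/usr/bin/env python3
# Sketch: write the editor's bounding box into the Meshing node of a saved .mg
# pipeline, then re-run only the affected tail. The attribute names
# (useBoundingBox, boundingBox with boxTranslation/boxRotation/boxScale) are
# assumptions based on recent Meshroom versions.
import json
import subprocess

PIPELINE = "./default_pipeline.mg"
CACHE = "/opt/Test_data/run1"

with open(PIPELINE) as f:
    pipeline = json.load(f)

for name, node in pipeline["graph"].items():
    if node["nodeType"] == "Meshing":
        node["inputs"]["useBoundingBox"] = True
        node["inputs"]["boundingBox"] = {
            "boxTranslation": {"x": 0.0, "y": 0.0, "z": 0.0},
            "boxRotation": {"x": 0.0, "y": 0.0, "z": 0.0},
            "boxScale": {"x": 2.0, "y": 2.0, "z": 2.0},
        }

with open(PIPELINE, "w") as f:
    json.dump(pipeline, f, indent=4)

# Meshing's inputs changed, so only Meshing, MeshFiltering and Texturing are
# recomputed; everything before them is still cached.
subprocess.run(["meshroom_batch", "-i", "./test_images", "-p", PIPELINE,
                "--cache", CACHE, "--toNode", "Texturing"], check=True)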

Regarding the bug:
A third option would be skipping translation transforms completely, as rotations and scale transforms seem to work.
Another point would be thinking about dropping the point cloud transform feature entirely, as it would also mean that we need to recompute the whole reconstruction pipeline. That doesn't make much sense to me when we are also able to transform the reconstructed mesh in a fraction of the time.
I will give this another thought, and maybe we can discuss this further during our next meeting.

ErlerPhilipp commented on June 12, 2024

too much work for now
