
Comments (3)

VladimirYugay avatar VladimirYugay commented on June 12, 2024

Hey there!

  1. Our method supports RGB-D input only. Potentially, you could plug in a pre-trained monocular depth estimator and run the method on the predicted depth; for example, AdaBins (the official implementation supports higher image resolutions) might be a good starting point for depth estimation
  2. So you would run the method, obtain segments of the scene, and then merge them (we'll provide a script for that). Afterwards, you'll be able to render the reconstructed scene in real time
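The merge step described above can be sketched as concatenating per-segment Gaussian parameters into one scene. This is a hypothetical illustration, not the repo's actual merge script; the parameter names (`means`, `opacities`) and the dict-of-arrays layout are assumptions:

```python
import numpy as np

def merge_segments(segments):
    """Merge per-segment Gaussian parameter dicts into one scene.

    Each segment is assumed to be a dict of equally-keyed NumPy arrays
    (e.g. means, opacities), with the leading axis indexing Gaussians.
    """
    keys = segments[0].keys()
    return {k: np.concatenate([s[k] for s in segments], axis=0) for k in keys}

# Two toy segments with 2 and 3 Gaussians each
seg_a = {"means": np.zeros((2, 3)), "opacities": np.ones(2)}
seg_b = {"means": np.ones((3, 3)), "opacities": np.full(3, 0.5)}
scene = merge_segments([seg_a, seg_b])
print(scene["means"].shape)  # (5, 3)
```

Once the arrays are concatenated, the combined scene can be handed to any Gaussian-splatting renderer as a single model.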

from gaussian-slam.

DiamondGlassDrill avatar DiamondGlassDrill commented on June 12, 2024
  1. Thanks @VladimirYugay, that sounds perfect, and thanks for the additional comment about AdaBins. This will be really helpful for advancing my current project. I have a couple more questions:

  2. Regarding the room scene demonstrated in the video: am I correct in understanding that it was not recorded and live-stitched as depicted, but rather that a pre-rendered room is shown in the step-by-step procedure? If so, would it be possible for me to capture photos or videos myself and then use the described method and merging process to visualize the scene? I'm also seeking clarification on whether it's currently feasible to take an image frame, instantly create a Gaussian-splatting representation from it in real time, and then perform SLAM on a sequential frame-by-frame basis.

  3. Do you already know whether it will be possible to release the code under an Apache 2.0 or MIT license?

Thanks in advance and happy holidays.


VladimirYugay avatar VladimirYugay commented on June 12, 2024
  1. I am glad that it helped!

  2. We recorded the video of the scene after the segments were stitched. Potentially, you could implement stitching in parallel while SLAM reconstructs the segments, but we didn't do that due to time constraints. Regarding a custom RGB video, I think it is doable. I can imagine it working as follows: you record a video, run dense depth prediction on every frame, run SLAM on the resulting RGB-D stream (you'll be able to plug in any tracker you want, e.g. ORB-SLAM, DROID-SLAM, etc.), stitch the segments, and then obtain a scene that is renderable in real time.
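The custom-video pipeline outlined above (record, predict depth per frame, run SLAM, stitch) could be sketched as follows. All function bodies here are hypothetical stand-ins, not the actual Gaussian-SLAM, AdaBins, or tracker APIs; only the pipeline shape follows the description:

```python
def predict_depth(rgb_frame):
    # Stand-in for a monocular depth network such as AdaBins:
    # returns a depth map with the same spatial shape as the frame
    return [[1.0 for _ in row] for row in rgb_frame]

def run_slam(rgbd_frames, segment_size=2):
    # Stand-in tracker/mapper: groups consecutive RGB-D frames into
    # segments, mirroring Gaussian-SLAM's segment-by-segment reconstruction
    return [rgbd_frames[i:i + segment_size]
            for i in range(0, len(rgbd_frames), segment_size)]

def stitch(segments):
    # Merge the per-segment reconstructions into one renderable scene
    return [frame for segment in segments for frame in segment]

# Toy "video": four tiny RGB frames
video = [[[0, 0]], [[1, 1]], [[2, 2]], [[3, 3]]]
rgbd = [(rgb, predict_depth(rgb)) for rgb in video]  # densify with depth
scene = stitch(run_slam(rgbd))                       # SLAM, then stitch
print(len(scene))  # 4
```

In a real setup, `run_slam` would be replaced by Gaussian-SLAM with a tracker of your choice, and `stitch` by the merge script mentioned earlier.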

  3. Regarding licensing, it's a pain point right now; I just don't know. It is also the main reason why we can't release everything straight away.

