Comments (8)
I guess you are asking why we use the edited frames to optimize NVF rather than directly adopting them as the editing result, right?
That is because the edited frames generated by IP2P+ are not temporally consistent and cannot be concatenated into a coherent video.
I don't fully understand your last question: "Did you do the ablations about the additional optimization of the NVF in the second stage?" Maybe you can explain it in more detail :)
from nvedit.
Hi! Thanks for your attention; let me clarify my questions:
- At inference you only use NVF, so how do you inject the instruction/prompt/text information for editing?
- As I understand it, NVF can reconstruct the video, so why do you input the rendered frames to IP2P+?
from nvedit.
Notice that NVF itself does not have any editing capability; we impart the editing effect in the second stage (the field editing stage) of training.
1. In the second stage, we optimize NVF with edited frames, which are generated based on the prompt information. So after training we can directly use NVF to render the video with the desired editing effect.
2. IP2P+ is required to edit the rendered frames in the second stage; our editing effect mainly depends on it.
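A rough sketch of what this field editing stage might look like, assuming a hypothetical `nvf` module that maps pixel coordinates to RGB values and an `ip2p_edit` function standing in for IP2P+; this is illustrative only, not the authors' actual implementation:

```python
import torch

def field_editing_stage(nvf, ip2p_edit, coords_per_frame, instruction,
                        num_iters=200, lr=1e-3):
    """Distil per-frame instruction-guided edits back into the video field.

    nvf:             hypothetical module mapping coordinates -> RGB values
    ip2p_edit:       hypothetical stand-in for IP2P+ (frozen frame editor)
    coords_per_frame: list of coordinate tensors, one per video frame
    """
    opt = torch.optim.Adam(nvf.parameters(), lr=lr)
    for _ in range(num_iters):
        for coords in coords_per_frame:
            rendered = nvf(coords)            # render the frame from NVF
            with torch.no_grad():             # the editor is not trained
                target = ip2p_edit(rendered, instruction)
            # Pull the field toward the edited frame
            loss = torch.nn.functional.mse_loss(rendered, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return nvf
```

Because the loss is averaged over all frames of the same underlying field, the per-frame inconsistencies of the editor get smoothed into one coherent video.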
You may want to read the "Field editing stage" paragraph in Sec. 3.2 of our paper :)
Thanks for your attention to our work. If you have any questions, please let me know and I will do my best to answer.
from nvedit.
So at inference you input the instruction/prompt/text to the NVF and don't need IP2P+? Or do you need to optimize the NVF separately for each video?
from nvedit.
During inference, there is no need to input any instruction to NVF. We only input the coordinates and get pixels, as shown in Eq. 6 of our paper.
For a different video, we need to retrain NVF :)
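A minimal sketch of that coordinate-only inference, again with `nvf` as a hypothetical coordinate-to-RGB module (names and coordinate layout are assumptions, not the paper's code); note that no text prompt or editor appears anywhere:

```python
import torch

def render_video(nvf, num_frames, height, width):
    """Render a video by querying the trained field at (x, y, t) coordinates."""
    frames = []
    for t in range(num_frames):
        # Build an (H*W, 3) grid of normalized (x, y, t) coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(0, 1, height),
            torch.linspace(0, 1, width),
            indexing="ij",
        )
        ts = torch.full_like(xs, t / max(num_frames - 1, 1))
        coords = torch.stack([xs, ys, ts], dim=-1).reshape(-1, 3)
        with torch.no_grad():
            rgb = nvf(coords)                 # (H*W, 3) pixel values
        frames.append(rgb.reshape(height, width, 3))
    return torch.stack(frames)                # (T, H, W, 3) video tensor
```

The editing effect is already baked into the field's weights during the second stage, which is why inference needs nothing but coordinates.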
from nvedit.
Excuse me :) May I ask if my response has solved your questions?
from nvedit.
Yes, I got it, and so nice of you. Sorry for the late reply; I was on a business trip.
from nvedit.
I will temporarily close this issue; you are welcome to reopen it if you need :)
from nvedit.