
Comments (11)

imerse commented on May 24, 2024

First of all, thank you very much for your patience. Now I have a general understanding of the code. I modified the vshader and fshader in FacePainter.cpp to read a texture and then render it onto the iris in the annotateEye method. It seems to be working.
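
(For readers following along: a minimal GLES2 shader pair in the spirit of that modification. The attribute/uniform names here are illustrative and do not match the actual FacePainter sources.)

// Illustrative GLES2 shaders in the spirit of the modification described
// above; names are made up, not copied from FacePainter.cpp.
static const char* kVertexShader = R"(
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main()
{
    vTexCoord = aTexCoord;
    gl_Position = aPosition;
}
)";

static const char* kFragmentShader = R"(
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uIrisTexture; // texture to sample over the iris region
void main()
{
    gl_FragColor = texture2D(uIrisTexture, vTexCoord);
}
)";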


headupinclouds commented on May 24, 2024

Hi, could you please give me some advice or hints about how I could hide the drawn detection curves in the demo and add some self-defined effects using the eye-related key positions?

You can disable annotations from the SDK via the drishti::Context class (here). This just instantiates the base FaceTracker class, where the virtual drawing methods are empty (here).

If you want to add your own effects at the SDK level you could register your callbacks to receive the frame texture and face models, but you would have to draw to your own display.

If you want to draw on the display that is managed by the demo app, you would currently have to modify the internal classes. You could start by looking at drishti::hci::FaceFinderPainter::paint() and the drishti::hci::FacePainter utility class. The SDK could also be modified to support drawing of a user provided texture ID fairly easily.

The current wireframe drawing is really just for POC visualization.


imerse commented on May 24, 2024

Thank you for your advice! I've got some ideas about the processing flow.
I found that hci.cpp handles each frame of the video, where each frame's image can be accessed.
I guess this is the per-frame handler?
Could you please tell me how I can get the frame image in cv::Mat format here? I'd like to try drawing a mask directly on each frame.

Update: I think the main interaction is that the drawFrame function passes the texture ID through JNI, and then all of the processing is based on the image information obtained from that texture ID:

gray.process(texId, 1, inputTextureTarget); // grayscale conversion on the GPU
if (m_tracker)
{
    drishti::sdk::VideoFrame frame({ frameSize.width, frameSize.height }, nullptr, false, gray.getOutputTexId(), TEXTURE_FORMAT);
    auto outputTex = (*m_tracker)(frame);      // run the face tracker; returns the output texture
    disp.process(outputTex, 1, GL_TEXTURE_2D); // render the result texture to the display
}

The m_tracker (FaceTracker) does all the processing and painting. What do gray.process and disp.process do in detail? I couldn't find ogles_gpgpu::GrayscaleProc::process or ogles_gpgpu::Disp::process in the ogles_gpgpu project.

I think that after this processing, outputScene.image() can return a grayscale image of the original frame? I need to get the original RGB image from the texture ID passed through JNI. How can I get that?
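
(For reference: one standard way to read a texture back to the CPU is to attach it to a framebuffer object and call glReadPixels into a cv::Mat. A minimal sketch, assuming a current GL context and a known frame size; the helper name is illustrative, and it returns whatever the texture contains, so it would need to run on the original color texture rather than the grayscale output.)

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <GLES2/gl2.h>

// Illustrative helper: read an RGBA texture back into a BGR cv::Mat.
cv::Mat readTextureToMat(GLuint texId, int width, int height)
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);

    cv::Mat rgba(height, width, CV_8UC4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);

    cv::Mat bgr;
    cv::cvtColor(rgba, bgr, cv::COLOR_RGBA2BGR); // OpenCV's BGR convention
    cv::flip(bgr, bgr, 0); // GL origin is bottom-left; flip to image convention
    return bgr;
}

Readback like this stalls the GL pipeline, so it is better reserved for occasional captures than for every frame.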

Thank you for your time!


ruslo commented on May 24, 2024

And I wonder what the main difference is between drishti/android-studio and drishti/src/examples/facefilter/android-studio. Why should we build from the former rather than the latter?

It's kind of a development vs. production split.

The entire src/examples tree is designed to be relocatable: you can cp -r src/examples ${HOME}/drishti_examples, customize, and build by simply updating the drishti package details.
There is another entry point for Android Studio, src/examples/facefilter/android-studio. It should be used only for testing or as a template for starting your own project based on Drishti.


imerse commented on May 24, 2024

@ruslo Thank you so much!


imerse commented on May 24, 2024

@headupinclouds @ruslo Could you give me some tips on the question above, please? It would be really appreciated!


headupinclouds commented on May 24, 2024

Could you give me some tips on the question above, please? It would be really appreciated!

I can help more tomorrow morning. In addition to the questions above, you are primarily looking for an understanding of the basic pipeline so that you can add your own drawing code, right?

FYI: you can build and run the tests on your host machine. They demonstrate the callback tables and basic SDK usage, and it might be easier to start there if you are making changes.


imerse commented on May 24, 2024

Thanks. I originally intended to get the original color image so I could apply some effects to the eye area. Thinking about it more, maybe I can read a texture directly and composite it with OpenGL. I will give it a try.
At the same time, I will also run those tests to understand the pipeline more deeply.


imerse commented on May 24, 2024

Now I can get the contours of the pupils. Could you please give me some more advice on how I can apply a texture to the area enclosed by the contour? Most of the final OpenGL rendering happens in FacePainter.cpp, right?


headupinclouds commented on May 24, 2024

Most of the final OpenGL rendering happens in FacePainter.cpp, right?

Yes. FaceFinderPainter is a variant of FaceFinder that implements the FaceFinder::paint() virtual method. It has a FacePainter class internally, which is just a utility class that does some default wireframe rendering along with a stabilized eye view. That's mostly to support the demo, and a user should be able to instantiate the base FaceFinder class if they don't care about the display (or want to do their own). Most of the drawing actually occurs in the FacePainter class.

The goal is to do all the shader-compatible processing on the GPU, and to keep one CPU thread busy processing at the lowest resolution needed in order to minimize processing and transfer overhead. The resulting models are then scaled and rendered on the full-resolution frames that are already on the GPU. That's what the FaceFinder::runFast method is doing.
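
A rough sketch of the class relationship described above (structure only; the member and method names are illustrative rather than copied from the drishti/hci headers):

#include <memory>

// Utility class that does the actual GL drawing (wireframe overlay,
// stabilized eye view); stubbed here for illustration.
class FacePainter
{
public:
    void render() {} // placeholder for the real drawing routines
};

// Base class: detection/tracking pipeline with no drawing by default.
class FaceFinder
{
public:
    virtual ~FaceFinder() = default;
    virtual void paint() {} // empty: annotations disabled
};

// Variant that implements the paint() virtual and delegates to FacePainter.
class FaceFinderPainter : public FaceFinder
{
public:
    void paint() override
    {
        if (m_painter) m_painter->render(); // default wireframe + eye view
    }

private:
    std::unique_ptr<FacePainter> m_painter = std::make_unique<FacePainter>();
};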

You can look at FacePainter::annotateEye and potentially modify it to do what you want. I would start by hacking that code to get the effect you want, and then maybe think about updating the SDK to support generic annotations. The current built-in wireframe rendering does some drawing preparation on the CPU for better thread utilization, which helps achieve higher frame rates on some older phones (here).
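
Since the eventual goal in this thread is to texture the region enclosed by a contour, here is one generic way to do it, independent of FacePainter internals: fan-triangulate the closed contour around its centroid and derive texture coordinates from its bounding box. A sketch assuming a non-empty, roughly convex contour such as a pupil; all names are illustrative:

#include <algorithm>
#include <vector>

struct Vec2 { float x, y; };

// Build interleaved (x, y, u, v) vertices for a GL_TRIANGLE_FAN that fills a
// closed contour; UVs map the contour's bounding box onto the texture.
// Assumes a non-empty contour with a non-degenerate bounding box.
std::vector<float> makeTexturedFan(const std::vector<Vec2>& contour)
{
    float minX = contour[0].x, maxX = minX, minY = contour[0].y, maxY = minY;
    Vec2 center{ 0.f, 0.f };
    for (const auto& p : contour)
    {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
        center.x += p.x; center.y += p.y;
    }
    center.x /= contour.size(); center.y /= contour.size();

    auto push = [&](std::vector<float>& v, const Vec2& p)
    {
        v.push_back(p.x); // position
        v.push_back(p.y);
        v.push_back((p.x - minX) / (maxX - minX)); // u
        v.push_back((p.y - minY) / (maxY - minY)); // v
    };

    std::vector<float> fan;
    push(fan, center);            // fan center (contour centroid)
    for (const auto& p : contour) push(fan, p);
    push(fan, contour.front());   // close the loop
    return fan; // draw with GL_TRIANGLE_FAN, 4 floats per vertex
}

The resulting buffer can be drawn with any textured shader. A fan from the centroid only fills the contour correctly when the shape is star-shaped, which holds for pupil and iris contours.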

For a generic approach, you could update the API to support what you want to do. In the facefilter example, the FaceTrackTest class provides a sample callback table implementation. The FaceTrackTest::triggerFunc is called on every frame with a lightweight vector of just the face + eye models that were found in the scene. The callback is implemented by the C++ FaceTrackTest::trigger function.

You can add some fast conditions and monitors there, such as position or distance checks, and when you see something you like, you can return a non-null drishti_request_t that specifies the number of frames of history you would like (bounded by the max you configured in the SDK), along with boolean flags indicating whether you want to grab the full (face) frames and stabilized eye crops as images and/or textures. Note that requesting the images is slightly expensive, so it should only be done as a special case for some CPU analysis or image capture; if you do it every frame, it will be slow. The same applies to the stabilized eye crops, which you can also grab as images and/or textures.

Here is the drishti_request_t that is used in the example. It doesn't request anything, so it isn't a very interesting example:

return // formulate a frame request based on input faces
{
    0,     // Retrieve the last N frames.
    false, // Get frames in user memory.
    true,  // Get OpenGL textures.
    true,  // Grab full frame images/textures.
    true,  // Grab eye pair images/textures.
};

If you actually request frames and/or textures, then the code will pull down any requested images and the main callback will be called: FaceTrackTest::callbackFunc in this example, which is implemented by the FaceTrackTest::callback C++ method.
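
As an illustration of what such a callback might do (the types below are stand-ins, not the real drishti callback signature), it could simply persist whatever frames were returned:

#include <opencv2/imgcodecs.hpp>
#include <string>
#include <vector>

// Stand-in for the frames delivered to the callback; illustration only.
struct CapturedFrame
{
    cv::Mat image;      // populated when frames were requested in user memory
    double timestamp{}; // capture time of the frame
};

// Hypothetical callback body: write out any frames the trigger requested.
static int onCapture(const std::vector<CapturedFrame>& frames)
{
    int written = 0;
    for (const auto& f : frames)
    {
        if (!f.image.empty())
        {
            cv::imwrite("capture_" + std::to_string(f.timestamp) + ".png", f.image);
            ++written;
        }
    }
    return written; // number of frames persisted
}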

I hope that helps. Let me know if something isn't clear.


headupinclouds commented on May 24, 2024

Actually, the drishti_hunter_test repository has a more interesting callback table example, which probably illustrates the workflow a little better. I'll inline that here for clarity:

// Here we would typically add some criteria required to trigger a full
// capture event. We would do this selectively so that full frames are not
// retrieved at each stage of processing. For example, if we want to capture a
// selfie image, we might monitor face positions over time to ensure that the
// subject is relatively centered in the image and that there is fairly low
// frame-to-frame motion.
drishti_request_t FaceTrackTest::trigger(const drishti_face_tracker_result_t& faces, double timestamp, std::uint32_t tex)
{
    m_impl->logger->info("trigger: Received results at time {}", timestamp);

    if (m_impl->display)
    {
        m_impl->updatePreview(tex);
    }

    if (shouldCapture(faces))
    {
        // clang-format off
        return // Here we formulate the actual request, see drishti_request_t:
        {
            3,    // Retrieve the last N frames
            true, // Get frames in user memory
            true, // Get frames as texture ID's
            true, // Get full frame images
            true  // Get eye crop images
        };
        // clang-format on
    }

    return { 0 }; // otherwise request nothing!
}
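
The shouldCapture() predicate above is not shown in the example. A hypothetical version along the lines of the leading comment (roughly centered subject, low frame-to-frame motion), using stand-in types rather than the real drishti_face_tracker_result_t layout:

#include <cmath>

// Stand-in type for illustration only; the real definitions live in the
// drishti C API headers.
struct Point2f { float x = 0.f, y = 0.f; };

// Hypothetical capture gate: require a roughly centered face (normalized
// [0,1] coordinates) and low motion relative to the previous frame.
static bool shouldCapture(const Point2f& faceCenter, const Point2f& lastCenter)
{
    const float dx = faceCenter.x - 0.5f, dy = faceCenter.y - 0.5f;
    const bool centered = std::sqrt(dx * dx + dy * dy) < 0.1f; // near image center

    const float mx = faceCenter.x - lastCenter.x, my = faceCenter.y - lastCenter.y;
    const bool stable = std::sqrt(mx * mx + my * my) < 0.01f;  // low frame-to-frame motion

    return centered && stable;
}

In practice, the real face model fields would replace the stand-ins, and the thresholds would be tuned to the application.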

