
hl2tooltracking's People

Contributors

cgsaxner


hl2tooltracking's Issues

How to use HoloLens's GPU

Hi @cgsaxner, have a good day.

I have been trying to develop a CV-based application on my HoloLens 2, but the running time is far from what I expected. When I debug via the Device Portal, it appears that the HL2 runs the program entirely on its CPU; there is no GPU activity at all.

In your paper, Inside-Out Instrument Tracking for Surgical Navigation in Augmented Reality, you mention that you used the HoloLens GPU to run the algorithm. I am wondering whether I can also use the HL2 GPU to process my algorithm instead of the CPU.

Could you please guide me on how to use the HoloLens 2 GPU? I looked through the official HL2 documentation but could not find anything.

Thank you in advance :)
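
For reference, my current understanding is that a UWP app would reach the HL2 GPU through Direct3D compute shaders, along the lines of the sketch below (illustrative only: desktop-style entry point, no error handling, and the kernel and buffer contents are made up). Is this the mechanism you used, or something else?

    #include <d3d11.h>
    #include <d3dcompiler.h>
    #include <cstring>
    #include <vector>
    #pragma comment(lib, "d3d11.lib")
    #pragma comment(lib, "d3dcompiler.lib")

    // Illustrative HLSL kernel: doubles every element of a float buffer.
    static const char* kKernel = R"(
    RWStructuredBuffer<float> data : register(u0);
    [numthreads(64, 1, 1)]
    void main(uint3 id : SV_DispatchThreadID) { data[id.x] *= 2.0f; }
    )";

    int main()
    {
        // Create a hardware device and immediate context.
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* ctx = nullptr;
        D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                          nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &ctx);

        // Compile the kernel source and create the compute shader.
        ID3DBlob* blob = nullptr;
        D3DCompile(kKernel, strlen(kKernel), nullptr, nullptr, nullptr,
                   "main", "cs_5_0", 0, 0, &blob, nullptr);
        ID3D11ComputeShader* cs = nullptr;
        device->CreateComputeShader(blob->GetBufferPointer(), blob->GetBufferSize(),
                                    nullptr, &cs);

        // Upload data into a structured buffer with an unordered access view.
        std::vector<float> host(64, 1.0f);
        D3D11_BUFFER_DESC bd = {};
        bd.ByteWidth = UINT(host.size() * sizeof(float));
        bd.Usage = D3D11_USAGE_DEFAULT;
        bd.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
        bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
        bd.StructureByteStride = sizeof(float);
        D3D11_SUBRESOURCE_DATA init = { host.data() };
        ID3D11Buffer* buf = nullptr;
        device->CreateBuffer(&bd, &init, &buf);
        ID3D11UnorderedAccessView* uav = nullptr;
        device->CreateUnorderedAccessView(buf, nullptr, &uav);

        // Bind the shader and buffer, then run one group of 64 GPU threads.
        ctx->CSSetShader(cs, nullptr, 0);
        ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
        ctx->Dispatch(1, 1, 1);
        return 0;
    }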

Difficulties with OpenCV library version >= 4.3.0

Good day. I'm having trouble compiling the OpenCV library for the UWP/ARM64 platform with CMake. When building for ARM/ARM64, I get countless errors. Could you please share a prebuilt OpenCV >= 4.3.0 library for UWP/ARM64?
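
For reference, this is roughly how I configure the build (a sketch only: the generator, module list, and other options depend on your Visual Studio setup and on what the project actually needs):

    cmake -G "Visual Studio 16 2019" -A ARM64 ^
        -DCMAKE_SYSTEM_NAME=WindowsStore ^
        -DCMAKE_SYSTEM_VERSION=10.0 ^
        -DBUILD_SHARED_LIBS=ON ^
        -DBUILD_LIST=core,imgproc,calib3d,features2d,flann ^
        ..\opencv
    cmake --build . --config Release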

LFToWorld Transform

Hi Christina,

I am working on implementing your tracking method, among others, on the HoloLens 2. As a starting point, I have integrated optical tracking into Unity and performed a hand-eye calibration, so that I can stream the dynamic transform from a stylus tip to the left-front (LF) camera of the HL2. From what I understand from your paper, the transform from the LF camera to the world is obtained through the HL2's eye calibration, but Unity does not seem to expose this transform. Do you have a method for obtaining it?
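
For context, what I have found so far: with the Research Mode API, the sensor rig is exposed as a node in the HoloLens spatial graph, so I suspect the transform can be obtained along the lines of this C++/WinRT sketch (under my assumptions: the rig node GUID, timestamp, and reference coordinate system come from one's own capture setup). Is this the mechanism you used?

    #include <winrt/Windows.Perception.h>
    #include <winrt/Windows.Perception.Spatial.h>
    #include <winrt/Windows.Perception.Spatial.Preview.h>

    using namespace winrt::Windows::Perception;
    using namespace winrt::Windows::Perception::Spatial;
    using namespace winrt::Windows::Perception::Spatial::Preview;

    // rigNodeId: GUID from IResearchModeSensorDevicePerception::GetRigNodeId().
    // timestamp: PerceptionTimestamp of the LF camera frame.
    // world:     reference coordinate system, e.g. from a stationary frame of reference.
    SpatialLocation LocateRig(winrt::guid const& rigNodeId,
                              PerceptionTimestamp const& timestamp,
                              SpatialCoordinateSystem const& world)
    {
        // The sensor rig is a spatial graph node; a locator created for it can
        // be queried against any coordinate system at a given timestamp.
        SpatialLocator rigLocator =
            SpatialGraphInteropPreview::CreateLocatorForNode(rigNodeId);
        return rigLocator.TryLocateAtTimestamp(timestamp, world);
        // Compose the resulting rig-to-world pose with the LF camera extrinsics
        // (IResearchModeCameraSensor::GetCameraExtrinsicsMatrix) to obtain the
        // LF-camera-to-world transform.
    }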

Regards,
Daniel Allen

Tool not being tracked

Good day!
I need your help and advice. I more or less successfully built the project with OpenCV 4.8.0 for ARM64 and I'm trying to use it in Unity, but my tool is not tracked: the resulting ToolPosition array only contains zeros.

    void Start()
    {
#if ENABLE_WINMD_SUPPORT
        // Initialize the tracker with a stationary coordinate system anchored
        // at the current device location, then start tracking.
        tracking = new HL2ToolTracking();
        tracking.Initialize(SpatialLocator.GetDefault()
            .CreateStationaryFrameOfReferenceAtCurrentLocation().CoordinateSystem);
        tracking.StartTracking();
#endif
    }

    void Update()
    {
#if ENABLE_WINMD_SUPPORT
        // Dump the current ToolPosition values into a UI text element.
        string s = "";

        if (tracking != null && tracking.ToolPosition != null)
        {
            foreach (var e in tracking.ToolPosition)
                s += e + " ";
        }

        text.text = s;
#else
        text.text = "ENABLE_WINMD_SUPPORT is not defined";
#endif
    }

I was unable to attach infrared LEDs to the glasses in a way that gives a good reflection from the retro-reflective spheres, so I took the opposite approach and mounted LEDs with a diffuser on the tool itself. Judging by the camera image, they are clearly visible.

[Attached images: camera captures of the tool with the mounted LEDs]

It seems to me that the problem may be an incorrect definition of the tool's model points. I took the coordinates from the model in the Unity coordinate system. Should I use a different coordinate system? And how should I specify the coordinates so that, for instance, ToolPosition returns the position of the tool tip?

[Attached image: the tool model with the marker point coordinates]

	// definition of the model points
	cv::Mat pointsModel = (cv::Mat_<float>(4, 3) <<
		0.0f, 0.01023106f, -0.05009925f,
		0.0f, 0.0002310641f, 0.0f,
		0.03f, 0.0002310641f, 0.04990065f,
		-0.0299998f, 0.0002310641f, 0.0849006f);
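
My current guess: if the tracker reports the pose of the model points' origin (an assumption on my part), then getting ToolPosition to correspond to the tool tip would amount to expressing the model points relative to the tip, e.g. (the tip offset is hypothetical and would have to be measured on the actual tool):

    #include <opencv2/core.hpp>

    // Sketch: shift the marker model points so the tool tip becomes the model
    // origin. tipInModel is the (hypothetical) tip position expressed in the
    // same coordinate frame and units as pointsModel.
    cv::Mat RecenterAtTip(const cv::Mat& pointsModel, const cv::Vec3f& tipInModel)
    {
        cv::Mat recentered = pointsModel.clone();
        for (int i = 0; i < recentered.rows; ++i)
        {
            recentered.at<float>(i, 0) -= tipInModel[0];
            recentered.at<float>(i, 1) -= tipInModel[1];
            recentered.at<float>(i, 2) -= tipInModel[2];
        }
        return recentered;
    }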

I really hope for your advice!

P.S.
I'm providing a link to my compiled OpenCV 4.8.0 for ARM64.
I built only the modules required by the project's dependencies. Following your instructions, I didn't encounter any errors during compilation; however, I'm not sure how to verify that everything works correctly.

Asking about Stereo Calibration

Hi, thanks a lot for sharing the repository.

I would like to ask about your calibration process. I wrote my own calibration program using Python and OpenCV functions and compared my results with yours. They are not identical, but the differences are not large. I also noticed a sign difference in the resulting transformation matrix.

This is my result.
"K1": [[373.7949702163523, 0.0, 233.2121359920058], [0.0, 374.78872152463055, 326.7416010655098], [0.0, 0.0, 1.0]],
"D1": [[-0.01524617660770033, 0.06377436171161646, 0.0059206313977930225, -0.0028066593724951575, 0.0]],
"K2": [[372.2177084412977, 0.0, 233.72245228472624], [0.0, 372.74442474930663, 320.77904472579394], [0.0, 0.0, 1.0]],
"D2": [[-0.029994968527599163, 0.08906159374301409, 0.0039004932620613697, -0.002926571337908015, 0.0]],
"R": [[0.9998470404579204, 0.003139374608744723, -0.01720581339558163], [-0.0030159029330480568, 0.9999695503037905, 0.0071974158371902845], [0.017227884868318555, -0.007144423860674444, 0.9998260634683729]],
"T": [[-0.10711754463310794], [0.000244465906715656], [-0.003588584164022485]],

I am wondering about these differences. I am aware that the HL2 provides the LF and RF frames rotated in opposite directions. In my case, I pre-process those images (one clockwise rotation, one counter-clockwise) so that they share the same view first, and then run the stereo calibration; see the sketch below. Did you also perform this pre-processing step, or did you use the raw images captured from the sensors?
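
For clarity, the pre-processing I mean is just this (which camera gets which rotation is my assumption; I verify it visually on my own captures):

    #include <opencv2/core.hpp>

    // Sketch: undo the opposite 90-degree rotations of the HL2 LF/RF frames so
    // that both images are upright before stereo calibration. The mapping of
    // rotation direction to camera is assumed here; check it against your data.
    void AlignFrames(const cv::Mat& lfRaw, const cv::Mat& rfRaw,
                     cv::Mat& lfUpright, cv::Mat& rfUpright)
    {
        cv::rotate(lfRaw, lfUpright, cv::ROTATE_90_CLOCKWISE);
        cv::rotate(rfRaw, rfUpright, cv::ROTATE_90_COUNTERCLOCKWISE);
    }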

One more thing: I notice that the signs in your transformation matrix (RF-to-LF) are quite similar to the extrinsic parameters provided by the sensor. Did you perhaps apply some negation/inversion to your stereo calibration result, or did it already come out that way?

Thank you very much, and I'm sorry if these questions sound trivial :)
-Nova

Big offset when rendering the corresponding models

Hi @cgsaxner,

Thank you for sharing your code.
Instead of using the sphere markers, I used flat retro-reflective markers. I then integrated your code with Unity to visualize the corresponding models: using the returned ToolPosition, I update the models in Unity after converting from the right-handed to the left-handed coordinate system (sketched below).
However, I get a really strange result, with a big offset and a wrong orientation. When I put the marker in a vertical plane, the virtual models appear close to the markers but with a wrong orientation.
Meanwhile, when I put the markers in a horizontal plane, the models appear far from the markers, far up above them. Furthermore, the models seem to be rendered relative to the head position: even with the markers in the same place, putting them at a different height (higher/lower) makes the models render far above the marker.
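
The handedness conversion I apply is equivalent to negating one axis, roughly like this (I assume negating y, since OpenCV camera coordinates are y-down and Unity is y-up; the correct axis depends on the tracker's output convention, and the orientation needs a matching fix):

    #include <opencv2/core.hpp>

    // Sketch: convert a right-handed tracker position to Unity's left-handed
    // convention. Negating y is an assumption based on OpenCV's y-down camera
    // frame versus Unity's y-up frame; verify against the tracker's output.
    cv::Vec3f ToUnityPosition(const cv::Vec3f& rh)
    {
        return cv::Vec3f(rh[0], -rh[1], rh[2]);
    }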

Do you have any idea what the problem is? Thank you in advance.

-Nova
