hiukim / mind-ar-js
Web Augmented Reality. Image Tracking, Face Tracking. Tensorflow.js
License: MIT License
The "Events Handling" source code example in the documentation has a couple of mistakes:
arSystem.pause(true); // pause AR and video
is incorrect: the parameter needs to be false to stop AR and freeze the video. Keep up the good work. This is such a cool library!
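With that correction, a minimal pause/resume toggle might look like the sketch below (assuming arSystem is the MindAR system instance exposed in the examples, and that unpause() resumes, as in the documented events example):

```javascript
// Sketch of a pause/resume toggle around the MindAR arSystem instance.
// Per the correction above, pause(false) stops AR and freezes the video.
function makeArToggle(arSystem) {
  let paused = false;
  return function toggle() {
    paused = !paused;
    if (paused) {
      arSystem.pause(false); // stop AR and freeze the video frame
    } else {
      arSystem.unpause();    // resume tracking and the video feed
    }
    return paused;
  };
}
```

A button's click handler could then simply call toggle() and reflect the returned paused state in its label.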
Hi @hiukim, we've been trying to replicate the curved-image tracking feature offered by 8th Wall with the MindAR library.
We tried using the curved plane and curved image primitives offered by A-Frame, but in both cases the edges of the mesh appear to distort when the device is moved with respect to the marker.
Here is a video recording of the issue.
I'm curious to know your thoughts on the issue and any workaround you can suggest to overcome it.
Considering that MindAR has huge potential and that this use case is very common, I'd like to help you create such a tutorial/documentation page.
Do you already have in mind what changes are required to do so?
You mentioned that "If you want to do something like this, you can approach the problem by using the non-aframe library build, then include and modify the above aframe.js script", but that wasn't clear enough for me to jump into the code and do it.
Any guidelines?
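For reference, bending a flat plane onto a cylinder arc is one way to approximate a curved target in three.js. Below is a pure sketch of the vertex math only (illustrative, not code from MindAR or 8th Wall); the resulting positions would be fed into a BufferGeometry:

```javascript
// Bend a width x height plane onto a cylinder arc of the given angle
// (radians). Returns a flat [x, y, z, x, y, z, ...] position array.
function curvedPlanePositions(width, height, segmentsX, segmentsY, arc) {
  const radius = width / arc; // arc length of the bent plane equals width
  const positions = [];
  for (let iy = 0; iy <= segmentsY; iy++) {
    const y = (iy / segmentsY - 0.5) * height;
    for (let ix = 0; ix <= segmentsX; ix++) {
      const theta = (ix / segmentsX - 0.5) * arc; // angle along the arc
      positions.push(
        radius * Math.sin(theta),       // x curves around the cylinder
        y,                              // y is unchanged
        radius * (Math.cos(theta) - 1)  // z bows away from the flat plane
      );
    }
  }
  return positions;
}
```

The edge distortion you see may come from the tracker still assuming a flat target; the mesh shape alone cannot compensate for that.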
Hey
(First off: wonderful project. I came here from AR.js and I am amazed by the ease of setup and the tracking stability.)
I was thinking of using mind-ar-js to AR-enrich a book. This means ~50 targets have to be recognized by the app. I set up a test with only 6 targets and immediately Safari on iPhone X and Chrome on a Galaxy A5 crash. The multi-targets example app works on both devices - but it has only two targets. I managed to get it to load on my iPad Pro, but that is not what end users would use.
Is there any way of getting 10+ targets to work?
Moreover, is it necessary to put all the tracking targets into the same targets.mind file? This seems to be the bottleneck here: even with only 6 targets it grew to 32 MB. The targets are also slow to compile, and it would be hard to add or remove individual targets later. One .mind file per target would seem much easier to use and address in the app.
Any feedback or help is highly appreciated.
Best, Matthias
I created a new tracker image using the Image Targets compiler and when using this image I get an error in the console saying "Your compiled .mind might be outdated. Please recompile". I also notice that the file size for this image is much smaller than my old tracker image created last week. Even creating a new tracker image using my old file results in the same error message, and a much smaller filesize.
I tested the new version with TensorFlow and I get this error:
A-Frame Version: 1.0.4 (Date 2020-02-05, Commit #2b359246)
mindar.prod.js:12858 three Version (https://github.com/supermedium/three.js): ^0.111.6
mindar.prod.js:12858 WebVR Polyfill Version: ^0.10.10
mindar.prod.js:12680 THREE.WebGLRenderer: WEBGL_depth_texture extension not supported.
get @ mindar.prod.js:12680
mindar.prod.js:12680 THREE.WebGLRenderer: OES_texture_float_linear extension not supported.
get @ mindar.prod.js:12680
mindar.prod.js:12658 video ready... <video autoplay muted playsinline style="position: absolute; top: 0px; left: -32px; z-index: -2; width: 384px; height: 512px;" width="480" height="640"></video>
mindar.prod.js:10332 Could not get context for WebGL version 2
mindar.prod.js:10348 1
2 precision highp float;
3 precision highp int;
4 precision highp sampler2D;
5 varying vec2 resultUV;
6
7 const vec2 halfCR = vec2(0.5, 0.5);
8
9 struct ivec5
10 {
11 int x;
12 int y;
13 int z;
14 int w;
15 int u;
16 };
mindar.prod.js:10348 Fragment shader compilation failed.
mindar.prod.js:10348 17
mindar.prod.js:10348 18 struct ivec6
19 {
20 int x;
21 int y;
22 int z;
23 int w;
24 int u;
25 int v;
26 };
27
28 uniform float NAN;
29
30 #define isnan(value) isnan_custom(value)
31 bool isnan_custom(float val) {
32 return (val > 0. || val < 1. || val == 0.) ? false : true;
33 }
34 bvec4 isnan_custom(vec4 val) {
35 return bvec4(isnan(val.x), isnan(val.y), isnan(val.z), isnan(val.w));
36 }
37
38
39 uniform float INFINITY;
40
41 bool isinf(float val) {
42 return abs(val) == INFINITY;
43 }
44 bvec4 isinf(vec4 val) {
45 return equal(abs(val), vec4(INFINITY));
46 }
47
48
49 int round(float value) {
50 return int(floor(value + 0.5));
51 }
52
53 ivec4 round(vec4 value) {
54 return ivec4(floor(value + vec4(0.5)));
55 }
56
57
58 int imod(int x, int y) {
59 return x - y * (x / y);
60 }
61
62 int idiv(int a, int b, float sign) {
63 int res = a / b;
64 int mod = imod(a, b);
65 if (sign < 0. && mod != 0) {
66 res -= 1;
67 }
68 return res;
69 }
70
71 //Based on the work of Dave Hoskins
72 //https://www.shadertoy.com/view/4djSRW
73 #define HASHSCALE1 443.8975
74 float random(float seed){
75 vec2 p = resultUV * seed;
76 vec3 p3 = fract(vec3(p.xyx) * HASHSCALE1);
77 p3 += dot(p3, p3.yzx + 19.19);
78 return fract((p3.x + p3.y) * p3.z);
79 }
80
81
82 vec2 uvFromFlat(int texNumR, int texNumC, int index) {
83 int texR = index / texNumC;
84 int texC = index - texR * texNumC;
85 return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
86 }
87 vec2 packedUVfrom1D(int texNumR, int texNumC, int index) {
88 int texelIndex = index / 2;
89 int texR = texelIndex / texNumC;
90 int texC = texelIndex - texR * texNumC;
91 return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
92 }
93
94
95 vec2 packedUVfrom2D(int texelsInLogicalRow, int texNumR,
96 int texNumC, int row, int col) {
97 int texelIndex = (row / 2) * texelsInLogicalRow + (col / 2);
98 int texR = texelIndex / texNumC;
99 int texC = texelIndex - texR * texNumC;
100 return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
101 }
102
103
104 vec2 packedUVfrom3D(int texNumR, int texNumC,
105 int texelsInBatch, int texelsInLogicalRow, int b,
106 int row, int col) {
107 int index = b * texelsInBatch + (row / 2) * texelsInLogicalRow + (col / 2);
108 int texR = index / texNumC;
109 int texC = index - texR * texNumC;
110 return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
111 }
112
113
114
115 float sampleTexture(sampler2D textureSampler, vec2 uv) {
116 return texture2D(textureSampler, uv).r;
117 }
118
119
120 void setOutput(vec4 val) {
121 gl_FragColor = val;
122 }
123
124 uniform sampler2D A;
125 uniform int offsetA;
126
127 ivec3 getOutputCoords() {
128 ivec2 resTexRC = ivec2(resultUV.yx *
129 vec2(160, 160));
130 int index = resTexRC.x * 160 + resTexRC.y;
131
132 int b = index / 25500;
133 index -= b * 25500;
134
135 int r = 2 * (index / 510);
136 int c = imod(index, 510) * 2;
137
138 return ivec3(b, r, c);
139 }
140
141
142
143 float getA(int row, int col) {
144 vec2 uv = (vec2(col, row) + halfCR) / vec2(1020.0, 100.0);
145 return sampleTexture(A, uv);
146 }
147
148 float getA(int row, int col, int depth) {
149 return getA(col, depth);
150 }
151
152 float getAAtOutCoords() {
153 ivec3 coords = getOutputCoords();
154
155 return getA(coords.x, coords.y, coords.z);
156 }
157
158
159 ivec3 outCoordsFromFlatIndex(int index) {
160 int r = index / 102000; index -= r * 102000;int c = index / 1020; int d = index - c * 1020;
161 return ivec3(r, c, d);
162 }
163
164 void main() {
165 ivec2 resTexRC = ivec2(resultUV.yx *
166 vec2(160, 160));
167 int index = 4 * (resTexRC.x * 160 + resTexRC.y);
168
169 vec4 result = vec4(0.);
170
171 for (int i=0; i<4; i++) {
172 int flatIndex = index + i;
173 ivec3 rc = outCoordsFromFlatIndex(flatIndex);
174 result[i] = getA(rc.x, rc.y, rc.z);
175 }
176
177 gl_FragColor = result;
178 }
179
mindar.prod.js:10348 Uncaught (in promise) Error: Failed to compile fragment shader.
at mf (mindar.prod.js:10348)
at dg.createProgram (mindar.prod.js:10911)
at mindar.prod.js:11440
at mindar.prod.js:11440
at vb.getAndSaveBinary (mindar.prod.js:11440)
at vb.runWebGLProgram (mindar.prod.js:11440)
at vb.decode (mindar.prod.js:11440)
at vb.getValuesFromTexture (mindar.prod.js:11440)
at vb.readSync (mindar.prod.js:11440)
at g.readSync (mindar.prod.js:4141)
Tested with a Wiko View, with Android 7.1.2. Chrome 87.0.4280.141.
Something like this.
Is it possible to keep the object on screen after losing the marker, and have the object place itself in the middle of the screen in front of the camera?
It tracks the marker at index 0 perfectly, but no other index works.
Strangely, I can reorder the markers when compiling, and it is always the one at index 0 that functions as expected. So it's not the image that is the issue here.
I just downloaded your latest version to double-check (as I had made changes), but the behaviour is still the same.
Hi @hiukim, do you have any recommendations about the ideal minimum and maximum size (resolution) of the image that should be compiled as a target for efficient tracking?
Hello. This tool is great.
But if someone opens the page in an unsupported browser, such as an in-app browser, is there any way to hint to the user that they should open a supported browser to view the effect?
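One possible hint mechanism is a user-agent check before starting the experience. A sketch, with an illustrative (not exhaustive) token list for common in-app browsers:

```javascript
// Heuristic in-app-browser detection: Facebook (FBAN/FBAV), Instagram,
// LINE, and WeChat (MicroMessenger) embed identifiable tokens in the UA.
function isInAppBrowser(userAgent) {
  return /FBAN|FBAV|Instagram|Line\/|MicroMessenger/i.test(userAgent);
}
// Browser wiring (illustrative):
// if (isInAppBrowser(navigator.userAgent)) showOpenInBrowserHint();
```

User-agent sniffing is brittle, so this should be treated as a best-effort hint rather than a guarantee.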
Sometimes the compiled image covers up the download button partially or completely. It can be tested with this image:
https://raw.githubusercontent.com/devhims/mind-ar-examples/main/assets/maggi.png
Hi Hiukim,
could you maybe provide an example of how to use MindAR with just vanilla three.js? That would be awesome. Thank you.
Calling arSystem.pause(false) switches off the camera feed in the A-Frame scene, but the camera remains active; at least on iOS, the status light stays on. For user-privacy reasons it would be good if the camera could be switched off completely during pause, or at least optionally, especially as the camera is also a drain on the battery.
In view of the many unknown bugs coming from gpu.js, I'm trying to replace it with another library for the WebGL part.
I came up with the idea of using TensorFlow.js while discussing with my colleague. It has a very solid foundation of APIs that utilize WebGL, so I decided to give it a try. No, we are not using it for machine learning. :D
If it works out, it will solve most of these issues.
Hey @hiukim, I'm so glad I found this repo. You've done really impressive work. I'll explore some of the examples and will let you know how it goes. Cheers mate! ✌️
Is it possible to set up the camera to capture images at a distance of 0.5-1 meter?
I want to use this repo for green-screen video along with MindAR, but it's not compatible. Please help.
Hi all...
Is it possible to track the image when it is farther away? I mean, to detect a target image it has to be quite near, covering almost 50% of the camera view. Is it possible to track the image from farther away, or at least to keep the 3D model visible when the camera moves farther back?
Thanks!!!
Hello, I am new to programming and augmented reality. I am studying your code a bit, but I find that the video starts when the page loads rather than when the target image is recognized. Could you share an example or advice on where to modify this? Thank you very much.
<script>
const showInfo = () => {
let y = 0;
const profileButton = document.querySelector("#profile-button");
const webButton = document.querySelector("#web-button");
const emailButton = document.querySelector("#email-button");
const locationButton = document.querySelector("#location-button");
const text = document.querySelector("#text");
profileButton.setAttribute("visible", true);
setTimeout(() => {
webButton.setAttribute("visible", true);
}, 300);
setTimeout(() => {
emailButton.setAttribute("visible", true);
}, 600);
setTimeout(() => {
locationButton.setAttribute("visible", true);
}, 900);
let currentTab = "";
webButton.addEventListener("click", function(evt) {
text.setAttribute("value", "https://softmind.tech");
currentTab = "web";
});
emailButton.addEventListener("click", function(evt) {
text.setAttribute("value", "[email protected]");
currentTab = "email";
});
profileButton.addEventListener("click", function(evt) {
text.setAttribute("value", "AR, VR solutions and consultation");
currentTab = "profile";
});
locationButton.addEventListener("click", function(evt) {
console.log("loc");
text.setAttribute("value", "Vancouver, Canada | Hong Kong");
currentTab = "location";
});
text.addEventListener("click", function(evt) {
if (currentTab === "web") {
window.location.href = "https://softmind.tech";
}
});
};
const showPortfolio = done => {
const portfolio = document.querySelector("#portfolio-panel");
const portfolioLeftButton = document.querySelector(
"#portfolio-left-button"
);
const portfolioRightButton = document.querySelector(
"#portfolio-right-button"
);
const paintandquestPreviewButton = document.querySelector(
"#paintandquest-preview-button"
);
let y = 0;
let currentItem = 0;
portfolio.setAttribute("visible", true);
const showPortfolioItem = item => {
for (let i = 0; i <= 2; i++) {
document
.querySelector("#portfolio-item" + i)
.setAttribute("visible", i === item);
}
};
const id = setInterval(() => {
y += 0.008;
if (y >= 0.6) {
clearInterval(id);
portfolioLeftButton.setAttribute("visible", true);
portfolioRightButton.setAttribute("visible", true);
portfolioLeftButton.addEventListener("click", () => {
currentItem = (currentItem + 1) % 3;
showPortfolioItem(currentItem);
});
portfolioRightButton.addEventListener("click", () => {
currentItem = (currentItem - 1 + 3) % 3;
showPortfolioItem(currentItem);
});
paintandquestPreviewButton.addEventListener("click", () => {
paintandquestPreviewButton.setAttribute("visible", false);
const testVideo = document.createElement("video");
const canplayWebm = testVideo.canPlayType(
'video/webm; codecs="vp8, vorbis"'
);
if (canplayWebm == "") {
document
.querySelector("#paintandquest-video-link")
.setAttribute("src", "#paintandquest-video-mp4");
document.querySelector("#paintandquest-video-mp4").play();
} else {
document
.querySelector("#paintandquest-video-link")
.setAttribute("src", "#paintandquest-video-webm");
document.querySelector("#paintandquest-video-webm").play();
}
});
setTimeout(() => {
done();
}, 500);
}
portfolio.setAttribute("position", "0 " + y + " -0.01");
}, 10);
};
const showAvatar = onDone => {
const avatar = document.querySelector("#avatar");
let z = -0.3;
const id = setInterval(() => {
z += 0.008;
if (z >= 0.3) {
clearInterval(id);
onDone();
}
avatar.setAttribute("position", "0 -0.25 " + z);
}, 10);
};
AFRAME.registerComponent("mytarget", {
init: function() {
this.el.addEventListener("targetFound", event => {
console.log("target found");
showAvatar(() => {
setTimeout(() => {
showPortfolio(() => {
setTimeout(() => {
showInfo();
}, 300);
});
}, 300);
});
});
this.el.addEventListener("targetLost", event => {
console.log("target lost");
});
//this.el.emit('targetFound');
}
});
</script>
<style>
body {
margin: 0;
}
.example-container {
overflow: hidden;
position: absolute;
width: 100%;
height: 100%;
}
#example-scanning-overlay {
display: flex;
align-items: center;
justify-content: center;
position: absolute;
left: 0;
right: 0;
top: 0;
bottom: 0;
background: transparent;
z-index: 2;
}
@media (min-aspect-ratio: 1/1) {
#example-scanning-overlay .inner {
width: 50vh;
height: 50vh;
}
}
@media (max-aspect-ratio: 1/1) {
#example-scanning-overlay .inner {
width: 80vw;
height: 80vw;
}
}
#example-scanning-overlay .inner {
display: flex;
align-items: center;
justify-content: center;
position: relative;
background: linear-gradient(to right, white 10px, transparent 10px) 0 0,
linear-gradient(to right, white 10px, transparent 10px) 0 100%,
linear-gradient(to left, white 10px, transparent 10px) 100% 0,
linear-gradient(to left, white 10px, transparent 10px) 100% 100%,
linear-gradient(to bottom, white 10px, transparent 10px) 0 0,
linear-gradient(to bottom, white 10px, transparent 10px) 100%
0,
linear-gradient(to top, white 10px, transparent 10px) 0 100%,
linear-gradient(to top, white 10px, transparent 10px) 100%
100%;
background-repeat: no-repeat;
background-size: 40px 40px;
}
#example-scanning-overlay.hidden {
display: none;
}
#example-scanning-overlay img {
opacity: 0.6;
width: 90%;
align-self: center;
}
#example-scanning-overlay .inner .scanline {
position: absolute;
width: 100%;
height: 10px;
background: white;
animation: move 2s linear infinite;
}
@keyframes move {
0%,
100% {
top: 0%;
}
50% {
top: calc(100% - 10px);
}
}
</style>
<a-scene
mindar="imageTargetSrc: https://cdn.jsdelivr.net/gh/hiukim/[email protected]/examples/assets/card-example/card.mind; showStats: false; uiScanning: #example-scanning-overlay;"
embedded
color-space="sRGB"
renderer="colorManagement: true, physicallyCorrectLights"
vr-mode-ui="enabled: false"
device-orientation-permission-ui="enabled: false"
>
<a-assets>
<img id="card" src="./assets/card-example/card.png" />
<img id="icon-web" src="./assets/card-example/icons/web.png" />
<img
id="icon-location"
src="./assets/card-example/icons/location.png"
/>
<img
id="icon-profile"
src="./assets/card-example/icons/profile.png"
/>
<img id="icon-phone" src="./assets/card-example/icons/phone.png" />
<img id="icon-email" src="./assets/card-example/icons/email.png" />
<img
id="icon-play"
src="https://cdn.glitch.com/b38eb9a2-8d3c-4e64-998d-6d0738b4c845%2Fplay.png?v=1616294545285"
/>
<img id="icon-left" src="./assets/card-example/icons/left.png" />
<img id="icon-right" src="./assets/card-example/icons/right.png" />
<img
id="paintandquest-preview"
src="./assets/card-example/portfolio/paintandquest-preview.png"
/>
<video
id="paintandquest-video-mp4"
autoplay="false"
loop="true"
src="https://cdn.glitch.com/d854003b-b32d-455a-98db-95fe418cab4c%2Fpaintandquest.mp4?v=1616389137177"
></video>
<video
id="paintandquest-video-webm"
autoplay="false"
loop="true"
src="https://cdn.glitch.com/d854003b-b32d-455a-98db-95fe418cab4c%2Fpaintandquest.webm?v=1616389074156"
></video>
<img
id="coffeemachine-preview"
src="./assets/card-example/portfolio/coffeemachine-preview.png"
/>
<img
id="peak-preview"
src="./assets/card-example/portfolio/peak-preview.png"
/>
<a-asset-item
id="avatarModel"
src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/examples/assets/card-example/softmind/scene.gltf"
></a-asset-item>
</a-assets>
<a-camera
position="0 0 0"
look-controls="enabled: false"
cursor="fuse: false; rayOrigin: mouse;"
raycaster="far: 10000; objects: .clickable"
>
</a-camera>
<a-entity id="mytarget" mytarget mindar-image-target="targetIndex: 0">
<a-plane
src="#card"
position="0 0 0"
height="0.552"
width="1"
rotation="0 0 0"
></a-plane>
<a-entity visible="false" id="portfolio-panel" position="0 0 -0.01">
<a-text
value="Portfolio"
color="black"
align="center"
width="2"
position="0 0.4 0"
></a-text>
<a-entity id="portfolio-item0">
<a-video
id="paintandquest-video-link"
webkit-playsinline
playsinline
width="1"
height="0.552"
position="0 0 0"
></a-video>
<a-image
id="paintandquest-preview-button"
class="clickable"
src="#paintandquest-preview"
alpha-test="0.5"
position="0 0 0"
height="0.552"
width="1"
>
</a-image>
</a-entity>
<a-entity id="portfolio-item1" visible="false">
<a-image
class="clickable"
src="#coffeemachine-preview"
alpha-test="0.5"
position="0 0 0"
height="0.552"
width="1"
>
</a-image>
</a-entity>
<a-entity id="portfolio-item2" visible="false">
<a-image
class="clickable"
src="#peak-preview"
alpha-test="0.5"
position="0 0 0"
height="0.552"
width="1"
>
</a-image>
</a-entity>
<a-image
visible="false"
id="portfolio-left-button"
class="clickable"
src="#icon-left"
position="-0.7 0 0"
height="0.15"
width="0.15"
></a-image>
<a-image
visible="false"
id="portfolio-right-button"
class="clickable"
src="#icon-right"
position="0.7 0 0"
height="0.15"
width="0.15"
></a-image>
</a-entity>
<a-image
visible="false"
id="profile-button"
class="clickable"
src="#icon-profile"
position="-0.42 -0.5 0"
height="0.15"
width="0.15"
animation="property: scale; to: 1.2 1.2 1.2; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"
></a-image>
<a-image
visible="false"
id="web-button"
class="clickable"
src="#icon-web"
alpha-test="0.5"
position="-0.14 -0.5 0"
height="0.15"
width="0.15"
animation="property: scale; to: 1.2 1.2 1.2; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"
></a-image>
<a-image
visible="false"
id="email-button"
class="clickable"
src="#icon-email"
position="0.14 -0.5 0"
height="0.15"
width="0.15"
animation="property: scale; to: 1.2 1.2 1.2; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"
></a-image>
<a-image
visible="false"
id="location-button"
class="clickable"
src="#icon-location"
position="0.42 -0.5 0"
height="0.15"
width="0.15"
animation="property: scale; to: 1.2 1.2 1.2; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"
></a-image>
<a-gltf-model
id="avatar"
rotation="0 0 0"
position="0 -0.25 0"
scale="0.004 0.004 0.004"
src="#avatarModel"
></a-gltf-model>
<a-text
id="text"
class="clickable"
value=""
color="black"
align="center"
width="2"
position="0 -1 0"
geometry="primitive:plane; height: 0.1; width: 2;"
material="opacity: 0.5"
></a-text>
</a-entity>
</a-scene>
Hello,
when I compile more than 2 images and add the resulting targets.mind to my project,
the project doesn't run and just shows "loading...".
Please help me solve this problem.
Your image target tool is not working. After uploading the photo, nothing happens when I click the start button. I waited for around 30 minutes and nothing happened.
If you give me a few pointers to start, I will look at making the gyroscope take over for a limited time when the marker is lost.
When I try to compile an image with the target compiler (https://hiukim.github.io/mind-ar-js-doc/tools/compile/), it fails at 50%.
I'm using Firefox.
I tried with 2 images without success. Here is the error from the console, if it helps.
I tried with this image :
And this one :
Any idea where I can compile the file other than on the website?
Trying the facetracking examples, I get this message
In the source code there are lines like
<script src="../../dist-dev/mindar-face.js"></script>
But the folder dist-dev doesn't exist, nor does the file mindar-face.js. The folder that exists is just "dist" and the file is "mindar-face.prod.js". Even after changing these things I still get the error. I would love to try out the face tracking, but I can't get it to work.
I know that I need to add some code here:
exampleTarget.addEventListener("targetLost", event => {
console.log("target lost");
});
I tried to add this:
exampleTarget.addEventListener("targetLost", event => { arSystem.el.object3D.visible = true; console.log("target lost"); });
but it didn't work.
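For the "keep it in front of the camera" part, the underlying math is simple: given the camera's world matrix (column-major, three.js layout, camera looking down its local -Z axis), the world position at distance d in front of the lens is:

```javascript
// Pure sketch of "place the object d units in front of the camera".
// cameraMatrixWorld is a 16-element column-major array (three.js layout).
function positionInFrontOfCamera(cameraMatrixWorld, distance) {
  const e = cameraMatrixWorld;
  const pos = [e[12], e[13], e[14]];  // camera world position (4th column)
  const fwd = [-e[8], -e[9], -e[10]]; // negated 3rd column: the -Z forward
  return [
    pos[0] + fwd[0] * distance,
    pos[1] + fwd[1] * distance,
    pos[2] + fwd[2] * distance,
  ];
}
```

In A-Frame you would copy this position onto the entity's object3D each frame while the target is lost, while also forcing object3D.visible = true; the names here are assumptions for illustration, not MindAR API.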
Hi,
I am trying to capture screenshot of the scene & camera.
When I tried to access the camera texture using:
document.querySelector('video');
This is returning a black screen.
Any ideas how to correct this issue?
Thanks
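A common pattern for this kind of capture is to composite the camera <video> element and the WebGL canvas yourself. A sketch, assuming the three.js renderer was created with preserveDrawingBuffer: true (otherwise the GL canvas often reads back black, which may be the cause here; unverified):

```javascript
// Composite the camera feed and the AR canvas into one screenshot.
// video: the camera <video>; glCanvas: the WebGL canvas; outCanvas: a
// scratch 2D canvas (e.g. document.createElement('canvas') in a browser).
function captureScene(video, glCanvas, outCanvas) {
  outCanvas.width = glCanvas.width;
  outCanvas.height = glCanvas.height;
  const ctx = outCanvas.getContext('2d');
  ctx.drawImage(video, 0, 0, outCanvas.width, outCanvas.height);    // camera feed first
  ctx.drawImage(glCanvas, 0, 0, outCanvas.width, outCanvas.height); // AR layer on top
  return outCanvas.toDataURL('image/png');
}
```

Browser usage (illustrative): captureScene(document.querySelector('video'), document.querySelector('canvas'), document.createElement('canvas')).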
I developed an AR experience using version 0.4.2 together with the aframe-animation-timeline-component. After updating to 1.0.0, everything stopped working: I don't get the loading spinner animation, and there are no errors in the console. Did something change?
Hi guys,
I'm pretty new to programming and this platform, so sorry for the basic question.
I was trying the examples, and when you turn your phone sideways (rotate it), the view is no longer correct: you can only see the camera image in the lower-left corner of the browser. The problem appears when you change from one orientation to the other (e.g. vertical to horizontal).
Here I attached an image.
Does it happen to you? How can I solve this?
Thank you very much for your help.
In Chrome I get this warning:
mindar.prod.js:2 [Violation] Added non-passive event listener to a scroll-blocking 'touchstart' event. Consider marking event handler as 'passive' to make the page more responsive. See https://www.chromestatus.com/feature/5745543795965952
and the click event is not working.
Something like this.
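For the warning itself, the third argument of the DOM addEventListener spec lets you mark a handler as passive. A sketch (whether this also fixes the broken click here is untested):

```javascript
// Register a touch handler as passive so it cannot call preventDefault(),
// which silences the scroll-blocking violation in Chrome.
function addPassive(el, type, handler) {
  el.addEventListener(type, handler, { passive: true });
  return handler;
}
// Browser usage (illustrative):
// addPassive(document.body, 'touchstart', onTouchStart);
```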
The camera cannot be opened in the Android Edge browser (app version 96.0.1043.0); it works fine in older versions of Edge.
The scanning screen provides clear GUI feedback to the user that scanning is in progress, and as such it is very helpful. However, once a target match has been found, it never comes back.
Is there any way of making it show up again after a targetLost event?
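Since the overlay in the examples is a plain element toggled with a .hidden class, one workaround is to rebind it to the target events yourself. A sketch (the element id comes from the example markup; everything else is an assumption):

```javascript
// Re-show the scanning overlay whenever the target is lost, and hide it
// again when it is found, by toggling the example's .hidden class.
function bindScanningOverlay(targetEl, overlayEl) {
  targetEl.addEventListener('targetFound', () => overlayEl.classList.add('hidden'));
  targetEl.addEventListener('targetLost', () => overlayEl.classList.remove('hidden'));
}
// Browser wiring (illustrative):
// bindScanningOverlay(document.querySelector('[mindar-image-target]'),
//   document.querySelector('#example-scanning-overlay'));
```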
Hi @hiukim, I challenged myself to use your library in a React project. I used jsDelivr to load the library from a CDN, then created a component with the source code of example 1 from this repo. Unfortunately, I didn't succeed and got the following error:
Any tips to tackle this? 🙂
You can find my repo here.
Hi,
Is there any way to show an animated GIF in MindAR? I have been searching for this for a long time and found https://github.com/mayognaise/aframe-gif-shader, but it is not compatible with MindAR. Please help me.
Hi, I want to trigger several actions when the marker is detected and when the marker is lost, but I'm not able to do so. Can you please let me know how to achieve this?
Hi, I'm new to mind-ar-js and Javascript. I have to create an AR experience where after having detected the target the AR experience starts, and I want it to continue even after the target is lost. Specifically, I have a video playing, and when I lose the target the audio keeps playing but the video is no longer visible, so I want the whole a-entity to be retained on screen. This is a snippet of the code I'm using. I'll be glad for any help!
AFRAME.registerComponent('mytarget-one', {
init: function () {
this.el.addEventListener('targetFound', event => {
console.log("target found");
setTimeout(() => {
setTimeout(() => {
showInfo();
}, 100);
}, 100);
});
this.el.addEventListener('targetLost', event => {
console.log("target lost");
//stopInfoOne();
});
this.el.emit('targetFound');
}
});
<a-scene mindar="imageTargetSrc: multitargets.mind; showStats: false;" embedded color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
<a-assets>...</a-assets>
<a-camera position="0 0 0" look-controls="enabled: false" cursor="fuse: false; rayOrigin: mouse;" raycaster="far: 10000; objects: .clickable"></a-camera>
<a-entity id="mytarget-one" mytarget-one mindar-image-target="targetIndex: 1">
<a-video id="paintandquest-video-link_wear" webkit-playsinline playsinline muted autoplay width="1" height="0.552" position="0 0 0.2" ></a-video>
<a-image id="paintandquest-preview-button" class="clickable" src="#paintandquest-preview" alpha-test="0.5" position="0 0 0.25" height="0.552" width="1">
</a-image>
<a-image id="linkedin" class="clickable" src="#icon-profile" position="-0.42 -0.5 0" height="0.15" width="0.15"
></a-image>
<a-image id="facebook" class="clickable" src="#icon-web" alpha-test="0.5" position="-0.14 -0.5 0" height="0.15" width="0.15"
></a-image>
<a-image id="web" class="clickable" src="#icon-email" position="0.14 -0.5 0" height="0.15" width="0.15"
></a-image>
<a-image id="contact" class="clickable" src="#icon-location" position="0.42 -0.5 0" height="0.15" width="0.15"
></a-image>
</a-entity>
I'm trying to use the targetFound and targetLost events to play/pause an audio clip. But even with autoplay set to false, I get the following message in the console as soon as the application is loaded. Any workaround to fix this?
The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page.
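The browser requires a user gesture before audio can start; a common workaround is to gate the first play() behind a gesture. A minimal sketch, with no MindAR-specific API involved:

```javascript
// Defer the play callback until the first user gesture, as the
// AudioContext autoplay policy requires; later gestures are no-ops.
function makeGestureGate(play) {
  let unlocked = false;
  return function onGesture() {
    if (unlocked) return false;
    unlocked = true;
    play(); // e.g. resume the AudioContext or play the <audio> element
    return true;
  };
}
// Browser wiring (illustrative):
// document.body.addEventListener('click',
//   makeGestureGate(() => audioEl.play()), { once: true });
```

After the gate has fired once, targetFound/targetLost handlers can play and pause freely.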
Is there any way to compile images with Node.js? Just running a command in the console, or something similar.
Hey @hiukim, how do images that are mostly transparent background work on mind-ar? With JsArtoolkit5 the background becomes black, and then there are issues tracking the image on white or bright backgrounds.
I copied the example code here: https://hiukim.github.io/mind-ar-js-doc/quick-start/overview
When I ran it, I initially received crossorigin errors but fixed them by adding crossorigin="anonymous" to the scripts.
Now I'm seeing:
GET https://cdn.jsdelivr.net/gh/hiukim/[email protected]/dist/mindar-image.aframe.js net::ERR_ABORTED 404
Seems it's unable to load the mind ar a-frame resource?
Love the tool. Can't wait to try this new version. Thanks!
Hi, I found that when I call arSystem.stop after the image target is found, the rendered object does not disappear. Is there any way to reset the canvas after I stop the system?
Hi,
I'm using mind-ar with an a-video as asset with no additional overlay or interactivity.
Image target specs:
656x656
892KB size
Video asset specs:
9 seconds video length
mp4 format
320x320
30 fps
643KB size
The image tracker is spot-on, but it is somewhat slow to follow the image target, and the frame rate is low except on iPhone, which gets a good fps.
I've tested on devices:
Asus Zenfone Max Pro M2 (< 15 Fps) Chrome Browser
Xiaomi Redmi 8 (< 15 Fps) Chrome Browser
Samsung Galaxy A71 (< 20 Fps) Samsung Internet Browser
Samsung Galaxy A7 (< 10 Fps) Chrome Browser
Samsung Galaxy Tab 8 (< 10 Fps) Chrome Browser
Apple Iphone 7 plus (< 45 Fps) Safari Browser
Is there any way to improve fps? and tracker follow speed?
Thank you.
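On the "follow speed" side, jitter versus lag is typically a smoothing trade-off in the pose filter. An illustrative exponential smoother over pose values (this is not MindAR's internal filter, just a sketch of the principle):

```javascript
// Exponential smoothing over an array of pose values (e.g. position xyz).
// Higher alpha follows the tracker faster but jitters more; lower alpha
// is smoother but lags further behind the marker.
function makeSmoother(alpha) {
  let prev = null;
  return function smooth(values) {
    if (prev === null) {
      prev = values.slice(); // first sample passes through unchanged
    } else {
      prev = prev.map((p, i) => p + alpha * (values[i] - p));
    }
    return prev.slice();
  };
}
```

Lowering the render resolution and keeping target images small are the other usual levers for raw fps on low-end devices.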
Hi @hiukim, I've tried a couple of images before trying out the default card.png that is provided with the repo. In all cases, the generated .mind file isn't getting recognized. I also noticed a 2 KB size difference between the card.mind that's included in the example and the one I generate from the same image.
Here is one of my images:
I am running the file in a mobile browser, but my phone is connected to an HMD (head-mounted display) which has a built-in camera.
I want the application to select the HMD's camera rather than the phone's normal camera.
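One heuristic approach is to enumerate the video inputs and pick one by label; a sketch (label matching is an assumption about how the HMD camera is named, and labels are only populated after camera permission has been granted):

```javascript
// Pick a video input whose label mentions the given keyword (e.g. the
// HMD's name), falling back to the first camera, or null if none exist.
function pickCamera(devices, keyword) {
  const cams = devices.filter((d) => d.kind === 'videoinput');
  const match = cams.find((d) =>
    (d.label || '').toLowerCase().includes(keyword.toLowerCase()));
  return match || cams[0] || null;
}
// Browser usage (illustrative):
// const devices = await navigator.mediaDevices.enumerateDevices();
// const cam = pickCamera(devices, 'hmd');
// then pass { video: { deviceId: { exact: cam.deviceId } } } to getUserMedia.
```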
Hi,
I really like your solution and it has helped me a lot.
I have one requirement: when multiple markers are used, I want to trigger functions based on which marker was detected. I can already react collectively when any marker is detected or lost. Please help me with this.
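A sketch of per-marker dispatch: attach one listener pair per target entity so each callback receives the marker's index (how the entities are looked up, and what the callbacks do, is up to the caller):

```javascript
// Bind targetFound/targetLost listeners to each target entity, passing
// the entity's index through so every marker triggers its own action.
function bindMarkerHandlers(entities, onFound, onLost) {
  entities.forEach((el, index) => {
    el.addEventListener('targetFound', () => onFound(index));
    el.addEventListener('targetLost', () => onLost(index));
  });
}
// In A-Frame (illustrative): entities would be the elements carrying
// mindar-image-target="targetIndex: 0", "targetIndex: 1", and so on.
```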
Hi @hiukim, while I was testing your example https://hiukim.github.io/mind-ar-js/samples/example1.html I received this error:
gpu-browser.js:14991 Uncaught (in promise) Error: Error compiling fragment shader: Fragment shader compilation failed.
ERROR: 0:2: '' : integer constant overflow
ERROR: 0:67: '' : integer constant overflow
ERROR: 2 compilation errors. No code generated.
at WebGLKernel.build (gpu-browser.js:14991)
at WebGLKernel.run (gpu-browser.js:18496)
at shortcut (gpu-browser.js:18516)
at Tracker._combineImageList (tracker.js:334)
at new Tracker (tracker.js:38)
at eval (controller.js:84)
Of course, nothing can be tracked or displayed.
Maybe my Android device is not supported? I tested with a Wiko View running Android 7.1.2, but I will test with another device.
EDIT: tested on Chrome 86.0.4240.110.