ofxARKit's Issues

Should change namespace ARCore

Probably ought to change this namespace at some point so that people who might be working with Google's ARCore at the same time won't get confused.

Implement Scanning of Objects

Implement the ARObjectScanningConfiguration for an ARKit session. This is its own configuration type and should only be used by apps that are actively doing scanning (according to Apple).

In order to scan something you first need to define a boundary cube around it. We need a transform representing a local coordinate system for the object, an extent measuring the width, height, and depth of the object, and lastly a center point relative to the origin of the transform. In the examples I've seen, they use a GUI with a semi-transparent cube alongside the horizontal plane detector, so you can position the cube directly on what you'd like to scan. Once you have all that you can call this very long method, which should return the reference object that can be saved and used to detect the object within another app.
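For reference, a minimal Objective-C++ sketch of that flow (assuming an existing ARSession *session and a user-positioned bounding box; boxTransform / boxCenter / boxExtent are placeholders):

ARObjectScanningConfiguration *config = [ARObjectScanningConfiguration new];
config.planeDetection = ARPlaneDetectionHorizontal;
[session runWithConfiguration:config];

// once the cube (transform / center / extent) has been positioned:
[session createReferenceObjectWithTransform:boxTransform   // local coordinate system of the object
                                     center:boxCenter      // relative to the transform's origin
                                     extent:boxExtent      // width / height / depth
                          completionHandler:^(ARReferenceObject *obj, NSError *error){
    if (obj) {
        // save the .arobject file so it can be loaded for detection later
        NSURL *url = [[[NSFileManager defaultManager] temporaryDirectory]
                      URLByAppendingPathComponent:@"scan.arobject"];
        [obj exportObjectToURL:url previewImage:nil error:nil];
    }
}];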

I think we should definitely include a sample app that just does scanning as a utility. The app should probably let users save both the reference object file and the usdz model generated during the scan (though I haven't yet seen how to access the raw 3d information from the reference object).

Here's Apple's sample app and another demo from Unity.

Add texture to Matte sample

I'm a little confused about how we can manipulate the sample shader in the new Matte example. I understand that we're getting the RGBA values from the camera texture and passing them into the shader in Camera.h, but I'm having trouble understanding how to hook that up from our ofApp.mm file.

Any help would be greatly appreciated!
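In case it helps, here is a rough sketch of one way to drive a shader from ofApp.mm, assuming the processor exposes the camera image via getCameraTexture() (as used elsewhere in this addon) and that the fragment shader has a sampler uniform named "cameraTex" (the uniform name is just an assumption):

void ofApp::draw(){
    ofTexture camTex = processor->getCameraTexture();

    shader.begin();
    shader.setUniformTexture("cameraTex", camTex, 1); // bind the camera image to texture unit 1
    shader.setUniform2f("resolution", ofGetWidth(), ofGetHeight());
    ofDrawRectangle(0, 0, ofGetWidth(), ofGetHeight()); // full-screen quad the shader fills
    shader.end();
}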

Implement Environment Texture Probe Anchor

Implement the ARKit 2.0 AREnvironmentProbeAnchor class. This is a new kind of anchor class that has an environment texture cube map associated with it.

It looks like there is an automatic mode where it will try to pick the best spot to create the anchor / texture, as well as a manual mode where you can choose when and where to place the anchor or generate the texture.

The environment anchors have an environmentTexture property, but unfortunately it is returned as an MTLTexture, so it would likely need to be converted to OpenGL for any use within oF. I found some info here but didn't dig too deeply into this.
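A hedged Objective-C++ sketch of the two modes described above (assumes an existing ARSession *session and ARWorldTrackingConfiguration *configuration; the probe transform and extent are placeholders):

// automatic mode: ARKit decides where to generate probe anchors / textures
configuration.environmentTexturing = AREnvironmentTexturingAutomatic;

// manual mode: place a probe anchor yourself at a transform with a given extent
configuration.environmentTexturing = AREnvironmentTexturingManual;
AREnvironmentProbeAnchor *probe =
    [[AREnvironmentProbeAnchor alloc] initWithTransform:probeTransform
                                                 extent:(simd_float3){1.0, 1.0, 1.0}];
[session addAnchor:probe];

// later, probe.environmentTexture is an id<MTLTexture> cube map that would still
// need converting before it could be sampled from OpenGL in oF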

I think along with implementing this it might also be helpful to expose an easy way to use the cube map for reflections, or at least provide an example scene and shader showing how to do so.

Lastly, here's an example of this being used in Unity.

iPad Pro - drawPointCloud() draws grey screen.

Develop branch, commit 922a2a4

In example-basic, replacing
perform->draw()
with
perform->drawPointCloud()
produces a grey texture. Overlaying AR objects on top still works.

I tried directly calling
processor->pointCloud.setup(); in the main setup function and
processor->pointCloud.updatePointCloud(session.currentFrame); in the update function just to ensure the point cloud setup & update functions were being hit, but that didn't seem to affect anything.

Should I be expecting a different output from this function? Does drawPointCloud() need to be put after a camera matrix transform that I'm missing? Thanks in advance.

Semantic issue?

Hi. Getting a semantic issue error on build. Any thoughts?

[screenshot: Xcode semantic issue error, 2017-08-11 12:38]

example-planes fails

My environment is
macOS High Sierra 10.13.2
Xcode 9.2
iPad 12.9" (2nd gen)
iOS 11.2.5

Getting this error, even though I have checked the privacy settings and the plist entry that we need for it to work. Other examples work out fine, but example-planes just doesn't.
Any advice on where to look?

2018-02-17 17:50:05.696577+0900 example-planes[1935:1009465] [Session] Session (0x103f08b10): did fail with error: Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." UserInfo={NSLocalizedRecoverySuggestion=Make sure that the application has the required privacy settings., NSLocalizedDescription=Required sensor failed., NSLocalizedFailureReason=A sensor failed to deliver the required input.}

iPhone X face tracking related changes

I will make a PR, just a record of things I'm noticing that need to be fixed:

a) this:

std::vector<FaceAnchorObject> faces;
needs to be public or we need an accessor

b) when we've already found a face, we still need to store the raw pointer:

if(it != faces.end()){
    faces[index].raw = pa;

c) raw is actually an anchor (it extends ARAnchor) which contains a transform, so we can use it:

for (auto & face : processor->anchorController->faces){

    ofMatrix4x4 temp = ARCommon::toMat4(face.raw.transform);

    ofPushMatrix();
    ofMultMatrix(temp);
    for (auto & pt : face.vertices){
        ofCircle(pt, 0.001);
    }
    ofPopMatrix();
}

then we're in business:

[screenshot: 2018-01-21 12:06 pm]

d) face.mesh seems empty at the moment -- I guess it needs to be constructed. The vertices seem good.

Tracking State

I haven't looked into this much yet, but it could be nice to provide a wrapper around the ARCamera tracking state.

Something like getTrackingState() might suffice. ARKit also provides reasons why tracking might be inhibited, which could be useful for debugging or providing user feedback.
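A hedged sketch of what such a wrapper might look like (getTrackingState() is a hypothetical name, and session is assumed to be the addon's ARSession):

std::string getTrackingState(){
    switch (session.currentFrame.camera.trackingState) {
        case ARTrackingStateNormal:
            return "normal";
        case ARTrackingStateLimited: {
            // trackingStateReason explains why tracking is degraded
            switch (session.currentFrame.camera.trackingStateReason) {
                case ARTrackingStateReasonInitializing:         return "limited: initializing";
                case ARTrackingStateReasonExcessiveMotion:      return "limited: excessive motion";
                case ARTrackingStateReasonInsufficientFeatures: return "limited: insufficient features";
                default:                                        return "limited";
            }
        }
        case ARTrackingStateNotAvailable:
            return "not available";
    }
    return "unknown";
}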

Camera image texture not working?

I tried out example-basic, example-image-recognition, and example-face-tracking. While the tracking works well and virtual objects get drawn, the camera feed in the app is gray for me and I get this warning in the Xcode console:

2018-09-17 11:34:25.187191-0700 example-basic[2632:615621] Failed to create IOSurface image (texture)

[screenshot: 2018-09-17 11:34 am]


For reference, here is what example-basic looks like:

[screen recording: gray camera texture in example-basic]

The OF logo planes appear & track correctly whenever I tap, but as you can see the rest of the scene is gray, not the camera image texture.

#define to disable face tracking

I think (not 100% sure) that if your app includes ARKit code related to face tracking, then your App Store submission needs to have a privacy policy or it may be rejected. It might be good to gate this behind a #define so that it's easy to just compile out those symbols.
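One possible approach, sketched here with a hypothetical macro name, would be to guard every face tracking symbol so it can be stripped before a submission:

// #define OFXARKIT_ENABLE_FACE_TRACKING 1   // comment out to compile face tracking out of the build

#ifdef OFXARKIT_ENABLE_FACE_TRACKING
    ARFaceTrackingConfiguration *config = [ARFaceTrackingConfiguration new];
    [session runWithConfiguration:config];
#endif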

Examples aren't functional out-of-the-box

Currently, when either example-planes or example-lighting is built, the only things visible are the debug text and the camera input; any geometry ends up drawing beneath the processor->draw call.

It seems to me that there is some kind of depth issue, preventing objects from drawing on top of the camera input.

Is this something that any of you are noticing, or could it in some way come down to my configuration? I'm new to iOS and ARKit, but not new to oF in general.

Implement ARImageTrackingConfiguration

Implement a new session configuration for ARImageTrackingConfiguration.

This new configuration tracks the world based on recognizing images rather than by observing device motion, etc. The advantage of this seems to be that it's much faster, and you could potentially track a whole bunch of images all at once. The downside is that they have to be in front of the camera: this mode can only orient the device in the world if the camera can see a known reference image. Another upside that Apple mentions is the ability to do AR inside of moving environments like train cars or airplanes.

A property for capping the maximum number of tracked images has also been added, both to the normal world tracking configuration and to this new one. Probably not that important, but worth documenting anyway.
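A minimal Objective-C++ sketch of the new configuration (it assumes the reference images live in an asset catalog group named "AR Resources"; the group name is an assumption):

NSSet<ARReferenceImage *> *images =
    [ARReferenceImage referenceImagesInGroupNamed:@"AR Resources" bundle:nil];

ARImageTrackingConfiguration *config = [ARImageTrackingConfiguration new];
config.trackingImages = images;
config.maximumNumberOfTrackedImages = 4; // the new cap mentioned above
[session runWithConfiguration:config];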

My guess is that this feature is probably one of the easier ones to implement out of the ARKit 2.0 features, especially since some of this functionality has already been implemented in the most recent ofxARKit.

Lastly, another Unity video showing this in action.

Differentiating anchors onscreen vs. offscreen

Thanks again for the killer ARKit addon :)

Question:
I'm trying to build something so that visuals (either animation or audio) trigger when the view sees them on screen. For example, a room might have 10 anchors plotted around it, but each only animates when the viewer sees it on screen (e.g. think the opposite of the stalking ghost in Mario).

Is there a good way to know which of the anchor points a user created are currently visible? I'm building from the "example-anchormanager" code and I assume the solution would be something in relation to the ofCamera? (line [48])

But any ideas/solutions would be great :)
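One possible approach, sketched under the assumption that an anchor's world position is available as an ofVec3f and that processor->getCameraMatrices() returns the view/projection matrices as elsewhere in the addon: project the point into normalized device coordinates and check that it lands inside the -1..1 range.

bool isAnchorOnScreen(ofVec3f anchorPos){
    ARCommon::ARCameraMatrices mats = processor->getCameraMatrices();

    ofVec4f p(anchorPos.x, anchorPos.y, anchorPos.z, 1.0);
    p = p * mats.cameraView;        // world -> camera space
    bool inFront = p.z < 0;         // the OpenGL-style camera looks down -z
    p = p * mats.cameraProjection;  // camera -> clip space
    p /= p.w;                       // clip -> normalized device coordinates (-1..1)

    return inFront && fabs(p.x) <= 1.0 && fabs(p.y) <= 1.0;
}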

Implement ARWorldMap

ARWorldMap enables persistent AR (i.e. if you close the app and reopen it in the same location it should still work), as well as multi-user AR. To my eyes it looks like the world map is essentially a copy of the AR session state.

It looks like in Apple's example they are using the Multipeer Connectivity framework to send the world map between devices. I found some work done on getting multipeer going with oF, but it has been inactive for a few years now.

My feeling is that the first step in this would be capturing and saving the world map to device for persistence. This method looks like it can grab the map if the world map status is adequate.

After a map is saved you can set a session's initialWorldMap property to load it up when a session begins.

As for the multipeer AR, it looks like you can send the map in the completion handler of getCurrentWorldMapWithCompletion, and upon receiving the map you can reinitialize your AR session with it. Apple recommends only sending the world map once since it is a resource-heavy operation. You'd then need some additional methods for sharing and updating the anchors / objects placed in the world between users (i.e. sendAnchor(ofVec3f anchor)).
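A rough Objective-C++ sketch of the persistence path described above (mapURL and the error handling are placeholders):

[session getCurrentWorldMapWithCompletionHandler:^(ARWorldMap *map, NSError *error){
    if (!map) return;
    NSData *data = [NSKeyedArchiver archivedDataWithRootObject:map
                                         requiringSecureCoding:YES
                                                         error:nil];
    [data writeToURL:mapURL atomically:YES];   // save to disk for the next launch
}];

// on a later launch, load the map and hand it to the configuration
NSData *data = [NSData dataWithContentsOfURL:mapURL];
ARWorldMap *savedMap = [NSKeyedUnarchiver unarchivedObjectOfClass:[ARWorldMap class]
                                                         fromData:data
                                                            error:nil];
ARWorldTrackingConfiguration *config = [ARWorldTrackingConfiguration new];
config.initialWorldMap = savedMap;
[session runWithConfiguration:config];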

349 duplicate symbols for architecture arm64

I tried running the example-basic example but it fails with the following error:

349 duplicate symbols for architecture arm64

If I use PG to create an empty project with ofxARKit addon, I can successfully build and run the app on my iPad. But I don't know why I cannot build the examples included in the addon.

I would appreciate it if anyone can confirm this issue.

I use macOS 10.15.2 / Xcode 11.3.1

get world point in screen coords

One useful function I've been using a bunch seems to be missing from the library; I wonder if you'd consider adding something like this. I know it's pretty standard code, but I found it useful.

I know there is also a worldToScreen in ofCamera; I haven't tried it out, but maybe that would suffice?

ofVec2f worldToScreen(ofPoint worldPoint){
    ARCommon::ARCameraMatrices mats = processor->getCameraMatrices();
    
    ofVec4f p =  ofVec4f(worldPoint.x, worldPoint.y, worldPoint.z, 1.0);
    
    p = p * mats.cameraView;
    p = p * mats.cameraProjection;
    
    p /= p.w;
    
    // convert coords to 0 - 1
    p.x = p.x * 0.5 + 0.5;
    p.y = p.y * 0.5 + 0.5;

    // convert coords to pixels
    p.x *= ofGetWidth();
    p.y *= ofGetHeight();
    return ofVec2f(p.x, p.y);
}

add ofEnableDepthTest() to example

It's a small thing, but example-basic should have depth testing turned on while it draws the OF logos and turned off before it draws the camera image and the overlay text. By default OF has depth testing turned off; the depth buffer is enabled in main.mm but that alone is not enough.

ie:

ofDisableDepthTest();
processor->draw();   // this is 2d drawing
ofEnableDepthTest();
if (session.currentFrame){
     ....  // draw 3d stuff
}
ofDisableDepthTest();
// more 2d drawing (overlay txt) 

description of manual steps if you use project generator

If you use the OF PG to update an ofxARKit project (for example, to update an example by adding an addon, etc.) there are some settings that get overwritten -- it might be useful to describe them in the readme. Off the top of my head --

  • set deployment target to 11.0
  • add "Privacy - Camera Usage Description" with a string to the plist
  • add the ARKit framework to linked frameworks

I think steps for making a new project or updating an existing project should be documented -- the PG uses a template and I think some settings get overwritten when you update.

I'll take a look; there might be a way to pass in more specific settings so this stuff doesn't have to be changed manually. In the meantime, notes about this would help!

Building a face mesh?

I'm working on a minimal example for face tracking on the iPhone X. I made the changes @ofZach outlined in #32 but both of the following:

        // approach 1
        face.rebuildFace();
        face.faceMesh.draw();

        // approach 2
        mesh.addVertices(face.vertices);
        mesh.addTexCoords(face.uvs);
        mesh.addIndices(face.indices);
        mesh.draw();

produce this strange little mesh around the eye:

[screenshot: 2018-03-08 1:23 am]

It seems like the face.indices vector is the problem here, as it only contains four numbers for me:

[1201, 44, 45, 1200]

By commenting out the call to addIndices and rendering a mesh of the vertices with OF_PRIMITIVE_LINE_STRIP, we can see the vertex order doesn't seem organized by nearest neighbor:

[screenshot: 2018-03-08 1:26 am]

and by coloring the lines by their index (via HSB), we can see this strange path even more clearly:

[screenshot: 2018-03-08 1:29 am]

Even this colored rendering gets collapsed down to the small square if I uncomment mesh.addIndices(face.indices);:

[screenshot: 2018-03-08 1:30 am]

Any thoughts on why I'm getting such strange output? Thanks!


Also, as an aside, I noticed you're not going to be able to work heavily on this project anymore. I'd be happy to be granted collaborator access & start helping out, I've been digging into this project a lot recently & will be depending on it for prototyping for a while!

"No such file" error in build

I'm very new to oF, apologies if this is a basic config issue and thanks for your help.

On a fresh install of of_v0.9.8_ios_release, iPhone 7+, iOS 11.

When I build example-basic, I get the following error:

clang: error: no such file or directory: '/Users/joshy/projects/of_v0.9.8_ios_release/apps/myApps/src/ARAnchorManager.mm'
clang: error: no input files

It's looking for this file one level up from the project. The project path is:
/Users/joshy/projects/of_v0.9.8_ios_release/apps/myApps/example-basic

The addon is at:
/Users/joshy/projects/of_v0.9.8_ios_release/addons/ofxARKit

Thank you!

ARKit API changes with Xcode 9 GM

Hi, I updated to the freshly-released Xcode 9 GM, then started getting build errors with the latest example apps. It looks like these 2 changes are needed due to ARKit API changes.

  1. ARCam.mm
    ARCameraMatrices ARCam::getMatricesForOrientation(UIInterfaceOrientation orientation,float near, float far){
        
        cameraMatrices.cameraView = convert<matrix_float4x4,ofMatrix4x4>([session.currentFrame.camera viewMatrixForOrientation:orientation]);
        
        // ARKit API has changed as of Xcode 9 GM
        //cameraMatrices.cameraProjection = convert<matrix_float4x4,ofMatrix4x4>([session.currentFrame.camera projectionMatrixWithViewportSize:viewportSize orientation:orientation zNear:near zFar:far]);
        cameraMatrices.cameraProjection = convert<matrix_float4x4,ofMatrix4x4>([session.currentFrame.camera projectionMatrixForOrientation:orientation viewportSize:viewportSize zNear:near zFar:far]);
        
        return cameraMatrices;
    }
  2. MyAppViewController.mm (you already had this marked as a todo)
- (void)loadView {
    ...

    // ARKit API has changed as of Xcode 9 GM
    // TODO should be ARWorldTrackingConfiguration now but not in current API(might need to re-download sdk)
    //ARWorldTrackingSessionConfiguration *configuration = [ARWorldTrackingSessionConfiguration new];
    ARWorldTrackingConfiguration *configuration = [ARWorldTrackingConfiguration new];

    ...
}

Thanks for this addon. Cheers.

Convert 2d screen point with depth information to 3d world point

cross posting from the forum.

I am trying to convert a 2d screen point with depth information from the camera into ARKit world space -- like a point cloud for the depth image.

I have tried many approaches, but I think I'm just thinking about it all wrong. I thought that I needed to unproject the point using the camera's projection and model view matrices. I followed this:
https://stackoverflow.com/questions/52461052/unprojecting-a-screen-point-to-a-horizontal-plane-in-arkit

Trying to unproject the 2d point + depth:

camera.begin();
processor->setARCameraMatrices();
           
auto projectionMat = processor->camera->getCameraMatrices().cameraProjection;
auto viewMat = processor->camera->getCameraMatrices().cameraView;
            
auto screenSize = ofxARKit::common::getDeviceDimensions();

// map to screen size
auto screenX = ofMap(x, 0, depthImage.getWidth(), 0, screenSize.x);
auto screenY = ofMap(y, 0, depthImage.getHeight(), 0, screenSize.y);

// ... for loop per pixel
            
ofMatrix4x4 inverse;
inverse.makeInvertOf(projectionMat * viewMat);

int index = y * int(depthImage.getWidth()) + x;

float px = (2.0 * screenX ) / screenSize.x - 1.0;
float py = 1.0 - (2.0 * screenY) / screenSize.y;

float depth = depthData[index];

ofVec4f inPoint(px, py, depth * 2.0 - 1.0, 1.0);

ofVec4f position = inverse * inPoint;
position.w = 1.0 / position.w;


ofVec3f point;
point.x = position.x * position.w;
point.y = position.y * position.w;
point.z = position.z * position.w;

ofPushMatrix();
ofTranslate(point);
// draw point here
ofPopMatrix();

camera.end();

I then tried multiplying by the camera's transform, and got some results, but they're still wrong.

camera.begin();
processor->setARCameraMatrices();
   
// normalize point
auto screenX = ofMap(x, 0, depthImage.getWidth(), 0, 1);
auto screenY = ofMap(y, 0, depthImage.getHeight(), 0, 1);

matrix_float4x4 translation = matrix_identity_float4x4;
translation.columns[3].x = screenX;
translation.columns[3].y = screenY;
translation.columns[3].z = -depth;
            
matrix_float4x4 transform = matrix_multiply(session.currentFrame.camera.transform, translation);

ofPushMatrix();
ofMatrix4x4 mat = convert<matrix_float4x4, ofMatrix4x4>(transform);
ofMultMatrix(mat);
ofPushMatrix();
// draw point here
ofPopMatrix();

camera.end();

Any thoughts?

State of the ofxARKit union - I haven't forgotten! But well, life. Also, how does a more general library sound?

Hello everyone!

With ARKit 3 around the corner - I just wanted to leave a quick note since I've seen a couple new stars, etc over the past couple months.

I know it's been a while since there have been large updates to the library; unfortunately I was in New York for most of last year without access to a suitable Mac, hence the lack of pushes on my part. Thankfully there were people who put in time and energy and helped improve the library to where it is today, or had clear ideas for how to evolve the plugin. 🎉 🎉

While I am back home now in California and have my Mac laptop again, life has unfortunately been a bit of an issue. Basically I thought I had a job I could fall back on; turns out that didn't work out for reasons unknown to me, so I've been scrambling trying to find something to help pay the bills (if you're looking for a web dev - or better yet a Jr. creative coder - well... hint hint).

That being the case it's been difficult as I've been looking for work and/or trying to put out more work to better my chances.

I haven't forgotten about this plugin, hopefully it is working as intended and if there are any issues, I'm more than happy to try and help you work through any problems you might run into.

One idea I've been bouncing around is to try and turn this into a more general library, something that isn't explicitly tied to openFrameworks and to somehow shoe-horn in ARCore support as they appear to have a bridge to ARKit.
I started working on something briefly that compiles with CMake which could work but haven't made much progress yet.

I also really want to clean things up more and come up with a better way to push new changes to master. It is a bit messy and somewhat inefficient haha.

Anyways, that's about it.

In the meantime, I'm always happy to help or review PRs if you have any features you'd like to add.

File reorganization

As mentioned in #50 it would be nice to reorganize the files in the src folder of this project in a more meaningful way.

@sortofsleepy Did you have any thoughts on what this might look like? Following the template of ofxCv for instance would look like:

docs
example-*
libs
--ofxArkit
----include
------all the .h files from old src folder except ofxARKit.h
----src
------all the .mm files from old src folder

src
--ofxARKit.h

example-basic adding and removing anchors without input

Even when I comment out all instances of [session addAnchor:anchor]; and processor->anchorController->addAnchor(); I get anchors randomly spawning in the scene. The most I've ever seen is 2 at a time, and it seems to occasionally remove the second after some time. Weirdly, I only notice this happening when pointing my device at the floor at around a 45 degree angle.

Seems like after some time session.currentFrame.anchors.count is changing on its own.

I'm moving to NY for a bit! Looking for more collaborators as I may have to put this on pause for awhile

Hey all!

So some news happened for me over the holidays - looks like I'm moving back to NYC for a bit at the end of the month! I'll admit, I do have a bit of trepidation given the recent bomb cyclone ❄️, but excited nonetheless 😄.

My family still has Macs, but I switched to Windows this year for my personal computer - I've been able to keep working on this project every once in a while thanks to the fact that the (soon to be ex) company that currently employs me gives everyone Macs and a device to test things on.

With me moving to NY - I obviously lose access to both of those things 😢, which means I may not be able to update anything till at least May.

I'm hopeful that the place I'm gonna be working for will allow me to borrow a Mac from time to time but I'm not counting on it so I'm putting this notice up ahead of time in case anyone would like to help out with maintaining this repo.

Obviously I can't just add anyone to the collab list, but as long as you seem like a pretty decent programmer and appear to have an understanding of OpenGL/WebGL/openFrameworks, I'll be happy to add you to the list if you're interested in helping.

Let me apologize ahead of time for the poor support you may receive if issues come up. If I think I can make changes without breaking anything I'll do so, but for the most part this project will likely get put on hold until I have access to a Mac and iPhone again.

In case you'd like to help but don't know where to start, one thing that could use work - since I haven't found the time to put more effort into it - would be more examples.

I will of course still be more than happy to answer questions about the design of the addon (which could probably use some improvement as well).

Thank you for your understanding. If you're in NYC and would like to hang out, please feel free to say hi too!

Twitchiness in examples

I noticed that there is some twitchiness to the planes that get added to the scene. I was able to especially trigger this by rotating my phone sideways into landscape orientation. When I do that their rotation flickers back and forth a bit. It also happens pretty often when you look at planes placed in the scene at more glancing angles.

It seems to happen to the planes at different times, which was curious to me because I thought that if it was something ARkit was doing based on the current camera position it should happen to all the planes at once.

Long story short, I think I tracked it down to the ofScale(0.0001) call. Not sure why this is happening but removing the call and just drawing my planes much smaller fixes it. So now when I do my draw I do something like this

for(int i = 0; i < mats.size(); i++){
    ofPushMatrix();
        ofMatrix4x4 tempMat;
        tempMat.set(mats[i].columns[0].x, mats[i].columns[0].y, mats[i].columns[0].z, mats[i].columns[0].w,
                    mats[i].columns[1].x, mats[i].columns[1].y, mats[i].columns[1].z, mats[i].columns[1].w,
                    mats[i].columns[2].x, mats[i].columns[2].y, mats[i].columns[2].z, mats[i].columns[2].w,
                    mats[i].columns[3].x, mats[i].columns[3].y, mats[i].columns[3].z, mats[i].columns[3].w);

        ofMultMatrix(tempMat);

        ofSetColor(255);
        ofRotate(90, 0, 0, 1);

        // magic numbers for scale
        float aspect = float(ofGetWidth()) / float(ofGetHeight());
        ofTranslate(0, 0, 0.21);
        img.draw(-aspect/8, -0.125, aspect/4, 0.25);
    ofPopMatrix();
}

There's got to be a better way to accurately figure out the real world screen size, but for now this works.

ARKit 2.0 (!)

Looks like it's that time again (or at least it was a couple of weeks ago). I thought I would open this just as a starting point for discussing the new features in 2.0.

Here's a list of the newly added features and their accompanying guides / docs

  1. World map
    1-1. multi-user AR
    1-2. persistent tracking
  2. Environmental Reflections
  3. Object Detection and Scanning (more on scanning here, and on detecting here, here, and here)
  4. A new image tracking mode

Also looks like some of these features will need the Multipeer Connectivity framework.

There's also the new 3d file format "usdz", which might need a loader if people are interested in using it. It's not clear to me if some of these new features request models in that format but I imagine it wouldn't matter as long as you have the 3d data. I couldn't find much info on the specifics of usdz though.

The 3d scanning and detection sounds really cool and I'm super excited to see progress on this (I mean actually multi-user and persistence seems cool as well :D ). @sortofsleepy Would it be helpful for me to make some of these features individual issues for people to pluck off if they'd like to contribute? Should we shoot for implementing all of these or just some? Thoughts?

horizontal plane detection

It would be great to add detected horizontal planes to this addon -- ARKit can give you info about planes as it finds them (https://blog.markdaws.net/arkit-by-example-part-2-plane-detection-visualization-10f05876d53), which might be useful if you have content that specifically needs to appear on a floor or table, etc., and for getting a base of reference.

I didn't look super hard into this, but I have only seen examples that do it with SceneKit, and I tried porting one of them over without much luck...

Implement blendShapes and lookAtPoint (gaze) for face detector

Blendshapes were added in 11.3 but I don't think they ever made it into the repo. There are quite a few of them listed out here. However, some new ones were added in 12.0 for the tongue. I don't have an X, so unfortunately I can't test these out. I would think you can just pull these out of the faceAnchor object's .raw property, but I haven't tried.

There's also a new lookAtPoint property of the faceAnchor that provides gaze direction. You can get individual eye transforms as well.

Since these are both just properties of faceAnchors, I'm not sure how best to handle implementing them. It might be nice to provide some constants where you could request a specific shape. Maybe something like:

if(anchor is faceAnchor){
    float tongueOut = getFeature(TONGUE);            // returns a float between 0 - 1
    float leftEyeBlink = getFeature(LEFT_EYE_BLINK); // 0.0 open, 1.0 closed
}
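Under the hood, the raw ARKit properties can be read straight off the face anchor; a small Objective-C++ sketch (assumes faceAnchor is an ARFaceAnchor*):

NSDictionary<ARBlendShapeLocation, NSNumber *> *shapes = faceAnchor.blendShapes;
float tongueOut    = shapes[ARBlendShapeLocationTongueOut].floatValue;    // iOS 12+
float leftEyeBlink = shapes[ARBlendShapeLocationEyeBlinkLeft].floatValue; // 0.0 open, 1.0 closed

simd_float3 lookAt    = faceAnchor.lookAtPoint;      // gaze point in face coordinate space (iOS 12+)
simd_float4x4 leftEye = faceAnchor.leftEyeTransform; // per-eye transforms (iOS 12+)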

Reading Camera Pixels

Hello,
I've been trying to read pixels from the camera. First I thought I could read them in from getCameraTexture():

ofTexture tex = processor->getCameraTexture();
img.allocate(tex.getWidth(), tex.getHeight(), OF_IMAGE_COLOR);
tex.readToPixels(img.getPixels()); // copy the GPU texture back into the image's pixels
img.update();

Looks like this is not working; I am also getting a texture size of 4000x4000. Without going down the road of trying to read the CVPixelBufferRef from the ARFrame, is there something I'm missing?

Whichever example I run I get "linker command failed with exit code 1"

My Xcode version is 10.1. I've run iOS apps in the past, but every ofxARKit example I try to run fails with the linker error.

The complete log is huge, but the last part looks like this:

Undefined symbols for architecture arm64:
  "_tessAddContour", referenced from:
      ofTessellator::tessellateToMesh(std::__1::vector<ofPolyline_<glm::tvec3<float, (glm::precision)0> >, std::__1::allocator<ofPolyline_<glm::tvec3<float, (glm::precision)0> > > > const&, ofPolyWindingMode, ofMesh_<glm::tvec3<float, (glm::precision)0>, glm::tvec3<float, (glm::precision)0>, ofColor_<float>, glm::tvec2<float, (glm::precision)0> >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
      ofTessellator::tessellateToPolylines(std::__1::vector<ofPolyline_<glm::tvec3<float, (glm::precision)0> >, std::__1::allocator<ofPolyline_<glm::tvec3<float, (glm::precision)0> > > > const&, ofPolyWindingMode, std::__1::vector<ofPolyline_<glm::tvec3<float, (glm::precision)0> >, std::__1::allocator<ofPolyline_<glm::tvec3<float, (glm::precision)0> > > >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
  "_tessTesselate", referenced from:
      ofTessellator::performTessellation(ofPolyWindingMode, ofMesh_<glm::tvec3<float, (glm::precision)0>, glm::tvec3<float, (glm::precision)0>, ofColor_<float>, glm::tvec2<float, (glm::precision)0> >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
      ofTessellator::performTessellation(ofPolyWindingMode, std::__1::vector<ofPolyline_<glm::tvec3<float, (glm::precision)0> >, std::__1::allocator<ofPolyline_<glm::tvec3<float, (glm::precision)0> > > >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
  "_tessGetVertexCount", referenced from:
      ofTessellator::performTessellation(ofPolyWindingMode, ofMesh_<glm::tvec3<float, (glm::precision)0>, glm::tvec3<float, (glm::precision)0>, ofColor_<float>, glm::tvec2<float, (glm::precision)0> >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
  "_tessGetVertices", referenced from:
      ofTessellator::performTessellation(ofPolyWindingMode, ofMesh_<glm::tvec3<float, (glm::precision)0>, glm::tvec3<float, (glm::precision)0>, ofColor_<float>, glm::tvec2<float, (glm::precision)0> >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
      ofTessellator::performTessellation(ofPolyWindingMode, std::__1::vector<ofPolyline_<glm::tvec3<float, (glm::precision)0> >, std::__1::allocator<ofPolyline_<glm::tvec3<float, (glm::precision)0> > > >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
  "_tessGetElements", referenced from:
      ofTessellator::performTessellation(ofPolyWindingMode, ofMesh_<glm::tvec3<float, (glm::precision)0>, glm::tvec3<float, (glm::precision)0>, ofColor_<float>, glm::tvec2<float, (glm::precision)0> >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
      ofTessellator::performTessellation(ofPolyWindingMode, std::__1::vector<ofPolyline_<glm::tvec3<float, (glm::precision)0> >, std::__1::allocator<ofPolyline_<glm::tvec3<float, (glm::precision)0> > > >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
  "_tessDeleteTess", referenced from:
      ofTessellator::~ofTessellator() in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
      ofTessellator::ofTessellator(ofTessellator const&) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
      ofTessellator::operator=(ofTessellator const&) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
  "_curl_easy_getinfo", referenced from:
      ofURLFileLoaderImpl::handleRequest(ofHttpRequest const&) in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "_curl_easy_strerror", referenced from:
      ofURLFileLoaderImpl::handleRequest(ofHttpRequest const&) in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "_curl_easy_cleanup", referenced from:
      ofURLFileLoaderImpl::ofURLFileLoaderImpl() in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "boost::filesystem::path::m_path_iterator_increment(boost::filesystem::path::iterator&)", referenced from:
      boost::filesystem::path::iterator::increment() in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "boost::filesystem::path::begin() const", referenced from:
      ofFilePath::makeRelative(boost::filesystem::path const&, boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "boost::filesystem::path::operator/=(boost::filesystem::path const&)", referenced from:
      boost::filesystem::operator/(boost::filesystem::path const&, boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofUtils.o)
      ofFilePath::makeRelative(boost::filesystem::path const&, boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "_FreeImage_ConvertTo24Bits", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_GetWidth", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_Initialise", referenced from:
      ofInitFreeImage(bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_GetHeight", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "boost::filesystem::detail::create_directories(boost::filesystem::path const&, boost::system::error_code*)", referenced from:
      boost::filesystem::create_directories(boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "_FreeImage_FlipVertical", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_GetPitch", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_GetBits", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_GetFileTypeFromMemory", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, ofBuffer const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_LoadFromMemory", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, ofBuffer const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_GetImageType", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FT_Get_Char_Index", referenced from:
      ofTrueTypeFont::loadGlyph(unsigned int) const in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
      ofTrueTypeFont::getKerning(unsigned int, unsigned int) const in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "_FreeImage_IsTransparent", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_curl_easy_init", referenced from:
      ofURLFileLoaderImpl::ofURLFileLoaderImpl() in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "_FreeImage_ConvertTo32Bits", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_curl_easy_setopt", referenced from:
      ofURLFileLoaderImpl::handleRequest(ofHttpRequest const&) in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "_curl_easy_perform", referenced from:
      ofURLFileLoaderImpl::handleRequest(ofHttpRequest const&) in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "boost::filesystem::detail::canonical(boost::filesystem::path const&, boost::filesystem::path const&, boost::system::error_code*)", referenced from:
      boost::filesystem::canonical(boost::filesystem::path const&, boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofUtils.o)
  "_FreeImage_CloseMemory", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, ofBuffer const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_FIFSupportsReading", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, boost::filesystem::path const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_GetFileType", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, boost::filesystem::path const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_uriFreeUriMembersA", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, boost::filesystem::path const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_Load", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, boost::filesystem::path const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_Unload", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, boost::filesystem::path const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, ofBuffer const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_curl_slist_append", referenced from:
      ofURLFileLoaderImpl::handleRequest(ofHttpRequest const&) in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "_uriUnixFilenameToUriStringA", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, boost::filesystem::path const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FT_Set_Char_Size", referenced from:
      ofTrueTypeFont::load(ofTrueTypeFontSettings const&) in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "_FreeImage_DeInitialise", referenced from:
      ofInitFreeImage(bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FT_Get_Kerning", referenced from:
      ofTrueTypeFont::getKerning(unsigned int, unsigned int) const in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "_tessNewTess", referenced from:
      ofTessellator::init() in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
  "_FreeImage_GetBPP", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "boost::filesystem::path::end() const", referenced from:
      ofFilePath::makeRelative(boost::filesystem::path const&, boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "boost::filesystem::path::root_directory() const", referenced from:
      boost::filesystem::path::has_root_directory() const in libofxiOS_iphoneos_Debug.a(ofUtils.o)
  "_FT_Load_Glyph", referenced from:
      ofTrueTypeFont::loadGlyph(unsigned int) const in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "_tessGetElementCount", referenced from:
      ofTessellator::performTessellation(ofPolyWindingMode, ofMesh_<glm::tvec3<float, (glm::precision)0>, glm::tvec3<float, (glm::precision)0>, ofColor_<float>, glm::tvec2<float, (glm::precision)0> >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
      ofTessellator::performTessellation(ofPolyWindingMode, std::__1::vector<ofPolyline_<glm::tvec3<float, (glm::precision)0> >, std::__1::allocator<ofPolyline_<glm::tvec3<float, (glm::precision)0> > > >&, bool) in libofxiOS_iphoneos_Debug.a(ofTessellator.o)
  "_uriParseUriA", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, boost::filesystem::path const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FreeImage_OpenMemory", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, ofBuffer const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "_FT_New_Face", referenced from:
      loadFontFace(boost::filesystem::path const&, FT_FaceRec_*&, boost::filesystem::path&) in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "_curl_slist_free_all", referenced from:
      ofURLFileLoaderImpl::handleRequest(ofHttpRequest const&) in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "boost::filesystem::detail::status(boost::filesystem::path const&, boost::system::error_code*)", referenced from:
      boost::filesystem::is_directory(boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofUtils.o)
      boost::filesystem::exists(boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
      boost::filesystem::is_regular_file(boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "boost::filesystem::path::compare(boost::filesystem::path const&) const", referenced from:
      boost::filesystem::path::compare(char const*) const in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
      boost::filesystem::operator==(boost::filesystem::path const&, boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "_curl_global_init", referenced from:
      ofURLFileLoaderImpl::ofURLFileLoaderImpl() in libofxiOS_iphoneos_Debug.a(ofURLFileLoader.o)
  "boost::filesystem::absolute(boost::filesystem::path const&, boost::filesystem::path const&)", referenced from:
      of::priv::initutils() in libofxiOS_iphoneos_Debug.a(ofUtils.o)
      ofToDataPath(boost::filesystem::path const&, bool) in libofxiOS_iphoneos_Debug.a(ofUtils.o)
      ofFilePath::getAbsolutePath(boost::filesystem::path const&, bool) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
      ofFilePath::makeRelative(boost::filesystem::path const&, boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "_FreeImage_GetColorType", referenced from:
      void putBmpIntoPixels<unsigned char>(FIBITMAP*, ofPixels_<unsigned char>&, bool) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "boost::filesystem::detail::current_path(boost::system::error_code*)", referenced from:
      boost::filesystem::current_path() in libofxiOS_iphoneos_Debug.a(ofUtils.o)
  "_FT_Done_Face", referenced from:
      ofTrueTypeFont::load(ofTrueTypeFontSettings const&) in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "boost::filesystem::path::parent_path() const", referenced from:
      ofFilePath::getEnclosingDirectory(boost::filesystem::path const&, bool) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "boost::system::system_category()", referenced from:
      ___cxx_global_var_init.2 in MyAppViewController.o
      ___cxx_global_var_init.2 in main.o
      ___cxx_global_var_init.2 in ofApp.o
      ___cxx_global_var_init.2 in ARAnchorManager.o
      ___cxx_global_var_init.2 in Camera.o
      ___cxx_global_var_init.2 in ARProcessor.o
      ___cxx_global_var_init.2 in ARCam.o
      ...
  "_FT_Init_FreeType", referenced from:
      ofTrueTypeFont::initLibraries() in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "_FT_Outline_Decompose", referenced from:
      makeContoursForCharacter(FT_FaceRec_*) in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "boost::filesystem::detail::create_directory(boost::filesystem::path const&, boost::system::error_code*)", referenced from:
      boost::filesystem::create_directory(boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "boost::filesystem::path::operator/=(char const*)", referenced from:
      ofFilePath::makeRelative(boost::filesystem::path const&, boost::filesystem::path const&) in libofxiOS_iphoneos_Debug.a(ofFileUtils.o)
  "_FT_Render_Glyph", referenced from:
      ofTrueTypeFont::loadGlyph(unsigned int) const in libofxiOS_iphoneos_Debug.a(ofTrueTypeFont.o)
  "_FreeImage_GetFIFFromFilename", referenced from:
      bool loadImage<unsigned char>(ofPixels_<unsigned char>&, boost::filesystem::path const&, ofImageLoadSettings const&) in libofxiOS_iphoneos_Debug.a(ofImage.o)
  "boost::system::generic_category()", referenced from:
      ___cxx_global_var_init in MyAppViewController.o
      ___cxx_global_var_init.1 in MyAppViewController.o
      ___cxx_global_var_init in main.o
      ___cxx_global_var_init.1 in main.o
      ___cxx_global_var_init in ofApp.o
      ___cxx_global_var_init.1 in ofApp.o
      ___cxx_global_var_init in ARAnchorManager.o
      ...
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Maybe the problem is ARKit itself, I don't know; I was just wondering whether there's some obvious fix to this problem.
Thanks

Face ID? Yay? Nay?

To anyone that might be paying attention :

Now that iPhone X is out (or sort of getting to folks) - is using the front-facing camera of interest to anyone? I won't be getting one (I just can't fathom paying 1k for a phone), but the adjustments to accommodate Face ID seem pretty straightforward, so I could probably add that in with little trouble (PRs are great too!).

If no one responds I'll just assume it's not important! 😛

Something wrong with updatePlanes

Hey I just pulled from master and noticed that a few seconds after running my app it begins to crash. I did a stack trace and it seems like the error is coming from pushing new planes into the planes vector. Logging the size of that vector shows that it quickly reaches into the hundreds of thousands or millions...which would explain the crash.

I did some poking around but I couldn't seem to pinpoint the specific commit where this changed. I switched back over to develop and it doesn't happen there, so maybe whatever changed between the two branches will fix it.

[screenshots: stack trace and plane vector size logging, 2017-09-13]

ARKit 1.5 new features

Looks like ARKit is getting some nice updates! Supposedly it can now do image tracking (a la Vuforia) as well as vertical / wall detection. It seems that they are also upping the allowed camera resolution as well as unlocking an autofocus feature (though I can't find docs for this...).

Just opening this issue here as a todo / place for discussion about implementing these features. It's still just in developer beta for now, but it should supposedly be coming with iOS 11.3 in the spring.

iPad Pro - Camera feed is rotated 180 deg.

Develop branch, commit 922a2a4

In all of the examples, when running on one of the new iPad Pros, the ARKit pieces function properly, i.e. snapping a photo in the photocapture example leaves a plane that appears to stay in the same location when you move away; however, the entire camera feed is rotated 180 degrees. If I rotate the iPad 180 degrees and lock the orientation in the control panel, the camera feed is rotated properly, but then none of the AR overlays are visible (it seems like they're rendering off screen).

In example-photocapture/src/ofApp.mm changing line 70 from
ofRotate(90,0,0,1);
to
ofRotate(270,0,0,1);
fixes the orientation of the photo captures on the textures when the iPad is locked to default orientation, but I still can't seem to use an ofRotate before processor->drawFrame() to make the live feed of the camera rotate as expected.

Mystery Framerate drop in newer version

I'm not sure at what point this happened, but somewhere along the way I noticed the addon started getting a worse framerate. You can test this by building the most recent version, which on an iPhone SE gave me around 40 fps, and then building an older one (I tried 2b98adc), where you'll get a smooth 60 fps.

Will try and do some more investigating, but wanted to see if anyone else had noticed this or if I'm just imagining things...

z-fighting on depth test

I don't know if this is specifically an ofxARKit thing or an ofxiOS thing, but I get a lot of z-fighting on overlapped graphics as I move away from them (it's much better when I move closer). I tried switching up the renderer, as I had to switch to ES1 for this example since I was reading the screen back to textures and I couldn't get that to work on ES2:

https://twitter.com/zachlieberman/status/897945041573879808

without much luck in seeing z-depth fighting improve.

As I was making this issue, I just noticed the near and far values here:

https://github.com/sortofsleepy/ofxARKit/blob/develop/src/ARProcessor.h#L113

and will experiment to see if there are ways to improve this by adjusting these values, etc. (I am using older code which is closer to the example I posted in a previous issue).

Implement Object Detection and Tracking

Implement functions to detect and track an object based on a previously made reference scan.

First the reference object must be loaded as an ARReferenceObject. Supposedly you can pass a number of these to the ARSession's configuration via a detectionObjects property.

The reference objects need to be added to your asset catalog before they can be loaded. There are two loader functions: one loads from a URL and one loads all objects that are in the app bundle.

If ARKit recognizes an object in the scene it will add a new type of anchor called an ARObjectAnchor. The anchors have their own transform and other properties that you can inspect.
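A sketch of that loading / detection flow in Objective-C++ (assumes the scans were added to an asset catalog group named "Scans"; the group name is an assumption):

NSSet<ARReferenceObject *> *objects =
    [ARReferenceObject referenceObjectsInGroupNamed:@"Scans" bundle:nil];

ARWorldTrackingConfiguration *config = [ARWorldTrackingConfiguration new];
config.detectionObjects = objects;
[session runWithConfiguration:config];

// when an object is recognized, ARKit adds an ARObjectAnchor whose transform
// can be converted and drawn like any other anchor
for (ARAnchor *anchor in session.currentFrame.anchors) {
    if ([anchor isKindOfClass:[ARObjectAnchor class]]) {
        ofMatrix4x4 mat = ARCommon::toMat4(anchor.transform);
        // draw something at mat ...
    }
}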

It's a little unclear to me how to display the 3d scanned object, if you even can. I did see that the ARReferenceObjects have a rawFeaturePoints property, so you can at least get at the point cloud data.

I think it would also be really good if we could provide an example of loading a reference object file into an app and drawing something at its transform. It'd be great if we could create a scan of something common that lots of people might just have lying around (a banana? a can of soda?).

getting crashes in ARProcessor::addAnchor

I was trying to add anchors but having issues with crashes in ARProcessor::addAnchor.

also, if it's helpful, I made a small test program using your code (and whatever I could get to work) to see something move --

https://twitter.com/zachlieberman/status/895404101936308224
https://dl.dropboxusercontent.com/u/92337283/misc/arkit-tests_doingSomething.zip

It's pretty messy, but there are some functions you might find useful, like getting the modelview and projection matrices out and converting them to OF-style matrices, etc. I'll try to clean it up shortly...

Example ideas

It's just a personal preference but I am not super comfortable with the photo capture example being part of the examples of this addon. I think examples should help people understand the API or what's possible but not necessarily help copy an aesthetic style. It's just a preference -- it's your addon and I'm super appreciative of your work on this.
