
lumina's Introduction



Would you like to use a fully-functional camera in an iOS application in seconds? Would you like to do CoreML image recognition in just a few more seconds on the same camera? Lumina is here to help.

Cameras are used frequently in iOS applications, and the addition of CoreML and Vision to iOS 11 has precipitated a rash of applications that perform live object recognition, whether on a still image or on a live camera feed.

Writing AVFoundation code can be fun, if at times tedious. Lumina lets you skip writing AVFoundation code yourself and gives you the tools you need to do anything you want with a camera that has already been built for you.

Lumina can:

  • capture still images
  • capture videos
  • capture live photos
  • capture depth data for still images from dual camera systems
  • stream video frames to a delegate
  • scan any QR or barcode and output its metadata
  • detect the presence of a face and its location
  • use any CoreML compatible model to stream object predictions from the camera feed

Table of Contents

Requirements

  • Xcode 12.0+
  • iOS 13.0+
  • Swift 5.2+

Background

David Okun has experience working with image processing, and he thought it would be nice to have a camera module that lets you stream images, capture photos and videos, and plug in a CoreML model so that object predictions are streamed back to you alongside the video frames.

Contribute

See the contribute file!

PRs accepted.

Small note: If editing the README, please conform to the standard-readme specification.

Install

Lumina fully supports Swift Package Manager. You can add the repo URL either in your Xcode project or under dependencies in your Package.swift file.
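If you manage dependencies through Package.swift directly, the entry looks roughly like the sketch below; the repository URL, version, and target names are assumptions to adjust for your project.

// swift-tools-version:5.2
import PackageDescription

// Hypothetical manifest: the URL, version, and target names are placeholders.
let package = Package(
    name: "MyCameraApp",
    platforms: [.iOS(.v13)],
    dependencies: [
        .package(url: "https://github.com/dokun1/Lumina.git", from: "1.0.0")
    ],
    targets: [
        .target(name: "MyCameraApp", dependencies: ["Lumina"])
    ]
)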

Usage

NB: This repository contains a sample application. This application is designed to demonstrate the entire feature set of the library. We recommend trying this application out.

Initialization

Consider that the main use of Lumina is to present a ViewController. Here is an example of what to add inside a boilerplate ViewController:

import Lumina

We recommend creating a single instance of the camera in your ViewController as early in your lifecycle as possible with:

let camera = LuminaViewController()

Presenting Lumina goes like so:

present(camera, animated: true, completion:nil)

Remember to add a description for Privacy - Camera Usage Description and Privacy - Microphone Usage Description in your Info.plist file, so that system permissions are handled properly.
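Putting those pieces together, a minimal hosting controller might look like the sketch below (the class name is hypothetical):

import UIKit
import Lumina

final class CameraHostViewController: UIViewController {
    // Create the camera once, as early in the lifecycle as possible.
    let camera = LuminaViewController()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Requires the camera (and, for video, microphone) usage descriptions in Info.plist.
        if presentedViewController == nil {
            present(camera, animated: true, completion: nil)
        }
    }
}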

Logging

Lumina allows you to set a level of logging for actions happening within the module. The logger in use is swift-log, made by the Swift Server Working Group team. The deeper your level of logging, the more you'll see in your console.

NB: While Lumina is licensed under the MIT license, swift-log is licensed under Apache 2.0. A copy of that license is also included in the source code.

To set a level of logging, set the static var on LuminaViewController like so:

LuminaViewController.loggingLevel = .notice

Levels read like so, from least to most logging:

  • CRITICAL
  • ERROR
  • WARNING
  • NOTICE
  • INFO
  • DEBUG
  • TRACE

Functionality

There are a number of properties you can configure on Lumina. You can set them before presentation, or while the camera is in use, like so:

camera.position = .front // could also be .back
camera.recordsVideo = true // if this is set, streamFrames and streamingModel are invalid
camera.streamFrames = true // could also be false
camera.textPrompt = "This is how to test the text prompt view" // assigning an empty string will make the view fade away
camera.trackMetadata = true // could also be false
camera.resolution = .highest // follows an enum
camera.captureLivePhotos = true // for this to work, .resolution must be set to .photo
camera.captureDepthData = true // for this to work, .resolution must be set to .photo, .medium1280x720, or .vga640x480
camera.streamDepthData = true // for this to work, .resolution must be set to .photo, .medium1280x720, or .vga640x480
camera.frameRate = 60 // can be any number, defaults to 30 if selection cannot be loaded
camera.maxZoomScale = 5.0 // not setting this defaults to the highest zoom scale for any given camera device

Object Recognition

NB: This only works for iOS 11.0 and up.

You must have one or more CoreML compatible models to try this. Ensure that you drag the model file(s) into your project and add them to your current application target.

The sample in this repository comes with the MobileNet and SqueezeNet image recognition models, but again, any CoreML compatible model will work with this framework. Assign your model(s) to the framework using the convenience class LuminaModel, like so:

camera.streamingModels = [LuminaModel(model: MobileNet().model, type: "MobileNet"), LuminaModel(model: SqueezeNet().model, type: "SqueezeNet")]

You are now set up to perform live video object recognition.

Handling output

To handle any output, such as still images, video frames, or scanned metadata, you will need to make your controller adhere to LuminaDelegate and assign it like so:

camera.delegate = self

Because the functionality of the camera can be updated at runtime, all delegate functions are required.
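A minimal conformance sketch, built only from the delegate signatures documented in the rest of this section, could look like this (empty bodies are fine while you get started; the class name matches the hypothetical host controller above):

extension CameraHostViewController: LuminaDelegate {
    func dismissed(controller: LuminaViewController) {
        controller.dismiss(animated: true, completion: nil)
    }

    func captured(stillImage: UIImage, livePhotoAt: URL?, depthData: Any?, from controller: LuminaViewController) { }
    func captured(videoAt: URL, from controller: LuminaViewController) { }
    func streamed(videoFrame: UIImage, from controller: LuminaViewController) { }
    func streamed(videoFrame: UIImage, with predictions: [LuminaRecognitionResult]?, from controller: LuminaViewController) { }
    func streamed(depthData: Any, from controller: LuminaViewController) { }
    func detected(metadata: [Any], from controller: LuminaViewController) { }
    func tapped(from controller: LuminaViewController, at: CGPoint) { }
}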

To handle the Cancel button being pushed, which is likely used to dismiss the camera in most use cases, implement:

func dismissed(controller: LuminaViewController) {
    // here you can call controller.dismiss(animated: true, completion:nil)
}

To handle a still image being captured with the photo shutter button, implement:

func captured(stillImage: UIImage, livePhotoAt: URL?, depthData: Any?, from controller: LuminaViewController) {
    controller.dismiss(animated: true) {
        // still images always come back through this function, but live photos and depth data are returned here as well for a given still image
        // depth data must be manually cast to AVDepthData, as AVDepthData is only available in iOS 11.0 or higher
    }
}

To handle a video being captured with the photo shutter button being held down, implement:

func captured(videoAt: URL, from controller: LuminaViewController) {
    // here you can load the video file from the URL, which is located in NSTemporaryDirectory()
}

NB: It's important to note that, if you are in video recording mode with Lumina, streaming frames is not possible. In order to enable frame streaming, you must set .recordsVideo to false and .streamFrames to true.

To handle a video frame being streamed from the camera, implement:

func streamed(videoFrame: UIImage, from controller: LuminaViewController) {
    // here you can take the image called videoFrame and handle it however you'd like
}

To handle depth data being streamed from the camera on iOS 11.0 or higher, implement:

func streamed(depthData: Any, from controller: LuminaViewController) {
    // here you can take the depth data and handle it however you'd like
    // NB: you must cast the object to AVDepthData manually. It is returned as Any to maintain backwards compatibility with iOS 10.0
}

To handle metadata being detected and streamed from the camera, implement:

func detected(metadata: [Any], from controller: LuminaViewController) {
    // here you can take the metadata and handle it however you'd like
    // you must find the right kind of data to downcast to, whether it came from a barcode, a QR code, or face detection
}

To handle the user tapping the screen (outside of a button), implement:

func tapped(from controller: LuminaViewController, at: CGPoint) {
    // here you can take the position of the tap and handle it however you'd like
    // the default behavior for a tap is to focus on the tapped point
}

To handle a CoreML model and its predictions being streamed with each video frame, implement:

func streamed(videoFrame: UIImage, with predictions: [LuminaRecognitionResult]?, from controller: LuminaViewController) {
  guard let predicted = predictions else {
    return
  }
  var resultString = String()
  for prediction in predicted {
    guard let values = prediction.predictions else {
      continue
    }
    guard let bestPrediction = values.first else {
      continue
    }
    resultString.append("\(String(describing: prediction.type)): \(bestPrediction.name)" + "\r\n")
  }
  controller.textPrompt = resultString
}

Note that this returns a class type representation associated with the detected results. The example above also makes use of the built-in text prompt mechanism for Lumina.

Changing the user interface

To adapt the user interface to your needs, you can set the visibility of the buttons by calling these methods on LuminaViewController:

camera.setCancelButton(visible: Bool)
camera.setShutterButton(visible: Bool)
camera.setSwitchButton(visible: Bool)
camera.setTorchButton(visible: Bool)

By default, all of the buttons are visible.
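For example, an embedded barcode scanner might keep only the cancel button visible; this is just a sketch of one possible configuration:

camera.setShutterButton(visible: false)
camera.setSwitchButton(visible: false)
camera.setTorchButton(visible: false)
camera.setCancelButton(visible: true) // visible by default; shown here for clarity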

Adding your own controls outside the camera view

For some UI designs, apps may want to embed LuminaViewController within a custom view controller, adding controls adjacent to the camera view rather than putting all of the controls inside the camera view.

Here is a code snippet that demonstrates adding a torch button and controlling the camera zoom level via the externally accessible API:

class MyCustomViewController: UIViewController {
    @IBOutlet weak var flashButton: UIButton!
    @IBOutlet weak var zoomButton: UIButton!
    var luminaVC: LuminaViewController? //set in prepare(for segue:) via the embed segue in the storyboard
    var flashState = false
    var zoomLevel:Float = 2.0
    let flashOnImage = UIImage(named: "Flash_On") // assumes an image with this name is in your asset catalog
    let flashOffImage = UIImage(named: "Flash_Off") // assumes an image with this name is in your asset catalog

    override public func viewDidLoad() {
        super.viewDidLoad()

        luminaVC?.delegate = self
        luminaVC?.trackMetadata = true
        luminaVC?.position = .back
        luminaVC?.setTorchButton(visible: false)
        luminaVC?.setCancelButton(visible: false)
        luminaVC?.setSwitchButton(visible: false)
        luminaVC?.setShutterButton(visible: false)
        luminaVC?.camera?.torchState = flashState ? .on(intensity: 1.0) : .off
        luminaVC?.currentZoomScale = zoomLevel
    }

    override public func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if segue.identifier == "Lumina" { // name this segue in the storyboard
            self.luminaVC = segue.destination as? LuminaViewController
        }
    }

    @IBAction func flashTapped(_ sender: Any) {
        flashState = !flashState
        luminaVC?.camera?.torchState = flashState ? .on(intensity: 1.0) : .off
        let image = flashState ? flashOnImage : flashOffImage
        flashButton.setImage(image, for: .normal)
    }

    @IBAction func zoomTapped(_ sender: Any) {
        if zoomLevel == 1.0 {
            zoomLevel = 2.0
            zoomButton.setTitle("2x", for: .normal)
        } else {
            zoomLevel = 1.0
            zoomButton.setTitle("1x", for: .normal)
        }
        luminaVC?.currentZoomScale = zoomLevel
    }
}
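The snippet above assigns luminaVC?.delegate = self, so the hosting controller also needs to adopt LuminaDelegate. A sketch of that conformance, focused on a barcode-scanning use case, could look like the following (add the remaining delegate methods as shown earlier):

extension MyCustomViewController: LuminaDelegate {
    func detected(metadata: [Any], from controller: LuminaViewController) {
        // inspect the metadata array for the barcode payloads you care about
        print("detected metadata: \(metadata)")
    }

    func dismissed(controller: LuminaViewController) {
        controller.dismiss(animated: true, completion: nil)
    }
}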

Maintainers

  • David Okun
  • Richard Littauer
  • Daniel Conde
  • Zach Falgout
  • Gerriet Backer
  • Greg Heo

License

MIT © 2019 David Okun


lumina's Issues

allow public accessors to torch state and zoom level (get and set)

Is your feature request related to a problem? Please describe.
For our app, we would like to run Lumina as an embedded view controller within a custom UIVC and control its torch and zoom level via our own external UI buttons or similar.

Describe the solution you'd like
In addition to the LuminaViewController internal buttons, we would like to be able to set/get the LuminaCamera torch state and current zoom level programmatically.

Describe alternatives you've considered
We considered using the existing LuminaViewController buttons but we're embedding the Lumina VC in a small space and the internal buttons are occluding the camera view too much (we are scanning barcodes with it).

Additional context
We are using Lumina for this app mainly to scan barcodes, within a custom UIViewController that presents a few other custom controls for related things.

Would making the LuminaCamera public be appropriate, or would adding additional set/get method to LuminaViewController be more acceptable? I'd be happy to fire a PR your way.

iPhone X support

Please adjust the UI buttons to respect the safe area margins on iPhone X.

Thanks

Show visual feedback for detected metadata

This was done in a much earlier release of Lumina, but instead of just drawing a rectangle around it, we should do what the Amazon app does and draw a bunch of indiscriminate random points inside the rect to show that something is detected.

Flash

Describe the bug
Tapping the torch button causes the app to crash with the error: flashMode must be set to a value present in the supportedFlashModes array.

To Reproduce
Steps to reproduce the behavior:

  1. Show LuminaController (with torch not hidden)
  2. Tap torch button (either turning it on or using the auto)
  3. Tap shutter button (the app will crash)

Expected behavior
It should take a picture with the flash on.

Smartphone (please complete the following information):
iPad 2017 (12.1)

AVCaptureOutputDataSynchronizer

There is a way to synchronize the return of data from multiple AVCaptureOutput objects with this object. My opinion is that Lumina should have an option to enable synchronized return via this object, or to let outputs stream asynchronously on their own.

Remove Background

Can we have a feature that removes the background around the detected image or images?

Use Vision for FaceDetection if iOS 11.0+

The previous approach, using CIFaceDetector, is outdated, and this will pave the way for more functionality in the future.

Should probably also add:

  • optional rect drawing
  • returning coordinates of bounding rect
  • return more details from detection

Depth data not delivered to captured delegate

In the current release (1.2.1) and also in the actual version of the repository, it is not possible to capture depth images in the captured delegate.

Expected Behavior

If camera.captureDepthData = true is set, the depth data passed to the captured delegate should not be nil:

if let data = depthData as? AVDepthData {
	guard let depthImage = data.depthDataMap.normalizedImage(with: controller.position) else {
	    print("could not convert depth data")
	    return
	}
}

Current Behavior

The depth data is never set (the Optional<Any> is always nil).

Steps to Reproduce (for bugs)

Check out the ViewController.swift of my test project.

Your Environment

Tested on iPhone X.

Persistent Memory Leaks

Something is leaking in AVCaptureMetadataOutput, and it's been a minute since I tackled memory leaks fully on iOS.

To Reproduce:

  • present LuminaViewController
  • dismiss it
  • present another one

I'm planning to dive into this tomorrow (Thursday the 19th) but if anyone wants to take a look...

Stream AVCaptureDepthDataOutput

Just like other outputs, AVDepthData should be streamed from the camera if there is an option enabled for it, and other settings for it meet the requirements. This should come from a separate delegate

DepthData for iPhone X selfie camera

Whenever the AVCaptureDevice available is of type .builtInTrueDepthCamera, and depth data capture is enabled, it should use this camera. This is for, currently, the use case of the iPhone X selfie camera.

AVCaptureDevice setActiveColorSpace

Describe the bug
My app crashes after calling self.present(camera, animated: true, completion: nil) twice.

To Reproduce
Steps to reproduce the behavior:
Call self.present(camera, animated: true, completion: nil) twice and the app will crash.

Expected behavior
Should not crash when re-opening the camera.


Smartphone (please complete the following information):

  • Device: iPad Pro 11inch
  • OS: 12.3


`streamed` not being called


To Reproduce
Check the console. I never get the "streamed" message, and I never get any text on the screen (unless I uncomment the "testing" one).

import UIKit
import AVFoundation
import Lumina

class MainVC: UIViewController {
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        
        LuminaViewController.loggingLevel = .verbose
        
        let camera = LuminaViewController()
        camera.delegate = self
        
        if #available(iOS 11.0, *) {
            camera.streamFrames = true
            camera.textPrompt = ""
            camera.trackMetadata = true
            camera.streamingModels = [MobileNet(), Inceptionv3()]
        } else {
            print("Warning: this iOS version doesn't support CoreML")
        }
        
        // camera.textPrompt = "testing"
        present(camera, animated: true, completion: nil)
    }
}

extension MainVC: LuminaDelegate {
    func dismissed(controller: LuminaViewController) {
        controller.dismiss(animated: true, completion: nil)
    }
    
    func streamed(videoFrame: UIImage, with predictions: [LuminaRecognitionResult]?, from controller: LuminaViewController) {
        print("streamed")
        if #available(iOS 11.0, *) {
            guard let predicted = predictions else {
                return
            }
            var resultString = String()
            for prediction in predicted {
                guard let values = prediction.predictions else {
                    continue
                }
                guard let bestPrediction = values.first else {
                    continue
                }
                resultString.append("\(String(describing: prediction.type)): \(bestPrediction.name)" + "\r\n")
            }
            controller.textPrompt = resultString
        } else {
            print("Warning: this iOS version doesn't support CoreML")
        }
    }
}

Expected behavior
I followed the YouTube video. It should show predictions on screen. Currently it shows no text and doesn't log to the console that it even hit that function. dismissed works, though.


Smartphone (please complete the following information):

  • Device: iPhone 6S
  • OS: iOS11.4
  • Version: 1.3.1 (with the edits from my PR)


Updated UI

The UI needs an update. Given all the options available in Lumina, there should be a way to change all of these options from the UI of the camera. Perhaps a way to opt into a "Simple UI" if you just want to:

  • switch cameras
  • toggle flash
  • dismiss the camera
  • capture a photo

Choosing aspect ratio

I want to be able to choose the aspect ratio for photo and video, and not have the camera use the wrong one and then just crop it.

Just a setting for the formats available for photo and for video.

camera does not return to continuous autofocus after tap to focus

Describe the bug
The camera does not return to continuous autofocus after tap to focus. In the FocusHandlerExtension, inside func resetCameraToContinuousExposureAndFocus(), the code checks whether continuous autofocus is supported, then locks the configuration, sets the focus mode to autofocus (not continuous), and unlocks. Looks like a simple typo.

To Reproduce
Steps to reproduce the behavior:
Tap to autofocus on a point of interest, wait one second.

Expected behavior
It should return to continuous autofocus mode after one second


Smartphone (please complete the following information):
iPhone X, iOS 13.1
iPhone Xs Max, iOS 12.2

Additional context
I will submit a PR with the one-line fix.
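For reference, here is a hedged sketch of what the one-line fix presumably looks like; the free function and its device parameter are illustrative, while the real code lives in Lumina's focus-handling extension:

import AVFoundation

// Hypothetical sketch: after the tap-to-focus delay, the device should go back to
// continuous autofocus rather than single-shot autofocus.
func resetToContinuousExposureAndFocus(on device: AVCaptureDevice) {
    guard device.isFocusModeSupported(.continuousAutoFocus),
          device.isExposureModeSupported(.continuousAutoExposure) else {
        return
    }
    do {
        try device.lockForConfiguration()
        device.focusMode = .continuousAutoFocus      // the reported typo set .autoFocus here
        device.exposureMode = .continuousAutoExposure
        device.unlockForConfiguration()
    } catch {
        print("could not lock device for configuration: \(error)")
    }
}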

Depth image normalization seems to lose details

I am currently trying to capture both the color image and the depth image. This is working, but the depth image looks really bad. The nearer areas are all blown out, and sometimes the whole image is white.

The depth stream, on the other hand, looks the way the depth image should look. I use the same method to convert the CVPixelBuffer into a grayscale image (the one you are using in the example).

Am I missing something, or why are these depth images not normalized the same way? I also tried to capture a depth image with the Halide app, and it seems much better in depth resolution than the ones I can capture or stream with Lumina. Are there other ways to capture depth images or normalize them?

Here is the code I use to normalize the depth images:

func captured(stillImage: UIImage, livePhotoAt: URL?, depthData: Any?, from controller: LuminaViewController) {
	// save color image
	CustomPhotoAlbum.shared.save(image: stillImage)

	// save depth image if possible
	print("trying to save depth image")
	if #available(iOS 11.0, *) {
	    if var data = depthData as? AVDepthData {
	        
	        // be sure its DisparityFloat32
	        if data.depthDataType != kCVPixelFormatType_DisparityFloat32 {
	            data = data.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
	        }

	        guard let depthImage = data.depthDataMap.normalizedImage(with: controller.position) else {
	            print("could not convert depth data")
	            return
	        }

	        ...
extension CVPixelBuffer {
    func normalizedImage(with position: CameraPosition) -> UIImage? {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(self), height: CVPixelBufferGetHeight(self))) {
            return UIImage(cgImage: cgImage , scale: 1.0, orientation: getImageOrientation(with: position))
        } else {
            return nil
        }
    }
    
    private func getImageOrientation(with position: CameraPosition) -> UIImageOrientation {
        switch UIApplication.shared.statusBarOrientation {
        case .landscapeLeft:
            return position == .back ? .down : .upMirrored
        case .landscapeRight:
            return position == .back ? .up : .downMirrored
        case .portraitUpsideDown:
            return position == .back ? .left : .rightMirrored
        case .portrait:
            return position == .back ? .right : .leftMirrored
        case .unknown:
            return position == .back ? .right : .leftMirrored
        }
    }
}
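One way to get a more usable result is to normalize the disparity values against the buffer's actual minimum and maximum before rendering. The sketch below is illustrative only, assuming a 32-bit disparity buffer; the helper name and the manual pixel loop are not Lumina API:

import AVFoundation
import CoreImage
import UIKit

// Hypothetical helper: rescales a DisparityFloat32 buffer to 0...1 before rendering it.
func normalizedDepthImage(from depthData: AVDepthData) -> UIImage? {
    var data = depthData
    if data.depthDataType != kCVPixelFormatType_DisparityFloat32 {
        data = data.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    }
    let buffer = data.depthDataMap

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }

    // Find the real min/max disparity so the full dynamic range maps to 0...1.
    var minValue = Float.greatestFiniteMagnitude
    var maxValue = -Float.greatestFiniteMagnitude
    for row in 0..<height {
        let rowPointer = (base + row * rowBytes).assumingMemoryBound(to: Float32.self)
        for column in 0..<width where rowPointer[column].isFinite {
            minValue = min(minValue, rowPointer[column])
            maxValue = max(maxValue, rowPointer[column])
        }
    }
    guard maxValue > minValue else { return nil }

    // Rescale in place, then render the buffer as a grayscale image.
    for row in 0..<height {
        let rowPointer = (base + row * rowBytes).assumingMemoryBound(to: Float32.self)
        for column in 0..<width {
            let value = rowPointer[column]
            rowPointer[column] = value.isFinite ? (value - minValue) / (maxValue - minValue) : 0
        }
    }

    let ciImage = CIImage(cvPixelBuffer: buffer)
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}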

Here are the three images:

Color Image

img_3517

Depth Image from the captured method

img_3518

Depth image from the streamed method

img_3519

Haptic feedback for video recording not working

When trying to record video, if enabled, the button should give haptic feedback if held down. This does not work.

However, if the button is held down and released while video recording mode is enabled, the error feedback works.

The feedback generators can be accessed from LuminaDeviceUtil.swift

Configurable exposure

Much like the iOS camera, there should be a way to change the lens exposure from the UI of the camera.

.xcworkspace doesn't work on download

When you clone the project, the workspace doesn't compile the sample app out of the box. In order to get this working on your machine, you currently need to:

  1. Clone the project.
  2. Open the .xcworkspace file.
  3. Delete the red-text framework file reference in the sample app folder "Frameworks".
  4. Build the framework scheme.
  5. Drag Lumina.framework from the built products directory in your framework project file to the sample project file.
  6. Go to the LuminaSample project file, and add Lumina.framework into Embedded Binaries.
  7. Run the LuminaSample app.

This works, and is normal to an extent for working with these kinds of projects, but it's not a happy path for those who want to try the framework for the first time.

It would be good if the .xcworkspace file worked out of the box when the project is cloned. I have experimented with two things:

  1. Changing the BUILT_PRODUCTS_DIR to something static that the sample project will recognize.
  2. Hardcoding the path to DerivedData where the built framework is placed.

Neither of these worked, or rather I couldn't figure them out.

Help?

Pinch to Zoom

You should be able to make a pinching gesture to zoom in on the viewfinder at any time.

Custom video resolution

The camera currently defaults to 1080p, or the highest resolution possible for the device it's being used on. The developer should be able to specify a resolution, based on what is available in AVFoundation, and the camera should attempt to load this resolution, or fall back to the highest possible in case of failure.

Add X CoreML Models instead of just one CoreML Model

It's possible that you might want to process frames from the camera through multiple CoreML models. Currently you use a model by doing something like:

camera.streamingModel = MobileNet().model

Instead, you should be able to do something like:

camera.streamingModels: [MLModel] = [MobileNet().model, InceptionV3().model]

And then the delegate function would look something like:

func streamed(videoFrame: UIImage, with predictions: [MLModel : [LuminaPrediction]]?, from controller: LuminaViewController)

But the map of returned predictions would only include the class type instead of an instance. This might sound ignorant of me, but is there a way to return a class type to parse by instead of the instance?

Implement Danger

In lieu of decent unit tests for the time being, it would be good to have some help reviewing code. Danger can do that. We should implement CI that uses this.

Custom icons

Is your feature request related to a problem? Please describe.
Would it be possible to change the icons?

Describe the solution you'd like
A way to provide custom icons for the capture/cancel/switch/torch icons. Either using swifticonfont? Or pure assets



ARKit Face Recognition Integration

With the iPhone X, you can capture a very granular 3D map of someone's face from the selfie camera. It should be possible to stream this map in real time from the camera if a setting for this is on.

Future plans?

and (eventually) have a module that lets you plug in a CoreML model, and it streams the object predictions back to you alongside the video frames.

What's next?

Improve Documentation

The public API needs code comments, and a set of docs (like jazzy) should be generated somewhere for consumption.

Crash when camera usage description is not set.

This crash will occur if you don't set the camera usage description in your Info.plist.

This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSCameraUsageDescription key with a string value explaining to the user how the app uses this data.

Fix this by adding an NSCameraUsageDescription key to your Info.plist.

Custom frame rate

The camera currently operates at 30 FPS. To prepare for eventual video recording capabilities, the live preview should be variable to any FPS, so long as the set resolution can handle it as well.

Custom CoreML Model

This is aimed at making working with CoreML easier, and the plan is this:

Make an open stored property for a CoreML model. If this is specified, the developer also has to specify the size of the CVPixelBuffer that it will take in, unless this can somehow be read internally.

If the developer inputs a model, the video frames should stream back through the delegate, but with a set of predictions returned with each frame.

Ideally, the developer experience should be as simple as:

camera.mlModel = MLModel()

And Lumina should do the rest.

iPhone X support

I'm sorry if this isn't a problem for this library, but I honestly don't know where else to turn. I am having a major issue with my custom camera: the zoom when using any of the iPhone X models is way too close. I used your library and noticed that your camera doesn't have this same effect, and I was wondering if anyone here could possibly help me.

I have linked the Stack Overflow question to make it easier to read and so I don't have to type everything all over again:

https://stackoverflow.com/questions/54353274/camera-zoom-issue-on-iphone-x-iphone-xs-etc

Choose ordered sequence of CoreML models

We need to add a mechanism for adding models in a specific order. Using VNSequenceHandler or something like that, we can pass the observation from one model to another one, and that way, we should be able to handle a chain of events to get a "final" observation.

If we really want to get meta about it, should we be able to choose X number of sequences given loaded models? This is going to require some thought.

Camera Stats On Screen

There should be an elegant way to view all relevant information to the camera on the screen, such as resolution, FPS, etc. Additionally, this should involve a selective UI so that the user of Lumina can make changes to these values at any time.

could not install Lumina

Dear Friend,

I followed your instructions and added Lumina to my Podfile, then ran update and install. When I opened Xcode after the build, I got a build failure with 46 errors. I am using a deployment target of 10.3 (I would like to use 11.0) and Xcode 9.1.

What am I doing wrong? :(
