Deep Learning Framework for Apple's tvOS, iOS and OS X
0. DeepLearningKit Publication
@misc{2015DeepLearningKit,
  author = {Amund Tveit and Torbjørn Morland and Thomas Brox Røst},
  title = {DeepLearningKit - an Open Source Deep Learning Framework for
           Apple's iOS, OS X and tvOS developed in Metal and Swift},
  url = {https://arxiv.org/abs/1605.04614},
  howpublished = {Online}
}
1. DeepLearningKit Video Tutorials
1.1 How to Get Started with Deep Learning Kit for iOS (e.g. iPhone or iPad)
1.2 How to Get Started with Deep Learning Kit for OS X (e.g. Macbook or iMac)
1.3 How to Get Started with Deep Learning Kit for tvOS (new Apple TV)
Hello,
I am a newbie in iOS and deep learning. I have gone through the basics of deep learning and neural networks, but the documentation doesn't explain how to use this library.
I have a model that was trained using the Caffe framework. How can I integrate it with DeepLearningKit?
Is any detailed documentation available online?
P.S. I ran the DeepLearningiOSDemo app, and it works perfectly on my iPhone.
I've been looking at a lot of MIT-licensed Swift AI and deep learning efforts, and they are more than sufficient to do good work.
This raises the question: why the Apache License? Why not the MIT license?
If I build an app that uses the Google Cloud Platform and DeepLearningKit, am I under greater obligations than with the MIT license, such as restrictions on selling the application or other requirements?
I am trying to convert a trained caffemodel of size 33 MB to JSON for DeepLearningKit using caffemodel2json.py. Surprisingly, the JSON output file is about 10x the size; in my case it comes to 330 MB. Shipping that big a file in an iOS app will not be a feasible option. Could you please guide me on how to proceed? Is there a way to reduce the file size by configuring the script?
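A common cause of the blow-up is that each 32-bit weight is serialized as a long decimal string. As one possible workaround (not a feature of caffemodel2json.py; the helper names below are hypothetical), the JSON can be post-processed by rounding the weights to a few digits, dumping without whitespace, and gzip-compressing:

```python
import gzip
import json

def shrink(value, digits=4):
    # Round floats to a few digits; recurse into lists and dicts.
    # Shorter decimal strings compress much better, at the cost of a
    # small precision loss in the weights.
    if isinstance(value, float):
        return round(value, digits)
    if isinstance(value, list):
        return [shrink(v, digits) for v in value]
    if isinstance(value, dict):
        return {k: shrink(v, digits) for k, v in value.items()}
    return value

def compress_model(model, digits=4):
    # Serialize without whitespace and gzip the result.
    text = json.dumps(shrink(model, digits), separators=(",", ":"))
    return gzip.compress(text.encode("utf-8"))

# Toy "layer" standing in for a converted caffemodel:
model = {"weights": [0.123456789, -0.987654321] * 1000}
blob = compress_model(model)
```

The app would then gunzip the blob at load time. Whether 4 digits is enough precision for a given network is something to validate against the original model's accuracy.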
Dear author, I have a question about Metal thread and threadgroup configuration. I changed threadsPerGroup from (1,1,1) to (32,1,1), and changed threadGroups from (1,1,1) to (number,1,1), where number is (vectorCount + 31) / 32. But I did not see any change or improvement in processing time. I wonder whether I did the setting correctly. Thanks.
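For reference, the group count described above is the standard ceiling division, so the arithmetic itself looks right; a quick sketch (assuming a 1-D dispatch):

```python
def thread_groups(vector_count, threads_per_group=32):
    # Ceiling division: enough groups of `threads_per_group` threads
    # to cover all `vector_count` elements; this is the same value as
    # (vector_count + 31) / 32 in integer arithmetic.
    return (vector_count + threads_per_group - 1) // threads_per_group

# e.g. 100 elements with 32 threads per group need 4 groups
```

If the measured time does not change, one possible explanation is that the run is dominated by fixed overhead (command encoding, buffer setup) or by a small problem size rather than by the kernel's thread configuration, so profiling with a larger workload may be more revealing.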
The DeepLearningKitForiOSDemoApp project file references bundle resources named "conv1.json" and "nin_cifar10_full.json", but fails to copy them to the application bundle because the files do not exist.
The files were probably just not added to the GitHub repository.
I'm super stoked to see DeepLearningKit. It is a great starting point for neural nets using Metal and Swift. I have had this idea since Apple first revealed Metal, and it is great to see that someone has started to work on it :)
For me personally, as I start experimenting with DeepLearningKit and thinking of integrating it into real apps, the first things that come to mind are:
It would be great if DeepLearningKit were a library (easy to drop into any project)
A library with unit tests, benchmarking and other nice metrics
CI support that supports collaboration and fast evolution/development
I don't know if someone is already doing something similar, but my plans are:
Turn DeepLearningKit into a library project
Add unit testing
Set up Travis for CI (build and testing)
Publish to CocoaPods and Carthage
Change the example code to use the library instead of the common files directly
The main purpose would be to show that the Q-learning loop, i.e. the data input and the convnet inside the Q-learning algorithm, is feasible to run on an iPhone. This could potentially be used to create better game AI on e.g. Apple TV and iPhone/iPad.
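For context, the loop in question can be sketched in a few lines. This is a generic tabular Q-learning step (not DeepLearningKit code; a lookup table stands in for the convnet that would approximate Q):

```python
import random

def q_learning_step(q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.9, epsilon=0.1):
    # Epsilon-greedy action selection: explore with probability epsilon,
    # otherwise pick the action with the highest current Q-value.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: q.get((state, a), 0.0))
    reward = reward_fn(state, action)
    next_state = next_state_fn(state, action)
    # Q-learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a').
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return next_state
```

In the on-device variant, the dictionary lookup would be replaced by a forward pass of the network, which is exactly the part Metal can accelerate.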
DeepLearningKit is missing basic conversion from e.g. UIImage to RGB (the example network supports the 32x32x3 CIFAR RGB image format, but there is no conversion from UIImage to it). Check out e.g. "Drawing Images From Pixel Data - In Swift" and "Image Processing in iOS Part 1: Raw Bitmap Modification" for inspiration.
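For reference, CIFAR-10 stores a 32x32 image channel-planar (all red bytes, then all green, then all blue), whereas a UIImage bitmap is typically interleaved RGBA. A sketch of the byte shuffle, in Python for illustration (a Swift version would read the raw bytes from the CGImage's data provider):

```python
def rgba_to_cifar(rgba, width=32, height=32):
    # Convert interleaved RGBA bytes (R, G, B, A, R, G, B, A, ...)
    # to CIFAR's planar layout: all R, then all G, then all B.
    # The alpha channel is dropped.
    n = width * height
    r = [rgba[4 * i] for i in range(n)]
    g = [rgba[4 * i + 1] for i in range(n)]
    b = [rgba[4 * i + 2] for i in range(n)]
    return r + g + b
```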
Just starting out, so I have not dived in head first too much yet. But it may be that the new Swift 3 breaks the whole thing. I converted it, but there are a lot of changes, and accepting them does not work.
I see in the code that MetalTensorDimensions(n: weight_shape[0], channels: weight_shape[1], width: weight_shape[2], height: weight_shape[3]).
Shouldn't width be the last dimension, since Caffe uses a row-major format? It works when the image is square, but not in the general case.
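To illustrate the point: a Caffe blob is row-major NCHW, so width is the last (fastest-varying) dimension and shape[2] is height, not width. A sketch of the flat offset (the helper name is hypothetical):

```python
def caffe_offset(n, c, h, w, shape):
    # Row-major NCHW: offset = ((n*C + c)*H + h)*W + w,
    # i.e. width varies fastest.
    N, C, H, W = shape
    return ((n * C + c) * H + h) * W + w
```

Reading shape[2] as width and shape[3] as height effectively swaps H and W; for a square kernel the strides happen to coincide, but for a non-square shape every offset past the first row lands on the wrong element.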
protobuf: calling protoc
Traceback (most recent call last):
  File "./caffemodel2json.py", line 65, in <module>
    subprocess.check_call(['protoc', '--proto_path', os.path.dirname(args.caffe_proto), '--python_out', args.codegenDir, args.caffe_proto])
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 535, in check_call
    retcode = call(*popenargs, **kwargs)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 522, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory