ibm / watson-waste-sorter

Create an iOS phone application that sorts waste into three categories (landfill, recycling, compost) using a Watson Visual Recognition custom classifier

Home Page: https://developer.ibm.com/patterns/recycle-with-watson/

License: Apache License 2.0

Python 22.05% Ruby 8.09% Swift 69.86%
watson-visual-recognition visual-recognition ibm-cloud cloud-foundry ios ios-swift ibmcode

watson-waste-sorter's Introduction


Create a custom Visual Recognition classifier for sorting waste

In this developer code pattern, we will create a mobile app and a Python server, built with Flask, that uses Watson Visual Recognition. The mobile app sends pictures of waste and garbage to the server app, which analyzes them with Watson Visual Recognition. The server application uses pictures of common trash to train a custom Watson Visual Recognition classifier that identifies categories of waste, e.g. recycling, compost, or landfill. Developers can leverage this pattern to create their own custom Visual Recognition classifiers for their use cases.

When the reader has completed this Code Pattern, they will understand how to:

  • Create a Python server with Flask that can utilize the Watson Visual Recognition service for classifying images.
  • Create a Visual Recognition custom classifier using the Web UI or command line.
  • Create a mobile application that can send pictures to a server app for classification using Visual Recognition.

(Architecture diagram)

Flow

  1. The user interacts with the mobile app and captures an image.
  2. The mobile app passes the image to the server application running in the cloud.
  3. The server sends the image to the Watson Visual Recognition service for analysis.
  4. Visual Recognition classifies the image and returns the result to the server, which sends the classification back to the mobile app.

Included components

  • Watson Visual Recognition: Visual Recognition understands the contents of images: it tags visual concepts, finds human faces, approximates age and gender, and finds similar images in a collection.

Featured Technologies

  • Mobile: Systems of engagement are increasingly using mobile technology as the platform for delivery.
  • Flask: A micro web development framework for Python.

Watch the Video

Prerequisites

Create an IBM Cloud account and install the Cloud Foundry CLI on your machine.

Steps

  1. Create your visual recognition service
  2. Deploy the server application
  3. Create the mobile application and connect it to the server
  4. Using the Waste Sorter mobile application

Deploy the Server Application to IBM Cloud

You can either go through Steps 1 and 2 below to create your application server, or

You can simply click the Deploy to IBM Cloud button and create the toolchain to provision, train, and run your visual recognition server. Then go to the IBM Cloud Dashboard to verify that your server is running, and take note of your server application's endpoint. Once you have done that, you can skip ahead to Step 3 and deploy your mobile application.

Deploy to IBM Cloud

1. Create your visual recognition service

First, we need to clone this repository

git clone https://github.com/IBM/watson-waste-sorter
cd watson-waste-sorter

Then, we need to log in with the Cloud Foundry CLI.

cf login -a https://api.ng.bluemix.net # Please use a different API endpoint if your IBM Cloud account is not in US-South

Next, provision a Lite tier Visual Recognition service and name it wws-visual-recognition. You can provision it from the IBM Cloud catalog or with the command below.

cf create-service watson_vision_combined lite wws-visual-recognition

2. Deploy the server application

Now go to the server directory and push your server application to Cloud Foundry:

cd server
cf push

Once the deployment succeeds, your backend server will create the custom model and, once the model finishes training, will be able to classify the different kinds of waste. Take note of your server application's endpoint, as you will need it in the next step. Now let's create the mobile app that uses this classifier.

3. Create the mobile application and connect it to the server

In order to test the full features of this application, you need Xcode 8.0 or above installed and an iOS device to deploy the application to.

Now open Xcode and select Open another project..., then select the mobile-app/WatsonWasteSorter.xcworkspace file and click Open.

Next, you need to modify WatsonWasteSorter/Info.plist with the endpoint of the API server you just deployed. Replace the SERVER_API_ENDPOINT value with your server endpoint followed by the path /api/sort.

(Screenshot: Info.plist)

Next, you will need to sign the application with your Apple account. Go to the mobile app's General section and, under Signing > Team, select your team or add an account. Now your mobile app is signed and you are ready to deploy your Waste Sorter app.

Note: If you have trouble signing your Mobile app, please refer to https://help.apple.com/xcode/mac/current/#/dev60b6fbbc7

Now, connect your iOS device to your machine and select your device in Xcode. Click the run icon, and your mobile app will be installed on your device.

4. Using the Waste Sorter mobile application

Congratulations! At this point you should have a mobile app that can classify waste using your camera. Simply point the camera at any waste item and tap the camera icon to take a picture. The application should then tell you which bin the waste should go in, like this:

(Screenshot)

Now you should have a better idea of how to sort your trash. Note that if you get a result that says unclassified, your image is either too blurry or the waste is too far away. In that case, move your camera closer and retake the picture.
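As a sketch of what the app does with the server's reply: the key names below ("result", "confident score") follow an example response reported in this repository's issues, so treat them as assumptions rather than a documented API.

```python
# Interpret the server's JSON reply on the client side. The key names
# ("result", "confident score") are taken from an example response in
# this repository's issues and are assumptions, not a documented schema.
def interpret_reply(reply):
    label = reply.get("result", "unclassified")
    if label == "unclassified":
        # Blurry or distant images come back unclassified; retake.
        return "Image unclear: move closer and retake the picture"
    score = float(reply.get("confident score", 0))
    return f"Put this in the {label} bin (confidence {score:.0%})"

print(interpret_reply({"result": "recycle", "confident score": "0.88"}))
print(interpret_reply({"result": "unclassified"}))
```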

If you want to classify another waste item, simply tap the center of the screen.

Troubleshooting

  • To clean up, simply delete your mobile app. Then you can delete your server application via the IBM Cloud Dashboard.


Learn more

  • Artificial Intelligence Code Patterns: Enjoyed this Code Pattern? Check out our other AI Code Patterns.
  • AI and Data Code Pattern Playlist: Bookmark our playlist with all of our Code Pattern videos
  • With Watson: Want to take your Watson app to the next level? Looking to utilize Watson Brand assets? Join the With Watson program to leverage exclusive brand, marketing, and tech resources to amplify and accelerate your Watson embedded commercial solution.

License

This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

Apache Software License (ASL) FAQ

watson-waste-sorter's People

Contributors

dolph, imgbot[bot], kant, ljbennett62, maggix, markstur, rhagarty, scottdangelo, stevemart, tomcli, tqtran7


watson-waste-sorter's Issues

Suggestion to optimize identification

Watson Visual Recognition has a built-in food classifier; we could call that before our custom one, and if it comes back with decent confidence, we can assume the item is food waste that belongs in compost.

image result not classified correctly

Hi
I followed Steps 1 and 2 to create my application server and deployed it to Cloud Foundry.
When I take images with my device, the classified results are colors instead of the waste categories.
Below is the result I get in the Xcode console.

2019-04-29 17:26:39.851877-0700 WatsonWasteSorter[512:25891] [MC] Reading from public effective user settings.
{
    "confident score" = "0.884";
    result = "emerald color";
    "status code" = 200;
}

Please let me know if there's something that I've done incorrectly.
Thanks!

API Endpoint

Hi!

Can someone please help to explain what other API endpoint can be used if you're not located in US-South? Thanks!


Deploy to IBM Cloud does not create the custom classifier

The documentation in the README indicates that you can press the Deploy to IBM Cloud button and then skip to step 3. But if one does this, there will not be a custom classifier created.
It would be pretty optimal (and slick) to add this to the server/run.py . You could:

  1. Check whether a classifier exists and set the id if one does.
     You've already done this in the sort() method, but set_classifier() only returns something if classifier['name'] == 'waste' and classifier['status'] == 'ready'.
     If that condition is not true, you never set the classifier_id in sort().

I would recommend that set_classifier() return None by default, and that a conditional in the sort() method check the return value from set_classifier().
If you get an ID, carry on.
If you get None, call a new routine create_classifier() that uploads the training .zip files, etc. (or shell out and call your custom_model.sh).

Also, you need a run command in manifest.yaml to start the server:

command: python run.py

I'd document this in the README and warn the user that it takes some time (usually 5-10 minutes) to train the classifier. Maybe also document how to check whether it is ready via the UI or CLI.
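The fix proposed above could look roughly like this. The function and field names mirror the issue text but are assumptions about the actual server/run.py:

```python
# Sketch of the proposed fix: set_classifier() returns None unless a
# ready classifier named 'waste' exists, and sort() creates one when it
# gets None. Names mirror the issue text; this is not the actual
# server/run.py code.
def set_classifier(classifiers):
    for classifier in classifiers:
        if classifier["name"] == "waste" and classifier["status"] == "ready":
            return classifier["classifier_id"]
    return None  # default: no usable classifier yet

def sort(image_bytes, classifiers, create_classifier):
    classifier_id = set_classifier(classifiers)
    if classifier_id is None:
        # Upload the training .zip files and start training; this
        # typically takes several minutes before the model is ready.
        classifier_id = create_classifier()
    return {"classifier_id": classifier_id, "size": len(image_bytes)}
```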

TODO after meeting on 1/12/2018

  1. Create a mobile front-end. Suggest using Cordova to create since it can produce content that will work on both Android and iOS. See https://github.com/IBM/watson-vehicle-damage-analyzer/tree/master/mobile as an example. @XiaoguangMo agreed to give the rest of the team a one hour education session on how to demonstrate a quick app built locally and run on an iPhone.
  2. Add a "deploy to bluemix" button -- See https://github.com/IBM/watson-vehicle-damage-analyzer/blob/master/server/lib/watson-visRec-setup.js as an example. This will automatically create a classifier based on content checked into the repo.
  3. More pictures! These should be checked into the repo.
