
Cloud Functions Dark Vision - Discover dark data in videos with IBM Watson and IBM Cloud Functions


Dark Vision is a technology demonstration leveraging Cloud Functions and Watson services. If you are looking for an official and supported IBM offering, head over to the Watson Video Enrichment product. This product uses Watson APIs and additional technology to enrich video assets.

Think about all the videos individuals and companies (Media and Entertainment) accumulate every year. How can you keep track of what's inside of them so you can quickly search and find what you're looking for? "Show me all the videos that have the Arc de Triomphe in them" or "Show me all the videos that talk about peaches".

What if we used artificial intelligence to process these videos and tell us which video has what we're looking for, without us having to watch all of them?

Dark Vision is an application that processes videos to discover what's inside of them. By analyzing individual frames and audio from videos with IBM Watson Visual Recognition and Natural Language Understanding, Dark Vision builds a summary with a set of tags and the famous people or landmarks detected in the video. Use this summary to enhance video search and categorization.

Watch this YouTube video to learn more about the app.


Overview and Architecture

Built on IBM Cloud, the application uses Cloud Functions, Cloudant, Watson Visual Recognition, Speech to Text, and Natural Language Understanding.

Extracting frames and audio from a video

The user uploads a video or image using the Dark Vision web application, which stores it in a Cloudant database (1). Once the video is uploaded, Cloud Functions detects the new video (2) by listening to Cloudant changes (trigger). Cloud Functions then triggers the video and audio extractor action (3). During its execution, the extractor produces frames (images) (4), captures the audio track (5), and stores them in Cloudant (6, 7). The frames are then processed with Watson Visual Recognition, and the audio with Watson Speech to Text and Natural Language Understanding. The results are stored in the same Cloudant database. They can be viewed using the Dark Vision web application or the iOS application.
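
To make the flow concrete, here is a minimal sketch of the kind of dispatch the change listener performs. It is illustrative only: the document fields and action names are assumptions, not the exact implementation.

// Sketch of a Cloudant change handler dispatching to actions.
// Field names (type, metadata, analysis, transcript) are hypothetical.
const openwhisk = require('openwhisk');

function main(doc) {
  const ow = openwhisk();
  if (doc.type === 'video' && !doc.metadata) {
    // new video: extract frames and audio
    return ow.actions.invoke({ name: 'vision/extractor', params: { doc } });
  } else if (doc.type === 'image' && !doc.analysis) {
    // new frame or standalone image: analyze it
    return ow.actions.invoke({ name: 'vision/analysis', params: { doc } });
  } else if (doc.type === 'audio' && !doc.transcript) {
    // new audio track: send it to Speech to Text
    return ow.actions.invoke({ name: 'vision/speechtotext', params: { doc } });
  }
  return { status: 'ignored' };
}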

Object Storage can complement Cloudant. When doing so, video and image metadata are stored in Cloudant and the media files are stored in Object Storage.

Architecture

digraph G {
  node [fontname="helvetica"]
  rankdir=LR
  /* stores a video */
  user -> storage [label="1"]
  /* cloudant change sent to openwhisk */
  storage -> openwhisk [label="2"]
  /* openwhisk triggers the extractor */
  openwhisk -> extractor [label="3"]
  /* extractor produces image frames and audio */
  extractor -> frames [label="4"]
  extractor -> audio [label="5"]
  /* frames and audio are stored */
  frames -> storage [label="6"]
  audio -> storage [label="7"]
  /* styling */
  frames [label="Image Frames"]
  audio [label="Audio Track"]
  storage [shape=circle style=filled color="#4E96DB" fontcolor=white label="Data Store"]
  openwhisk [shape=circle style=filled color="#24B643" fontcolor=white label="Cloud Functions"]
}

Processing frames and standalone images

Whenever a frame is created and uploaded (1), Cloudant emits a change event (2) and Cloud Functions triggers the analysis (3). The analysis calls Watson (4) and the result is persisted with the image (5).

Architecture

digraph G {
  node [fontname="helvetica"]
  /* stores an image */
  frame -> storage [label="1"]
  /* cloudant change sent to openwhisk */
  storage -> openwhisk [label="2"]
  /* openwhisk triggers the analysis */
  openwhisk -> analysis [label="3"]
  {rank=same; frame -> storage -> openwhisk -> analysis -> watson [style=invis] }
  /* analysis calls Watson */
  analysis -> watson [label="4"]
  /* results are stored */
  analysis -> storage [label="5"]
  /* styling */
  frame [label="Image Frame"]
  analysis [label="analysis"]
  storage [shape=circle style=filled color="#4E96DB" fontcolor=white label="Data Store"]
  openwhisk [shape=circle style=filled color="#24B643" fontcolor=white label="Cloud Functions"]
  watson [shape=circle style=filled color="#4E96DB" fontcolor=white label="Watson\nVisual\nRecognition"]
}

Processing audio

Whenever the audio track is extracted (1), Cloudant emits a change event (2) and Cloud Functions triggers the audio analysis (3).

Extract the audio transcript

Extracting the transcript from an audio track with the Speech to Text service may take more than 5 minutes, depending on the video. Because Cloud Functions actions have a 5-minute limit, waiting in the action for the audio processing to complete is not possible for longer videos. Fortunately, the Speech to Text service offers an asynchronous API. Instead of waiting for Speech to Text to process the audio, Dark Vision sends the audio file to Speech to Text (4), and Speech to Text notifies Dark Vision with the transcript when it is done processing the audio (5). The result is attached to the audio document (6).
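
For illustration, starting an asynchronous recognition job boils down to one HTTP call against the Speech to Text /v1/recognitions endpoint. This is a sketch, not the actual speechtotext.js code; the credentials, file name, and callback URL are placeholders.

const fs = require('fs');
const request = require('request');

function startRecognitionJob(credentials, callbackUrl) {
  // Stream the extracted audio to Speech to Text and return immediately;
  // the service will POST the transcript to callbackUrl when done.
  fs.createReadStream('audio.wav').pipe(request.post({
    url: 'https://stream.watsonplatform.net/speech-to-text/api/v1/recognitions',
    qs: {
      callback_url: callbackUrl,
      event: 'recognitions.completed_with_results'
    },
    headers: { 'Content-Type': 'audio/wav' },
    auth: { user: credentials.username, pass: credentials.password }
  }, (err, response, body) => {
    if (err) {
      console.log('failed to create recognition job', err);
    } else {
      console.log('job created', body);
    }
  }));
}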

Architecture

digraph G {
  node [fontname="helvetica"]
  audio -> storage [label="1"]
  storage -> openwhisk [label="2"]
  openwhisk -> speechtotext [label="3"]
  speechtotext -> watson [label="4 - Start Recognition"]
  watson -> speechtotext [label="5 - Receive transcript"]
  speechtotext -> storage [label="6 - Store transcript"]
  {rank=same; audio -> storage -> openwhisk -> speechtotext -> watson [style=invis] }
  /* styling */
  audio [label="Audio Track"]
  speechtotext [label="speechtotext"]
  storage [shape=circle style=filled color="#4E96DB" fontcolor=white label="Data Store"]
  openwhisk [shape=circle style=filled color="#24B643" fontcolor=white label="Cloud Functions"]
  watson [shape=circle style=filled color="#4E96DB" fontcolor=white label="Watson\nSpeech to Text"]
}

Analyze the transcript

Once the transcript is stored (1), the text analysis (3) is triggered (2) to detect concepts, entities and emotion (4). The result is attached to the audio (5).
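
A sketch of the Natural Language Understanding call, using the watson-developer-cloud Node.js SDK of that era (the credentials are placeholders, and error handling is simplified):

const NaturalLanguageUnderstandingV1 =
  require('watson-developer-cloud/natural-language-understanding/v1');

const nlu = new NaturalLanguageUnderstandingV1({
  username: '<nlu-username>',  // placeholder credentials
  password: '<nlu-password>',
  version_date: '2017-02-27'
});

// Ask for the three features Dark Vision stores: concepts, entities, emotion.
function analyzeTranscript(transcript) {
  nlu.analyze({
    text: transcript,
    features: { concepts: {}, entities: {}, emotion: {} }
  }, (err, analysis) => {
    if (err) {
      console.log('text analysis failed', err);
    } else {
      // attach "analysis" to the audio document in Cloudant
    }
  });
}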

Architecture

digraph G {
  node [fontname="helvetica"]
  transcript -> storage [label="1"]
  storage -> openwhisk [label="2"]
  openwhisk -> textanalysis [label="3"]
  textanalysis -> nlu [label="4"]
  textanalysis -> storage [label="5"]
  {rank=same; transcript -> storage -> openwhisk -> textanalysis -> nlu [style=invis] }
  /* styling */
  transcript [label="Transcript"]
  textanalysis [label="textanalysis"]
  storage [shape=circle style=filled color="#4E96DB" fontcolor=white label="Data Store"]
  openwhisk [shape=circle style=filled color="#24B643" fontcolor=white label="Cloud Functions"]
  nlu [shape=circle style=filled color="#4E96DB" fontcolor=white label="Natural\nLanguage\nUnderstanding"]
}

Prerequisites

  • IBM Cloud account. Sign up for IBM Cloud (formerly Bluemix), or use an existing account.
  • Docker Hub account. Sign up for Docker Hub, or use an existing account.
  • Node.js >= 6.9.1
  • Xcode 8.0, iOS 10, Swift 3 (for the iOS application)

Deploying Dark Vision automatically in IBM Cloud

Dark Vision comes with a default toolchain you can use to deploy the solution with a few clicks. If you want to deploy it manually, you can skip this section.

  1. Ensure your organization has enough quota for one web application using 256MB of memory and 4 services.

  2. Click Deploy to IBM Cloud to start the IBM Cloud DevOps wizard:


⚠️ Dark Vision can currently only be deployed in the US South region.

  3. Select the GitHub box.

    1. Decide whether you want to fork/clone the Dark Vision repository.
    2. If you decide to Clone, set a name for your GitHub repository.
  4. Select the Delivery Pipeline box.

    1. Enter an IBM Cloud API key.
    2. Select the resource group, region, organization, and space where you want to create services and deploy the web application.
    3. Set the name of the Dark Vision web application. Pick a unique name to avoid conflicts.
    4. Optionally set the admin username and password for the application. When set, the application prompts for this username and password when uploading videos or images and when resetting a video or an image. If the username and password are not defined, any visitor can upload videos and images for processing.
    5. If you already have a Watson Visual Recognition service instance you want to reuse, retrieve its API key from the credentials and set the value in the form. If you leave the field empty, the pipeline will create a new service instance automatically.
  5. Click Create.

  6. Select the Delivery Pipeline named darkvision.

    1. In the Environment Properties, you can change the service plans for the services to be created. COS_PLAN can be changed to Standard if you are already using the Lite plan in your account.
    2. When ready, press the Run Stage button in the DEPLOY stage to run the pipeline.
  7. Wait for the Deploy job to complete.

  8. Access the Dark Vision app when it's ready and start uploading videos and images!

iOS application to view the results (Optional)

The iOS application is a client to the API exposed by the web application, used to view the results of video analysis. It is optional.

To configure the iOS application, you need the URL of the web application deployed above. The web app exposes an API to list all videos and retrieve the results.

  1. Open ios/darkvision.xcworkspace with Xcode.

  2. Open the file darkvision/darkvision/model/API.swift.

  3. Set the value of the constant apiUrl to the application host previously deployed.

  4. Save the file.

Running the iOS application in the simulator

  1. Start the application from Xcode with iPad Air 2 as the target.

  2. Browse uploaded videos.

  3. Select a video.

Results are made of tags returned by Watson. The tags with the highest confidence score are shown. Tap a tag to change the main image to the frame where this tag was detected.

Code Structure

Cloud Functions - Deployment script

  • deploy.js: Helper script to install, uninstall, and update the Cloud Functions triggers, actions, and rules used by Dark Vision.

Cloud Functions - Change listener

  • changelistener.js: Processes Cloudant change events and calls the right actions. It controls the processing flow for videos and frames.

Cloud Functions - Frame extraction

The frame extractor runs as a Docker action created with the Cloud Functions Docker SDK:

  • It uses ffmpeg to extract frames and audio from the video.
  • It is written as a Node.js app to benefit from several Node.js helper packages (Cloudant, ffmpeg, imagemagick).
  • Dockerfile: Builds the extractor image. It pulls ffmpeg into the image together with Node.js, and runs npm install for both the server and client.
  • extract.js: The core of the frame extractor. It downloads the video stored in Cloudant, uses ffmpeg to extract frames and video metadata, and produces a thumbnail for the video. By default it produces around 15 images per video; this can be changed by modifying the implementation of getFps. The first 15 minutes of audio are also exported.
  • app.js: Adapted from the Cloud Functions Docker SDK to call the extract.js Node.js script.
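
For illustration, the ffmpeg calls at the heart of extract.js amount to something like the following. The exact flags are assumptions; extract.js computes the frame rate via getFps so a video yields roughly 15 frames.

const { exec } = require('child_process');

// One frame every 2 seconds (fps=0.5), written as numbered JPEG files.
exec('ffmpeg -i video.mp4 -vf fps=0.5 frames/frame-%04d.jpg', (err) => {
  if (err) console.log('frame extraction failed', err);
});

// First 15 minutes (900 seconds) of audio, video stream dropped (-vn).
exec('ffmpeg -i video.mp4 -t 900 -vn audio.wav', (err) => {
  if (err) console.log('audio extraction failed', err);
});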

Cloud Functions - Frame analysis

analysis.js holds the JavaScript code to perform the image analysis:

  1. It retrieves the image data from the Cloudant document. The data was attached by the frame extractor as an attachment named "image.jpg".
  2. It saves the image file locally.
  3. If needed, it resizes the image so that it matches the requirements of the Watson service.
  4. It calls Watson.
  5. It attaches the results of the analysis to the image document and persists it.

The action runs asynchronously.

The code is very similar to the one used in the Vision app.
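
Roughly, the Watson call in step 4 looks like this with the watson-developer-cloud SDK. This is a hedged sketch: the API key, version date, and file name are placeholders, and the real action also handles faces and error cases.

const fs = require('fs');
const VisualRecognitionV3 =
  require('watson-developer-cloud/visual-recognition/v3');

const visualRecognition = new VisualRecognitionV3({
  api_key: '<visual-recognition-key>',  // placeholder
  version_date: '2016-05-20'
});

// Classify the (possibly resized) frame to get tags and confidence scores.
visualRecognition.classify({
  images_file: fs.createReadStream('frame.jpg')
}, (err, classification) => {
  if (err) {
    console.log('classification failed', err);
  } else {
    // attach "classification" to the image document and persist it
  }
});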

Cloud Functions - Audio analysis

  • speechtotext.js: Uses Speech to Text to transcribe the audio. It acts as the callback server for the asynchronous API of the Speech to Text service. The speechtotext action is exposed as a public HTTP endpoint by the deploy.js script.
  • textanalysis.js: Calls Natural Language Understanding on the transcript.
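
Because speechtotext.js acts as the callback server, it must also answer the verification request Speech to Text sends when the callback URL is registered. A minimal sketch of that handshake as an OpenWhisk web action (parameter names follow the Speech to Text callback protocol; the result handling is simplified):

function main(params) {
  if (params.challenge_string) {
    // Callback URL verification: echo the challenge back as plain text.
    return {
      headers: { 'Content-Type': 'text/plain' },
      body: params.challenge_string
    };
  }
  // Otherwise this is a job notification carrying recognition results
  // to attach to the audio document in Cloudant (omitted here).
  return { body: 'OK' };
}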

Web app

The web application allows users to upload videos and images. It shows the video and image catalog and, for each video, the extracted frames.

  • app.js: The web app backend. It handles the upload of videos and images, and exposes an API to retrieve all videos and their frames and to compute the summary.
  • Services: Services used by controllers.
  • Home page: Controller and view for the home page.
  • Video page: Controller and view for the video detail page.

Shared code between Cloud Functions actions and web app

These files are used by the web application and the Cloud Functions actions. They are automatically injected into the Cloud Functions actions by the deploy.js script and during the build of the Docker image. These scripts depend on Cloudant, async, and pkgcloud, which are provided by default in Cloud Functions Node.js actions.

  • cloudantstorage.js: Implements an API on top of Cloudant to create/read/update/delete video and image metadata and to upload files.
  • cloudobjectstorage.js: Implements the file upload operations on top of Cloud Object Storage. Used by cloudantstorage.js when Cloud Object Storage is configured.
  • cloudant-designs.json: Design documents used by the API to expose videos and images. They are automatically loaded into the database when the web app starts for the first time.
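
As an example of what cloudantstorage.js wraps, reading a frame attachment with the cloudant Node.js package looks roughly like this (the URL, database name, and document id are placeholders):

const Cloudant = require('cloudant');

const cloudant = Cloudant({ url: '<cloudant-service-url>' }); // placeholder
const db = cloudant.db.use('<database-name>');                // placeholder

// Read the attachment written by the extractor for a given image document.
db.attachment.get('<image-doc-id>', 'image.jpg', (err, buffer) => {
  if (err) {
    console.log('could not read attachment', err);
  } else {
    // "buffer" holds the JPEG bytes, ready to be analyzed
  }
});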

iOS

The iOS app is an optional part of the Dark Vision sample app. It uses the API exposed by the web application to display the videos in the catalog and their associated tags.

  • API.swift: Calls the web app API. Update the constant apiUrl to map to the location of your web app.

Contribute

Please create a pull request with your desired changes.

Troubleshooting

Dark Vision correctly processes video frames but does not process the audio track

This has been reported several times when using the toolchain. It is tracked as issue 51. Make sure to look at the toolchain DEPLOY log to confirm the problem. Locate the line Registering Speech to Text callback... to identify the error.

Cloud Functions

Polling activations is a good start to debug the Cloud Functions action execution. Run

ibmcloud fn activation poll

and upload a video for analysis.

Web application

Use

ibmcloud cf logs <appname>

to look at the live logs for the web application.

License

See License.txt for license information.


Issues

Use separate Cloudant dbs for videos and images

By doing so, we could have two triggers: one listening for video changes, one for image changes. This could remove the changelistener and call the actions directly. The actions would decide what to do with the event.

[Visual Bug] Header Icon Sizing & Spacing

Problem


  • The icons are slightly larger than needed, 32px x 32px.
  • Spacing between icons is too large, 40px.

Solution


  • Icons should be a max of 28px by 28px.
  • Icons should have a left margin of 20px.
  • The Cloud Upload icon should be 22px x 16px with a left margin of 8px. This is the only icon that should have different dimensions.

Updated visual design for Dark Vision web app

How can the Dark Vision web app be improved to show the new insights coming from the audio stream?

Capabilities of Dark Vision V1 web user interface:

  • Upload video
  • Upload image
  • View list of uploaded videos
  • View list of uploaded images
  • View results for a video
  • View individual images extracted from the video
  • View summary results
  • View individual results for each image
  • Relaunch an analysis of the video (extract + image process)
  • Restart the analysis of all images (image process only)
  • View results for an image
  • Link to GitHub source code
  • Link to Youtube video

Under load, some frames are not processed

When the video frames are stored in Cloudant, analysis.js retrieves the image file without doing any retry.

If many frames are being analyzed, Dark Vision may hit the Cloudant rate limit and fail to retrieve the image. This usually results in a "not a jpeg file" error when analyzing the image.
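
A sketch of a possible fix, wrapping the read in async.retry with exponential backoff (the async package is already a dependency of the actions; the numbers are illustrative):

const async = require('async');

// "db" is the Cloudant database handle used by analysis.js.
function getImageWithRetry(db, imageDocId, done) {
  async.retry(
    // up to 5 attempts, backing off 50ms, 100ms, 200ms, ...
    { times: 5, interval: (retryCount) => 50 * Math.pow(2, retryCount) },
    (callback) => db.attachment.get(imageDocId, 'image.jpg', callback),
    done
  );
}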

Audio Processing stuck

After uploading a 30-second MP4 clip to my deployed Dark Vision app, it's getting stuck at 90% every time it tries processing audio. It's been stuck for 1hr+.

Any known reasons?


[Visual Bug] Dark Vision Logotype

Problem


The space between the words OpenWhisk and Dark Vision is too large.

Solution


As opposed to the current 10px spacing between words, there should just be a normal keyboard space.

[Visual Bug] Tag Filtering

Problem

The styling and behavior of the Filter frames by tag is slightly wonky.


Solution

Let's start by making the copy Filter frames by tag font-style: italic; and color: #B8C1C1.

Let's also extend the red timeline to the bottom of the screen when filtering.


If this ends up looking weird or clumsy we can work on a different solution.

Dark Vision fails to deploy automatically when the Bluemix space name contains spaces

The issue seems to be in Speech to Text. It fails if the callback URL has a space in its name:

{"code":400,"code_description":"Bad Request","error":"unable to register callback url 'https://openwhisk.ng.bluemix.net/api/v1/experimental/web/[email protected]_Game of Bluemix/vision/speechtotext.http', callback server responded with status code 505"}

Another related issue fixed by PR #65 was with Cloud Functions:

  ___                __        ___     _     _    
 / _ \ _ __   ___ _ _\ \      / / |__ (_)___| | __
| | | | '_ \ / _ \ '_ \ \ /\ / /| '_ \| / __| |/ /
| |_| | |_) |  __/ | | \ V  V / | | | | \__ \   < 
 \___/| .__/ \___|_| |_|\_/\_/  |_| |_|_|___/_|\_\
      |_|                                         
Retrieving OpenWhisk authorization key...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1757    0     0  100  1757      0   8514 --:--:-- --:--:-- --:--:--  8487
100  1757    0     0  100  1757      0    869  0:00:02  0:00:02 --:--:--   870
100  1757    0     0  100  1757      0    581  0:00:03  0:00:03 --:--:--   581
100  4067    0  2310  100  1757    737    561  0:00:03  0:00:03 --:--:--   738
error: syntax error, unexpected $end, expecting QQSTRING_TEXT or QQSTRING_INTERP_START or QQSTRING_END
.namespaces[] | select(.name == "[email protected]_Game
                                 ^^^^^^^^^^^^^^^^^^^^^
1 compile error
error: syntax error, unexpected $end, expecting QQSTRING_TEXT or QQSTRING_INTERP_START or QQSTRING_END
.namespaces[] | select(.name == "[email protected]_Game
                                 ^^^^^^^^^^^^^^^^^^^^^
1 compile error
Speech to Text OpenWhisk action is accessible at https://openwhisk.ng.bluemix.net/api/v1/experimental/web/[email protected]_Game of Bluemix/vision/speechtotext.http
 _   _      _      

Cache image files

Since by default we use the rate-limited plan of Cloudant, we should reduce the load on Cloudant when browsing the web app.

Caching comes to mind.

We could cache images loaded from Cloudant by writing them to the local disk (we don't mind if the disk gets lost on a restart) and having them served directly by Node. Once uploaded, images do not change; only the video thumbnail may change if the analysis is retriggered.
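
A sketch of what that could look like in the Express backend (the cache path, route, and database access are illustrative, not the actual web app code):

const fs = require('fs');
const express = require('express');

const app = express();
const cacheDir = '/tmp/image-cache';

// "db" is the Cloudant database handle used by the web app backend.
app.get('/images/:id.jpg', (req, res) => {
  const cached = `${cacheDir}/${req.params.id}.jpg`;
  if (fs.existsSync(cached)) {
    // serve straight from local disk, no Cloudant read
    return res.sendFile(cached);
  }
  db.attachment.get(req.params.id, 'image.jpg', (err, buffer) => {
    if (err) return res.status(404).send();
    // write-through cache: persist locally, then serve
    fs.writeFile(cached, buffer, () => res.type('jpg').send(buffer));
  });
});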

Allow configuration of the number of extracted frames

The number of extracted frames is currently set in extract.js with an option to change it by rebuilding the Docker image.

Instead this number could be a property of the video and configured from the upload dialog.

Use Cloudant Lite plan

Need to make sure the Dark Vision code works with the Lite plan and correctly retries if it hits the rate limits.

After first deploy, video frames are processed but not the audio

Seems to come from an issue registering the Speech to Text callback in the toolchain.

The call to register fails with an error 406. It could be due to a concurrency issue where the speechtotext web action is not yet exposed by OpenWhisk when we try to call it during registration.

A re-run of the pipeline usually solves the issue.

{"code":400,"code_description":"Bad Request","error":"unable to register callback url 'https://openwhisk.ng.bluemix.net/api/v1/experimental/web/user_space/vision/speechtotext.http', callback server responded with status code 406"}

[Visual Bug] Hover-States

Problem

Currently the Dark Vision app lacks hover-states for most link items.

Solution

Here's a first-pass at some initial hover-states. We may need to tweak some of them as we go.

1. Header Icons
2. Processing Details
3. Filter Bar
4. Breadcrumb
5. Scrubber
6. Video Overview

It would also be nice to play with some transition easing in areas we haven't yet. Let's try adding transition: all 0.25s ease; to 1, 2, 3 & 4.

[Visual Bug] Image Title Border Bottom

Problem

Image titles are missing the gray border-bottom that separates them.

Solution


  • Try border-bottom: 1px solid #EAEAEA; and see how that looks.
  • There should be 20px padding between the border-bottom and the image.

Put videos and images in Object Storage

Instead of storing the attachments into Cloudant, put them in Object Storage and serve images/videos directly from Object Storage.

  • abstract storage so that it can switch between putting media in Cloudant or in Object Storage
  • implement media storage in Object Storage
  • write instructions on how to use Object Storage with Dark Vision
  • use Object Storage if configured, else default to Cloudant

Use Cloud Object Storage instead of Object Storage Swift

  • Replace Object Storage Swift with the more recent Cloud Object Storage S3
  • Offer the option in the toolchain to create a Cloud Object Storage instance or use an existing service instance (whether the instance is an actual Cloud Object Storage instance or a user-provided service with the right credentials set).

Extract transcript from video with Speech to Text

The audio from the video is another source of dark data to process. The first step is to get the text.

  • extract audio
  • store audio
  • add Speech To Text service
  • new action calling Speech To Text and persisting output to Cloudant when audio is added as attachment
  • show Speech To Text in the web UI

[Visual Bug] Improved Detail Page Header

Problem

We should keep our Video Detail Header consistent with how we display data in our new Processing Details menu.

Solution

[Screenshots: proposed header, redlines, and frame detail]

A large image file may fail to process

Take this image: http://www.socialdemokraterna.se/upload/Stefan_Lofven/Bilder/Stefan%20Lofven.jpg

It is a 4MB file, 5616 x 3744. The ImageMagick conversion fails; it seems to require more memory than is available to the action.

Activation: analysis (ccc)
[
    "2017-04-03T08:35:50.39339173Z  stdout: [ bbb ] Processing image.jpg from document",
    "2017-04-03T08:35:51.875478614Z stdout: [ bbb ] KO ( 1.479 s) { Error: Command failed:",
    "2017-04-03T08:35:51.875511335Z stdout: at ChildProcess.onExit (/nodejsAction/node_modules/gm/lib/command.js:301:17)",
    "2017-04-03T08:35:51.875520462Z stdout: at emitTwo (events.js:106:13)",
    "2017-04-03T08:35:51.875526287Z stdout: at ChildProcess.emit (events.js:191:7)",
    "2017-04-03T08:35:51.875532128Z stdout: at maybeClose (internal/child_process.js:877:16)",
    "2017-04-03T08:35:51.87553974Z  stdout: at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5) code: null, signal: 'SIGKILL' }"
]

Better logging and diagnostics

There should be better logging to track what is going on during deployment and execution.

For deployment: print more about what is happening, to help when it fails.

For operations: print more in the logs. For example, my demo deployment currently processed two videos but stopped after that with no error I could find.

Creating trigger fails with cryptic "error": null "status": "application error"

For some reason, creating the trigger fails with a cryptic error:

macs-mbp:processing aslom$ wsk trigger create vision-cloudant-trigger --feed vision-cloudant/changes    -p dbname cloudant-a1 -p includeDoc true
error: failed to create trigger feed vision-cloudant-trigger
{
    "activationId": "75e2430d583741768406f1f558653cbc",
    "annotations": [],
    "end": 1468607989929,
    "logs": [],
    "name": "changes",
    "namespace": "[email protected]",
    "publish": false,
    "response": {
        "result": {
            "error": null
        },
        "status": "application error",
        "success": false
    },
    "start": 1468607986701,
    "subject": "[email protected]",
    "version": "0.0.52"
}

No matter what parameters are passed (or not):

macs-mbp:processing aslom$ wsk trigger create vision-cloudant-trigger --feed vision-cloudant/changes
error: failed to create trigger feed vision-cloudant-trigger
{
    "activationId": "a5719acbaef64a83802e92b7e6d0ef33",
    "annotations": [],
    "end": 1468605016622,
    "logs": [],
    "name": "changes",
    "namespace": "[email protected]",
    "publish": false,
    "response": {
        "result": {
            "error": null
        },
        "status": "application error",
        "success": false
    },
    "start": 1468605014490,
    "subject": "[email protected]",
    "version": "0.0.52"
}

And here is the full script output:

macs-mbp:processing aslom$ ./deploy-darkvision.sh --install
Current namespace is [email protected]_dev.
Creating vision package
ok: created package vision
Adding service credentials as parameter
ok: updated package vision
Binding cloudant
ok: created binding vision-cloudant
Creating trigger
error: failed to create trigger feed vision-cloudant-trigger
{
    "activationId": "4447d0dc800041b5bb5b50d6f83f5131",
    "annotations": [],
    "end": 1468604389260,
    "logs": [],
    "name": "changes",
    "namespace": "[email protected]",
    "publish": false,
    "response": {
        "result": {
            "error": null
        },
        "status": "application error",
        "success": false
    },
    "start": 1468604388527,
    "subject": "[email protected]",
    "version": "0.0.52"
}
Creating actions
ok: created action extractor
ok: created action analysis
Creating change listener
ok: created action vision-cloudant-changelistener
Enabling change listener
error: [email protected]_dev/vision-cloudant-trigger does not exist (code 87972)
macs-mbp:processing aslom$

Support youtube video

I see that we have to upload an MP4 file. But it would be really nice if we could provide a YouTube link and have OpenWhisk do the job from YouTube.

Perform the video summary on the client with configurable thresholds

The API "/api/videos/:id" in web/app.js performs the filtering of keywords/tags/faces to build the video summary. It ends up providing a static view for a video as the thresholds and minimum count are configured in app.js.

Instead, we could return the full data to the client and make the thresholds configurable in the client. This way one could play with knob-like controls to show more or less data.
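
A sketch of the client-side summary with adjustable thresholds (field names like score and label are assumptions about the returned data):

// Keep only tags above a confidence threshold that occur in enough frames.
function summarize(tags, minScore, minCount) {
  const counts = {};
  tags
    .filter((tag) => tag.score >= minScore)
    .forEach((tag) => {
      counts[tag.label] = (counts[tag.label] || 0) + 1;
    });
  return Object.keys(counts).filter((label) => counts[label] >= minCount);
}

// Example: recompute as the user drags the threshold sliders.
// summarize(allTags, 0.5, 3);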

Provide a toolchain to deploy webapp and actions

To ease the deployment of the app, add a toolchain. The toolchain will create the services, the webapp and deploy the OpenWhisk actions/triggers.

  • build extractor Docker image with travis and push it to Docker Hub (only for master branch)
  • create services
  • create service keys
  • set env variables from service keys
  • initialize database
  • deploy web app
  • retrieve openwhisk key for space using accessToken from ~/.cf/config.json
  • deploy openwhisk artifacts
  • toolchain: prompt for GitHub repo
  • toolchain: prompt for app name
  • toolchain: prompt for login and password
  • toolchain: allow customization of Cloudant db, Docker image name, Openwhisk host

Cache API calls

Counterpart to #37, but to cache the calls made to Cloudant (retrieving videos, images, and summaries) as they can get quite expensive. However, how do we invalidate the cache when there is a new upload or when videos and images are being processed?
