
pipeline-server's Introduction

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework

Overview

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is an open-source streaming media analytics framework, based on GStreamer* multimedia framework, for creating complex media analytics pipelines for the Cloud or at the Edge.

Media analytics is the analysis of audio & video streams to detect, classify, track, identify and count objects, events and people. The analyzed results can be used to take actions, coordinate events, identify patterns and gain insights across multiple domains: retail store and events facilities analytics, warehouse and parking management, industrial inspection, safety and regulatory compliance, security monitoring, and many other.

Backend libraries

Intel® DL Streamer Pipeline Framework is optimized for performance and functional interoperability between GStreamer* plugins built on various backend libraries.

This page contains a list of elements provided in this repository.

Installation

Please refer to the Install Guide for installation options:

  1. Install APT packages
  2. Run Docker image
  3. Compile from source code
  4. Build Docker image from source code

Samples

Samples are available for C/C++ and Python, and as gst-launch command lines and scripts.

NN models

Intel® DL Streamer supports NN models in OpenVINO™ IR and ONNX* formats:

  • Refer to the OpenVINO™ Model Optimizer documentation for how to convert a model into OpenVINO™ IR format
  • Refer to your training framework's documentation for how to export a model into ONNX* format

Alternatively, you can start from over 70 pre-trained models in the OpenVINO™ Open Model Zoo and the corresponding model-proc files (pre- and post-processing specifications) in the /opt/intel/dlstreamer/samples/model_proc folder. These models cover object detection, object classification, human pose detection, sound classification, semantic segmentation, and other use cases on SSD, MobileNet, YOLO, Tiny YOLO, EfficientDet, ResNet, Faster R-CNN and other backbones.
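As a quick illustration of how a converted IR model and its model-proc file are typically referenced from the Python GStreamer bindings, here is a minimal sketch (the video URI and model paths are placeholders, not files shipped with the framework):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

MODEL = "/path/to/model/FP32/model.xml"         # placeholder IR model
MODEL_PROC = "/path/to/model_proc/model.json"   # placeholder model-proc file

# Build a simple detection pipeline and run it to completion.
pipeline = Gst.parse_launch(
    "urisourcebin uri=file:///path/to/video.mp4 ! decodebin ! videoconvert "
    f"! gvadetect model={MODEL} model-proc={MODEL_PROC} device=CPU "
    "! fakesink sync=false"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)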

Reporting Bugs and Feature Requests

Report bugs and feature requests on the issues page.

Other Useful Links


* Other names and brands may be claimed as the property of others.

pipeline-server's People

Contributors

ajcasagrande, akwrobel, avenkats, cgdougla, darrindu, fkhoshne, leopck, liz-lawrens, pk1d3v, slokesha, tobiasmo1, tthakkal, vidyasiv


pipeline-server's Issues

/delete command

I have written the following:

import json
from urllib.parse import urljoin

import requests

def delete_pipeline(instance_id, pipeline, version, video_analytics_serving, camera):
    """Delete the requested pipeline instance."""
    try:
        status_response = requests.delete(
            urljoin(video_analytics_serving, "/".join([pipeline, str(version), str(instance_id)])),
            timeout=TIMEOUT)  # TIMEOUT and output() are defined elsewhere in my code
        if status_response.status_code == 200:
            return json.loads(status_response.text)
    except requests.exceptions.RequestException as request_error:
        output(camera, request_error)
    return None

but it raises this exception:

2020-09-05 19:58:41,040 Exception on /pipelines/persons_detector_no_face/2/2 [DELETE]
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.6/dist-packages/flask_cors/extension.py", line 161, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.6/dist-packages/connexion/decorators/decorator.py", line 49, in wrapper
    return self.api.get_response(response, self.mimetype, request)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/flask_api.py", line 136, in get_response
    return cls._get_response(response, mimetype=mimetype, extra_context={"url": flask.request.url})
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/abstract.py", line 277, in _get_response
    framework_response = cls._response_from_handler(response, mimetype, extra_context)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/abstract.py", line 331, in _response_from_handler
    return cls._build_response(mimetype=mimetype, data=response, extra_context=extra_context)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/flask_api.py", line 173, in _build_response
    data, status_code, serialized_mimetype = cls._prepare_body_and_status_code(data=data, mimetype=mimetype, status_code=status_code, extra_context=extra_context)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/abstract.py", line 403, in _prepare_body_and_status_code
    body, mimetype = cls._serialize_data(data, mimetype)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/flask_api.py", line 190, in _serialize_data
    body = cls.jsonifier.dumps(data)
  File "/usr/local/lib/python3.6/dist-packages/connexion/jsonifier.py", line 44, in dumps
    return self.json.dumps(data, **kwargs) + '\n'
  File "/usr/local/lib/python3.6/dist-packages/flask/json/__init__.py", line 211, in dumps
    rv = _json.dumps(obj, **kwargs)
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.6/json/encoder.py", line 201, in encode
    chunks = list(chunks)
  File "/usr/lib/python3.6/json/encoder.py", line 430, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
    yield from chunks
  File "/usr/lib/python3.6/json/encoder.py", line 437, in _iterencode
    o = _default(o)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apps/flask_app.py", line 138, in default
    return json.JSONEncoder.default(self, o)
  File "/usr/local/lib/python3.6/dist-packages/flask/json/__init__.py", line 100, in default
    return _json.JSONEncoder.default(self, o)
  File "/usr/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'State' is not JSON serializable
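The failure is reproducible outside the service: Python's json module cannot serialize Enum members by default. A minimal standalone sketch of the error and one possible workaround (converting the enum to its name before serializing):

import json
from enum import Enum

class State(Enum):
    RUNNING = 2

status = {"id": 2, "state": State.RUNNING}

# json.dumps(status) raises:
#   TypeError: Object of type 'State' is not JSON serializable

# One possible workaround: convert enum members to their names before serializing.
serializable = {key: (value.name if isinstance(value, Enum) else value)
                for key, value in status.items()}
print(json.dumps(serializable))  # {"id": 2, "state": "RUNNING"}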

Docker build fails if directory name contains spaces

If there is a space anywhere in the parent directory path, the build will fail.

Example: assume the code has been cloned into the directory /home/user/VA Serving; the build will fail as follows:

$ docker/build.sh
<snip>
Successfully tagged video-analytics-serving-gstreamer-base:latest
cp: target 'Serving/video-analytics-serving/docker/Dockerfile.env' is not a directory

The workaround is to remove spaces from the directory name.

docker/build.sh fails if UID is not 1000

On a system where the UID is not 1000 (e.g. a second user was created and that account was used to run the build), the build will fail.

Run docker/build.sh and you will get the following error:

ERROR: Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/.local'
Check the permissions.

To work around the problem, modify tools/model_downloader/model_downloader.sh so that the user is root:

$SOURCE_DIR/docker/run.sh --user root ...

Run script fails to detect GPU if UID > 1001

The container has two users with permission to access the GPU: openvino (UID 1000) and vaserving (UID 1001). The run script takes the UID from the host to avoid volume-mount permission issues, so if this UID is greater than 1001, the container cannot access the GPU.

Under this condition, if a request sets device to "GPU", the server will show an error as follows:

$ docker/run.sh
Found /dev/dri - enabling for GPU
<snip>
{"levelname": "ERROR", "asctime": "2021-02-17 02:44:47,904", "message": "Error on Pipeline 1: gst-library-error-quark: base_inference plugin intitialization failed (3): /root/gst-video-analytics/gst/inference_elements/base/inference_singleton.cpp(137): acquire_inference_instance (): /GstPipeline:pipeline7/GstGvaDetect:detection:\nFailed to construct OpenVINOImageInference\n\tFailed to create plugin /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libclDNNPlugin.so for device GPU\nPlease, check your environment\n[CLDNN ERROR]. clGetPlatformIDs error -1001\n\n", "module": "gstreamer_pipeline"}

The workaround is to ensure the user has UID <= 1001.

Duplicate MQTT messages with same timestamp

FrameTimestampIssue.zip
When the attached Python code was run, we experienced 3 pairs of messages with the same timestamps. You can see these in mqtt_logs.text at lines 13/14, 21/22 and 25/26.
I'm wondering what is happening here; my understanding is that each MQTT message should come with a unique, incrementing timestamp. Is this a case where both messages were generated so close to each other that, at the available time resolution, they fall into the same tick?
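For anyone trying to reproduce the observation, here is a rough sketch that scans a JSON-lines log for consecutive repeated timestamps (the "timestamp" field name is an assumption based on the usual DL Streamer metadata schema):

import json

# Spot repeated timestamps in a JSON-lines MQTT log
# (assumes each non-empty line of mqtt_logs.text is a JSON object with a "timestamp" field).
last_ts = None
with open("mqtt_logs.text") as log:
    for lineno, line in enumerate(log, start=1):
        line = line.strip()
        if not line:
            continue
        ts = json.loads(line).get("timestamp")
        if ts is not None and ts == last_ts:
            print(f"line {lineno}: duplicate timestamp {ts}")
        last_ts = ts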

Feature Request: API Client Generation with openapi-generator

Hello,

I was trying to get an API Client for golang generated from the OpenAPI specification file video-analytics-serving.yaml. You can run the tool with Docker:

docker run --rm -v ${PWD}:/local openapitools/openapi-generator-cli \
  generate -i https://raw.githubusercontent.com/intel/video-analytics-serving/v0.3.0-alpha/vaserving/rest_api/video-analytics-serving.yaml \
  -g go -o /local/out/go

I already ran into a problem with the oneOf tag, as described in OpenAPITools/openapi-generator#15 (comment).

Anyway, since I just want to run the example with UriSource and FileDestination, I manually updated the generated API client. It worked; I could get the models at least:

	conf := openapi.NewConfiguration()
	conf.BasePath = "http://localhost:8080"
	client := openapi.NewAPIClient(conf)

	models, _, err := client.DefaultApi.ModelsGet(nil)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(models)

But when I try to get the pipelines, it fails to parse the response. I think the reason behind that is the Pipeline_parameters definition in the video-analytics-serving.yaml file.

The definition is more or less empty:

    Pipeline_parameters:
      example:
        default: {}
      properties:
        default: {}

So, when a response containing the following fields arrives at my generated client:

{
        "description": "Object Detection Pipeline",
        "name": "object_detection",
        "parameters": {
            "properties": {
                "cpu-throughput-streams": {
                    "element": "detection",
                    "maximum": 4294967295,
                    "minimum": 0,
                    "type": "integer"
                },
                "inference-interval": {
                    "default": 1,
                    "element": "detection",
                    "maximum": 4294967295,
                    "minimum": 1,
                    "type": "integer"
                },
                "n-threads": {
                    "default": 1,
                    "element": "videoconvert",
                    "type": "integer"
                },
                "nireq": {
                    "default": 2,
                    "element": "detection",
                    "maximum": 64,
                    "minimum": 1,
                    "type": "integer"
                }
            },
            "type": "object"
        },
        "type": "GStreamer",
        "version": "1"
    }

my API client crashes, saying:

2020/05/28 16:32:53 json: cannot unmarshal string into Go struct field Pipeline.parameters of type openapi.PipelineParameters

So this is not a bug, but a request to complete the Pipeline_parameters definition so that automatically generated API clients work out of the box :) (for Go, at least).

Any opinion?

Memory leaks

Hello!

Have you examined memory leaks when using Python modules?
I have tried a script with a different number of Python modules in different combinations; every time I get memory leaks, and after some time the script stops running without stopping the pipeline itself. I attach a picture with the statistics; every time the memory goes down is a restart.
I also used gc.collect() and multiple del XXXX statements in the Python code to avoid it, but with no success.
In debug mode for gc.collect I see many frame, tuple, query, etc. objects.
I have tried many combinations of modules, both my own and standard ones.
Thank you
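As one way to narrow down where allocations accumulate inside a gvapython module, Python's built-in tracemalloc can be used. A minimal sketch follows; the process_frame name and return convention follow the usual gvapython callback shape, and the 500-frame interval is arbitrary:

import tracemalloc

tracemalloc.start()
_baseline = tracemalloc.take_snapshot()
_frames = 0

def process_frame(frame):
    """Per-frame gvapython callback: print the top allocation growth every 500 frames."""
    global _baseline, _frames
    _frames += 1
    if _frames % 500 == 0:
        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.compare_to(_baseline, "lineno")[:10]:
            print(stat)
        _baseline = snapshot
    return True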

Unable to pass data from one pipeline to another

Hello,
I am planning to separate a VAS pipeline into two or more parts with different server ports. I would like each pipeline to handle a different processing stage instead of having one single process. I used the VAS image to create two separate Docker containers with their own custom pipelines. To pass the data from one pipeline to another, I decided to use the GStreamer elements tcpclientsink for sending data and tcpserversrc for receiving data. These are the templates used for each pipeline:

VAS-Server A

{
  "type": "GStreamer",
  "template": ["urisourcebin name=source ! decodebin  ! queue ! tcpclientsink port=80802"],
  "description": "Pipeline A",
  "parameters": {
    "type": "object",
    "properties": {}
  }
}

VAS-Server B:

{
  "type": "GStreamer",
  "template": [
    "tcpserversrc port=80801 ",
    " ! gvadetect model={models[object_detection][1][network]} name=detection",
    " ! gvametaconvert name=metaconvert ! queue ! gvametapublish name=destination",
    " ! gvawatermark ! videoconvert ! appsink"
  ],
  "description": "Pipeline B",
  "parameters": {
    "type": "object",
    "properties": {...}
  }
}

I tried sending a POST request to Server A to see if the pipeline would work and transfer data over to Server B, but I am met with this error:

{"levelname": "INFO", "asctime": "2021-03-12 08:06:19,926", "message": "Setting Pipeline 1 State to RUNNING", "module": "gstreamer_pipeline"}
{"levelname": "ERROR", "asctime": "2021-03-12 08:06:19,928", "message": "Error on Pipeline 1: gst-resource-error-quark: Error while sending data to \"localhost:80802\". (10): ../gst/tcp/gsttcpclientsink.c(218): gst_tcp_client_sink_render (): /GstPipeline:pipeline2/GstTCPClientSink:tcpclientsink1:\nOnly 234192 of 345600 bytes written: Error sending data: Connection reset by peer", "module": "gstreamer_pipeline"}

How can I resolve this? Thanks.
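For reference, a request to start Pipeline A through the usual VA Serving REST API might look like the sketch below (the port, pipeline name/version and source URI are illustrative assumptions):

import json
import requests

# Assumptions: Server A exposes the VA Serving REST API on port 8080 and the first
# template above is registered as pipeline_a, version 1.
request_body = {
    "source": {
        "uri": "file:///home/video-analytics-serving/samples/bottle_detection.mp4",
        "type": "uri"
    }
}
response = requests.post(
    "http://localhost:8080/pipelines/pipeline_a/1",
    data=json.dumps(request_body),
    headers={"Content-Type": "application/json"},
    timeout=30,
)
print(response.status_code, response.text)  # instance id on success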

Prevent pipeline instances from resetting

I noticed that every time you restart the API server, the pipeline instances from the last session are all lost and reset to zero. Is there a way to create some sort of storage for saving pipeline instances, so you can access them at any time without losing them every time you restart the server?
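Until such a feature exists, one possible stop-gap is to periodically snapshot instance statuses through the REST status endpoint and write them to disk. A rough sketch, assuming the default port and a known list of instances to watch:

import json
import time
import requests

SERVER = "http://localhost:8080"            # assumed default port
WATCHED = [("object_detection", 1, 1)]      # (pipeline, version, instance id) - illustrative

def snapshot_statuses(path="/tmp/pipeline_statuses.json"):
    """Fetch the status of each watched instance and write the snapshot to disk."""
    statuses = []
    for name, version, instance in WATCHED:
        url = f"{SERVER}/pipelines/{name}/{version}/{instance}/status"
        try:
            statuses.append(requests.get(url, timeout=5).json())
        except requests.RequestException as err:
            statuses.append({"instance": instance, "error": str(err)})
    with open(path, "w") as out:
        json.dump(statuses, out, indent=2)

while True:
    snapshot_statuses()
    time.sleep(10)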

Not able to use stored gallery folder while running gstreamer with gvapython

Hi @nnshah1

Referring to our previous conversation in Updated Object Identification Sample, I need help with a related issue. I am doing face recognition with live camera streaming. When running with enrolling set to True in vas_identify.py, the gallery folder gets created inside the analytics Docker environment, so it is able to create a new gallery folder. But when I supply a prebuilt gallery folder (copied into the Docker environment by changing the Dockerfile) that contains a features folder and a gallery.json with manual labels for the faces, it somehow never reaches the gallery folder location, because in the Smart City GUI I get an UNKNOWN label even for faces that have labels.

But the same thing works fine with Video Analytics Serving; there it is able to generate images with the given labels for the enrolled faces.
I think vas_identify.py is somehow missing the part that object_identification.py takes care of in VA Serving.

Please help me with the issue.

opencv-contrib-python

I need to use opencv-contrib-python. Before, I used Open Visual Cloud to use the standard opencv-contrib-python, but now I can't.
Thank you

Some docker hub base images do not support audio detection

GStreamer framework base images are expected to include the
audio detection inference plugin libgstaudioanalytics.so. If this plugin is missing, the audio detection pipeline will not load (see the error message below) and the Video Analytics Serving service will not start.

{"levelname": "ERROR", "asctime": "2020-08-26 01:49:40,114", "message": "Failed to Load Pipeline from: /home/video-analytics-serving/pipelines/audio_detection/1/pipeline.json", "module": "pipeline_manager"}

Currently this plugin is only present in the DL Streamer audio preview, so it will not be in any base images obtained from Docker Hub.
Thus GStreamer images based on Open Visual Cloud or OpenVINO will exhibit this problem.

As a workaround you can configure the service to ignore initialization errors when you start it.

docker/run.sh -v /tmp:/tmp -e IGNORE_INIT_ERRORS=True

glib failure in gstreamer_pipeline.py

After running for 9 minutes:

PipelineStatus(avg_fps=43.18830125933299, avg_pipeline_latency=3.0989729501227146, elapsed_time=427.7101671695709, id=1, start_time=1590402984.9836986, state=<State.RUNNING: 2>)
Fatal Python error: deallocating None

Thread 0x00007ff174b7a700 (most recent call first):

Current thread 0x00007ff15ffff700 (most recent call first):

Thread 0x00007ff176b7e700 (most recent call first):
File "/usr/lib/python3/dist-packages/gi/overrides/GLib.py", line 585 in run
File "/home/vaserving/gstreamer_pipeline.py", line 45 in gobject_mainloop
File "/usr/lib/python3.6/threading.py", line 864 in run
File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007ff19e40b700 (most recent call first):
File "/usr/lib/python3/dist-packages/watchdog/utils/delayed_queue.py", line 65 in get
File "/usr/lib/python3/dist-packages/watchdog/observers/inotify_buffer.py", line 43 in read_event
File "/usr/lib/python3/dist-packages/watchdog/observers/inotify.py", line 129 in queue_events
File "/usr/lib/python3/dist-packages/watchdog/observers/api.py", line 146 in run
File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007ff19ec0c700 (most recent call first):
File "/usr/lib/python3/dist-packages/watchdog/observers/inotify_c.py", line 296 in read_events
File "/usr/lib/python3/dist-packages/watchdog/observers/inotify_buffer.py", line 59 in run
File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007ff19f40d700 (most recent call first):
File "/usr/lib/python3.6/threading.py", line 299 in wait
File "/usr/lib/python3.6/threading.py", line 551 in wait
File "/home/runva.py", line 123 in loop
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56 in run
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 69 in _worker
File "/usr/lib/python3.6/threading.py", line 864 in run
File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007ff19fc0e700 (most recent call first):
File "/usr/lib/python3.6/threading.py", line 299 in wait
File "/usr/lib/python3.6/queue.py", line 173 in get
File "/usr/lib/python3/dist-packages/watchdog/observers/api.py", line 360 in dispatch_events
File "/usr/lib/python3/dist-packages/watchdog/observers/api.py", line 199 in run
File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007ff1a040f700 (most recent call first):
File "/usr/lib/python3.6/threading.py", line 1072 in _wait_for_tstate_lock
File "/usr/lib/python3.6/threading.py", line 1056 in join
File "/home/rec2db.py", line 54 in loop
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56 in run
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 69 in _worker
File "/usr/lib/python3.6/threading.py", line 864 in run
File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

Thread 0x00007ff1a4da4b80 (most recent call first):
File "/usr/lib/python3.6/threading.py", line 1072 in _wait_for_tstate_lock
File "/usr/lib/python3.6/threading.py", line 1056 in join
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 152 in shutdown
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 611 in exit
File "/home/count-entrance.py", line 32 in connect
File "/home/count-entrance.py", line 77 in

Build fails on Windows 10 and WSL

The error below is observed while executing the command ./docker/build.sh after cloning the repo on my Windows 10 machine running WSL 1.0.

can't open file '/home/video-analytics-serving/tools/model_downloader': [Errno 2] No such file or directory

More details on the error and screenshots can be found here dlstreamer/dlstreamer#151 and dlstreamer/dlstreamer#151 (comment)

I am opening the issue here because it is related to the video-analytics-serving repo.

Pipeline doesn't reach cameras

I have rebuilt the Docker image with ./docker/build.sh --framework gstreamer --base openvino/ubuntu18_runtime:2020.4, and I also tried
./docker/build.sh --framework gstreamer --base openvino/ubuntu18_runtime:2021.1.

The container starts with:

/home/imt/area_overwatch/grabbing/run.sh -v /tmp:/tmp \
    --name vaserving \
    --network host \
    -e DISPLAY=$DISPLAY \
    -v "$HOME/.Xauthority:/root/.Xauthority:rw" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/dri \
    --models /home/imt/area_overwatch/grabbing/samples/models/intel/ \
    --pipelines /home/imt/area_overwatch/grabbing/pipelines

but I don't see the cameras from inside the Docker container; connections always end in ERROR.
(I also tried without --network host, using -p 554:554.)
Thank you

Default GStreamer docker image does not build

The DL Streamer Docker image no longer builds due to versioning issues in the OpenCV Python packages. You will see an error like this:

$ docker/build.sh
<snip>
Collecting opencv-python
  Downloading https://files.pythonhosted.org/packages/77/f5/49f034f8d109efcf9b7e98fbc051878b83b2f02a1c73f92bbd37f317288e/opencv-python-4.4.0.42.tar.gz (88.9MB)
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-n761d8ed/opencv-python/setup.py", line 9, in <module>
        import skbuild
    ModuleNotFoundError: No module named 'skbuild'

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-n761d8ed/opencv-python/
The command '/bin/sh -c apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y -q --no-install-recommends     libusb-1.0-0-dev libboost-all-dev libgtk2.0-dev python-yaml         clinfo libboost-all-dev libjson-c-dev     build-essential cmake ocl-icd-opencl-dev wget gcovr vim git gdb ca-certificates libssl-dev uuid-dev     libgirepository1.0-dev     python3-dev python3-wheel python3-pip python3-setuptools python-gi-dev python-yaml         libglib2.0-dev libgmp-dev libgsl-dev gobject-introspection libcap-dev libcap2-bin gettext         libx11-dev iso-codes libegl1-mesa-dev libgles2-mesa-dev libgl-dev gudev-1.0 libtheora-dev libcdparanoia-dev libpango1.0-dev libgbm-dev libasound2-dev libjpeg-dev     libvisual-0.4-dev libxv-dev libopus-dev libgraphene-1.0-dev libvorbis-dev         libbz2-dev libv4l-dev libaa1-dev libflac-dev libgdk-pixbuf2.0-dev libmp3lame-dev libcaca-dev libdv4-dev libmpg123-dev libraw1394-dev libavc1394-dev libiec61883-dev     libpulse-dev libsoup2.4-dev libspeex-dev libtag-extras-dev libtwolame-dev libwavpack-dev         libbluetooth-dev libusb-1.0.0-dev libass-dev libbs2b-dev libchromaprint-dev liblcms2-dev libssh2-1-dev libdc1394-22-dev libdirectfb-dev libssh-dev libdca-dev     libfaac-dev libfdk-aac-dev flite1-dev libfluidsynth-dev libgme-dev libgsm1-dev nettle-dev libkate-dev liblrdf0-dev libde265-dev libmjpegtools-dev libmms-dev     libmodplug-dev libmpcdec-dev libneon27-dev libopenal-dev libopenexr-dev libopenjp2-7-dev libopenmpt-dev libopenni2-dev libdvdnav-dev librtmp-dev librsvg2-dev     libsbc-dev libsndfile1-dev libsoundtouch-dev libspandsp-dev libsrtp2-dev libzvbi-dev libvo-aacenc-dev libvo-amrwbenc-dev libwebrtc-audio-processing-dev libwebp-dev     libwildmidi-dev libzbar-dev libnice-dev libxkbcommon-dev         libmpeg2-4-dev libopencore-amrnb-dev libopencore-amrwb-dev liba52-0.7.4-dev         libva-dev libxrandr-dev libudev-dev         && rm -rf /var/lib/apt/lists/*     && python3.6 -m pip install numpy opencv-python pytest' returned a non-zero code: 1

To work around this, you need to patch the DL Streamer source and then rebuild as follows. Replace /path/to with a directory of your choice.

$ git clone -b preview/audio-detect https://github.com/opencv/gst-video-analytics.git /path/to/gst-video-analytics
$ cd /path/to/gst-video-analytics
$ wget -q -O - https://patch-diff.githubusercontent.com/raw/opencv/gst-video-analytics/pull/100.patch | git apply -
$ cd /path/to/video-analytics-serving
$ docker/build.sh --base-build-context /path/to/gst-video-analytics --base-build-dockerfile /path/to/gst-video-analytics/docker/Dockerfile

Unable to play back recorded video with inference results displayed

Hello,
I was trying out the record_playback.py script in the /samples folder to record a video input and produce metadata.txt, which I can then use for playback with inference results displayed. When I play back the video I just recorded, without the metadata added, it plays back normally with no issue. This is the command line used:

python3 ./samples/record_playback/record_playback.py --playback --input-video-path ./samples/bottle_detection.mp4

But when I tried playing back the video with the metadata added, I get this error:

Segmentation fault

Using this command line:

python3 ./samples/record_playback/record_playback.py --playback --input-video-path ./samples/bottle_detection.mp4 --metadata-file-path ./metadata.txt

I am using an X server to display the playback video from the Linux subsystem on Windows. Any way to resolve this?

Error running run.sh on branch v0.4.0-beta

After following the instructions in the readme, I encountered an error when running ./docker/run.sh -v /tmp:/tmp: I get runtime errors complaining about missing modules.

This is after I ran the model downloader. I experienced similar errors when I ran run.sh without first running the model downloader; if running the downloader is necessary, then that step should also be added to the Getting Started steps in the readme.

To summarize

  • error when running run.sh before and after running model-downloader
  • if running the downloader is required it should be added to the getting started section in the readme

Here is the log output,

Running Video Analytics Serving Image: 'video-analytics-serving-gstreamer'
   Models: ''
   Pipelines: ''
   Framework: 'gstreamer'
   Environment: ''
   Volume Mounts: '-v /tmp:/tmp '
   Mode: 'SERVICE'
   Ports: '-p 8080:8080 '
   Name: 'video-analytics-serving-gstreamer'
   Network: ''
   Entrypoint: ''
   EntrypointArgs: '
   User: ''
   Devices: '--device /dev/dri '

+ docker run -it --rm -v /tmp:/tmp --device /dev/dri -p 8080:8080 --name video-analytics-serving-gstreamer video-analytics-serving-gstreamer
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "========================", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "Options for vaserving.py", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "========================", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "port == 8080", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "framework == gstreamer", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "pipeline_dir == pipelines", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "model_dir == models", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "network_preference == {}", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "max_running_pipelines == 1", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "log_level == INFO", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "config_path == /home/video-analytics-serving/vaserving/..", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "ignore_init_errors == False", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,865", "message": "========================", "module": "vaserving"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,866", "message": "==============", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,866", "message": "Loading Models", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,866", "message": "==============", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,866", "message": "Loading Models from Path /home/video-analytics-serving/models", "module": "model_manager"}
{"levelname": "ERROR", "asctime": "2020-12-18 11:54:26,866", "message": "Error Loading Model object_detection from: /home/video-analytics-serving/models: object_detection/1 is missing Model-Proc or Network", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,866", "message": "Loading Model: audio_detection version: 1 type: IntelDLDT from {'FP32': '/home/video-analytics-serving/models/audio_detection/1/FP32/aclnet.xml', 'FP16': '/home/video-analytics-serving/models/audio_detection/1/FP16/aclnet.xml', 'model-proc': '/home/video-analytics-serving/models/audio_detection/1/aclnet.json'}", "module": "model_manager"}
{"levelname": "ERROR", "asctime": "2020-12-18 11:54:26,866", "message": "Error Loading Model face_detection_retail from: /home/video-analytics-serving/models: face_detection_retail/1 is missing Model-Proc or Network", "module": "model_manager"}
{"levelname": "ERROR", "asctime": "2020-12-18 11:54:26,867", "message": "Error Loading Model emotion_recognition from: /home/video-analytics-serving/models: emotion_recognition/1 is missing Model-Proc or Network", "module": "model_manager"}
{"levelname": "ERROR", "asctime": "2020-12-18 11:54:26,867", "message": "Error Loading Model landmarks_regression from: /home/video-analytics-serving/models: landmarks_regression/1 is missing Model-Proc or Network", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,867", "message": "========================", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,867", "message": "Completed Loading Models", "module": "model_manager"}
{"levelname": "INFO", "asctime": "2020-12-18 11:54:26,867", "message": "========================", "module": "model_manager"}
{"levelname": "ERROR", "asctime": "2020-12-18 11:54:26,867", "message": "Error Starting VA Serving: Error Initializing Models", "module": "__main__"}

Opencv contrib

Please tell me the changes I need to make to build the Docker image with the OpenCV contrib library.

Thank you

Using gvapython in VA Serving Pipelines

I am trying to store some data directly from a Python script set in the GStreamer pipeline, but when I try to save, the script stops.
Please tell me how to do it.
Thank you
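For illustration, a gvapython module conventionally exposes a per-frame function that returns True; saving data from it could look roughly like the sketch below (the output path is arbitrary, and the frame.regions()/label()/confidence() calls follow the gstgva Python API, so verify them against your installed version):

import json

def process_frame(frame):
    """gvapython per-frame callback: append detected labels/confidences to a JSON-lines file."""
    results = [{"label": region.label(), "confidence": region.confidence()}
               for region in frame.regions()]
    if results:
        with open("/tmp/gvapython_results.jsonl", "a") as out:
            out.write(json.dumps(results) + "\n")
    return True  # keep the buffer flowing downstream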

Feature Request: What is also needed

Thank you for a wonderfully made product!

A few extras are really needed:

There is no way to enumerate all pipelines that have a status via the REST API. For example, it would be great if GET /pipelines/{name}/{version} returned the list of instance ids so their status can be queried, or if there were a separate query to read the list of ids.

It would be great to have /reload and /exit commands.

It would also be great to add CPU usage (via psutil) to the status data.

Thank you

mqtt

I am trying to connect to a local MQTT server via metapublishing, but the messages are not coming through.
If I use the "file" option I get normal output.
If I try to map port 1883 when starting Docker, I get:
docker: Error response from daemon: driver failed programming external connectivity on endpoint vaserving (c8500fbaa086ec460d88dcda03bf3e2d18a5fb81b1e4fa649a7aec756766addb): Error starting userland proxy: listen tcp 0.0.0.0:1883: bind: address already in use.

Unpredictable delay in receiving mqtt messages

Hi Team,
I am using RTSP as the input source and MQTT as the destination in the pipeline. The input video file is 30 FPS. I am using VLC to simulate the RTSP stream. The delay between frames is calculated; it is around 32 ms. The delay between MQTT messages is unpredictable and frequently varies from 0 to 150 ms. Since the frame delay is 32 ms, how can the delay between MQTT messages be less than 33 ms? Please help me understand this.

The code snippet used for calculation is below:

import json
import time

msg_t1 = 0

def on_message(client, userdata, msg):
    global msg_t1
    message = str(msg.payload.decode("utf-8"))
    message = json.loads(message)
    delay = (time.time() - msg_t1) * 1000
    print("delay in ms = ", delay)
    msg_t1 = time.time()

Thank you
Venkat

Some errors about sample request.

Sample request:

I'm trying to use your GitHub repository video-analytics-serving, but the service shows some errors when I try to run the sample code.
My environment is:
Intel Xeon Phi CPU7285; Ubuntu 18.04.3; Docker 19.03.2 build 6a30dfc.
0. It seems that I've successfully built the GStreamer and FFmpeg versions from your Dockerfile.

REPOSITORY                               TAG                 IMAGE ID            CREATED             SIZE
video_analytics_serving_gstreamer        latest              9f6cc71d9875        6 days ago          1.14GB
video_analytics_serving_ffmpeg           latest              63c8771c70f3        6 days ago          457MB
video_analytics_serving_ffmpeg_base      latest              dd5e7f92758e        6 days ago          220MB
<none>                                   <none>              c3532d0ed62c        6 days ago          4.44GB
video_analytics_serving_gstreamer_base   latest              15bbd8f30cfc        6 days ago          973MB
<none>                                   <none>              8668928d50bd        6 days ago          7.44GB
ubuntu                                   18.04               775349758637        4 weeks ago         64.2MB
<none>                                   <none>              eb772b2d5623        7 weeks ago         852MB
  1. When I run the GStreamer version and the sample POST:
curl 10.239.43.221:8080/pipelines/emotion_recognition/1 -X POST -H 'Content-Type: application/json' -d '{ "source": { "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/head-pose-face-detection-male.mp4?raw=true", "type": "uri" }, "destination": { "type": "file", "path": "/tmp/results1.txt", "format":"stream"}}'

I can find results1.txt at /tmp, but nothing is written into it.
The status is 'QUEUED' for several minutes, then the Docker container crashes.

curl 10.239.43.221:8080/pipelines/emotion_recognition/1/1/status -X GET
{
  "avg_fps": 0,
  "elapsed_time": 128.1189136505127,
  "id": 1,
  "start_time": 1574865363.3796282,
  "state": "QUEUED"
}
(gst-plugin-scanner:12): GStreamer-WARNING **: 13:25:53.775: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstpython.cpython-36m-x86_64-linux-gnu.so': libpython3.6m.so.1.0: cannot open shared object file: No such file or directory
{"levelname": "ERROR", "asctime": "2019-11-27 13:25:55,487", "message": "Error loading FFmpeg: ffmpeg not installed\n", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-11-27 13:25:55,488", "message": "Loading Pipelines from Config Path /home/video-analytics/app/server/openapi_server/../../../pipelines/gstreamer", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-11-27 13:25:55,697", "message": "Completed Loading Pipelines", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-11-27 13:25:55,698", "message": "Loading Models from Path /home/video-analytics/models", "name": "ModelManager"}
{"levelname": "INFO", "asctime": "2019-11-27 13:25:55,714", "message": "Completed Loading Models", "name": "ModelManager"}
{"levelname": "INFO", "asctime": "2019-11-27 13:25:56,279", "message": "Starting Tornado Server on port: 8080", "name": "main"}
{"levelname": "INFO", "asctime": "2019-11-27 13:26:57,836", "message": "Creating Instance of Pipeline emotion_recognition/1", "name": "PipelineManager"}
{"levelname": "ERROR", "asctime": "2019-11-27 13:26:57,937", "message": "Error on Pipeline <built-in function id>: gst-resource-error-quark: metapublish initialization failed (2)", "name": "GSTPipeline"}
{"levelname": "INFO", "asctime": "2019-11-27 13:27:18,242", "message": "Creating Instance of Pipeline emotion_recognition/1", "name": "PipelineManager"}
./docker-entrypoint.sh: line 3:     6 Segmentation fault      (core dumped) python3 -m openapi_server $@

(gst-plugin-scanner:12): GStreamer-WARNING **: 14:35:49.573: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstpython.cpython-36m-x86_64-linux-gnu.so': libpython3.6m.so.1.0: cannot open shared object file: No such file or directory
{"levelname": "ERROR", "asctime": "2019-11-27 14:35:51,272", "message": "Error loading FFmpeg: ffmpeg not installed\n", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-11-27 14:35:51,273", "message": "Loading Pipelines from Config Path /home/video-analytics/app/server/openapi_server/../../../pipelines/gstreamer", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-11-27 14:35:51,474", "message": "Completed Loading Pipelines", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-11-27 14:35:51,475", "message": "Loading Models from Path /home/video-analytics/models", "name": "ModelManager"}
{"levelname": "INFO", "asctime": "2019-11-27 14:35:51,490", "message": "Completed Loading Models", "name": "ModelManager"}
{"levelname": "INFO", "asctime": "2019-11-27 14:35:52,070", "message": "Starting Tornado Server on port: 8080", "name": "main"}
{"levelname": "INFO", "asctime": "2019-11-27 14:36:03,093", "message": "Creating Instance of Pipeline emotion_recognition/1", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-11-27 14:38:19,886", "message": "Creating Instance of Pipeline emotion_recognition/1", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-11-27 14:39:05,668", "message": "Creating Instance of Pipeline object_detection/1", "name": "PipelineManager"}
./docker-entrypoint.sh: line 3:     6 Segmentation fault      (core dumped) python3 -m openapi_server $@
  2. Then I ran the FFmpeg pipelines. The Docker container doesn't crash, but I still cannot get the result.
    The status is 'RUNNING' for about 10 seconds, then it becomes 'ERROR'.
face@fly:/tmp$ curl 10.239.43.221:8080/pipelines/emotion_recognition/1/1/status -X GET
{
  "avg_fps": 0,
  "elapsed_time": 12.269392251968384,
  "id": 1,
  "start_time": 1574862817.2551162,
  "state": "RUNNING"
}
face@fly:/tmp$ curl 10.239.43.221:8080/pipelines/emotion_recognition/1/1/status -X GET
{
  "avg_fps": 0,
  "elapsed_time": 14.299017429351807,
  "id": 1,
  "start_time": 1574862817.2551162,
  "state": "ERROR"
}
  3. Finally, I went into the Docker container and just followed /home/video-analytics/samples/README.md.
    After running python3 sample.py, it returns the error below:
root@d8a13277aeb8:/home/video-analytics/samples# python3 sample.py
Launching with options:
Namespace(destination='/home/video-analytics/samples/results.txt', pipeline='object_detection', repeat=1, source='file:///home/video-analytics/samples/pinwheel.ts', verbose=True)
Starting Pipeline: http://localhost:8080/pipelines/object_detection/1
Traceback (most recent call last):
  File "sample.py", line 215, in <module>
    main()
  File "sample.py", line 212, in main
    launch_pipelines(options)
  File "sample.py", line 198, in launch_pipelines
    verbose=options.verbose)
  File "sample.py", line 81, in read_detection_results
    with open(destination) as detection_file:
FileNotFoundError: [Errno 2] No such file or directory: '/home/video-analytics/samples/results.txt'

Would you help me resolve these issues?

dconf permission error messages

When built with the OpenVINO base image, the following error message is seen during REST service pipeline processing. Service operation is unaffected.

(python3:1): dconf-CRITICAL **: 13:20:53.555: unable to create directory '/home/openvino/.cache/dconf': Permission denied.  dconf will not work properly.

Plot the bounding box using timestamp

Hi,

I am using RTSP as input and MQTT as output in the pipeline. The RTSP stream is simulated using VLC from recorded video files. The input stream is received in my code using OpenCV. When I run the pipeline, I receive the messages in the MQTT callback. I would like to use the timestamp from the MQTT message to overlay the bounding box coordinates on the frame.
What is the timestamp parameter received in the MQTT message?
How can I match the timestamp from the MQTT message with the timestamp of the video stream?
Any help is welcome.

Thanks,
Ram prasad
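One rough way to approach the overlay part is sketched below: parse each MQTT payload and draw its normalized bounding boxes onto the corresponding OpenCV frame. The "objects"/"detection"/"bounding_box" field names follow the DL Streamer JSON schema but should be verified against your actual messages, and the "timestamp" field is normally the stream's presentation time in nanoseconds rather than wall-clock time, so it must be mapped onto your capture clock before frames can be matched.

import json
import cv2

def draw_objects(frame, payload):
    """Overlay the normalized bounding boxes from one metadata message onto an OpenCV frame."""
    height, width = frame.shape[:2]
    for obj in json.loads(payload).get("objects", []):
        box = obj["detection"]["bounding_box"]   # normalized 0..1 coordinates
        top_left = (int(box["x_min"] * width), int(box["y_min"] * height))
        bottom_right = (int(box["x_max"] * width), int(box["y_max"] * height))
        cv2.rectangle(frame, top_left, bottom_right, (0, 255, 0), 2)
    return frame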

Kafka and mqtt examples?

I looked through the repo but couldn't find any reference for how to use Kafka and MQTT with the service. When trying to run with the following configuration (please excuse the Python scaffolding, but you should get the idea):

{
    "source": {"uri": self.source, "type": "uri"},
    "destination": {
        "type": "kafka",
        "host": f"{protocol}://{KAFKA_HOSTNAME}",
        "topic": f"/VAS/{self.uid}",
    },
}

where protocol is either http or kafka, I get the following error in the video-analytics-serving log:

video-analytics-service    | %4|1593223459.081|BROKER|rdkafka#producer-1| [thrd:app]: Broker name "kafka://kafka:9092" parse error: unsupported protocol "KAFKA"

or

video-analytics-service %4|1593222222.849|BROKER|rdkafka#producer-1| [thrd:app]: Broker name "http://kafka:9092" parse error: unsupported protocol "HTTP"
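librdkafka expects a plain host:port broker name with no URL scheme, so a destination along the lines of the sketch below may be closer to what the service expects (the field names should still be checked against the destination schema, and note that Kafka topic names normally cannot contain "/"):

KAFKA_HOSTNAME = "kafka"   # illustrative
request_body = {
    "source": {
        "uri": "file:///home/video-analytics-serving/samples/bottle_detection.mp4",  # illustrative
        "type": "uri"
    },
    "destination": {
        "type": "kafka",
        # plain host:port - no "kafka://" or "http://" scheme
        "host": f"{KAFKA_HOSTNAME}:9092",
        # unlike MQTT topics, Kafka topic names normally cannot contain "/"
        "topic": "vas_object_events",
    },
}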

Error opening file /samples/bottle_detection.mp4: No such file or directory

I am trying to POST the sample video from a local file to the object detection endpoint, but I get this error:

{"levelname": "ERROR", "asctime": "2021-02-12 11:31:56,109", "message": "Error on Pipeline 1: gst-resource-error-quark: Resource not found. (3): ../gst/gio/gstgiosrc.c(324): gst_gio_src_get_stream (): /GstPipeline:pipeline7/GstURISourceBin:source/GstGioSrc:source:\nCould not open location file:///samples/bottle_detection.mp4 for reading: Error opening file /bottle_detection.mp4: No such file or directory", "module": "gstreamer_pipeline"}

This is the JSON request that I sent:

{
  "source": {
    "uri": "file://samples/bottle_detection.mp4",
    "type": "uri"
  },
  "destination": {
    "type": "file",
    "path": "/tmp/test.json",
    "format": "json-lines"
  }
}

Is there a way to resolve it?
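A file:// URI needs three slashes followed by an absolute path inside the container; with "file://samples/...", "samples" is parsed as the host and the path collapses to /bottle_detection.mp4, which matches the error. A corrected request might look like the sketch below, assuming the sample sits in the usual /home/video-analytics-serving/samples location inside the container:

import json
import requests

request_body = {
    "source": {
        # three slashes plus an absolute path inside the container (the path is an assumption)
        "uri": "file:///home/video-analytics-serving/samples/bottle_detection.mp4",
        "type": "uri"
    },
    "destination": {"type": "file", "path": "/tmp/test.json", "format": "json-lines"}
}
requests.post(
    "http://localhost:8080/pipelines/object_detection/1",
    data=json.dumps(request_body),
    headers={"Content-Type": "application/json"},
    timeout=30,
)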

Update required in requirements.txt

(For tag: v0.1.0-alpha)
The Connexion package needs to be updated from 2.0.2 to 2.6.0 in the requirements.txt file (https://github.com/intel/video-analytics-serving/blob/v0.1.0-alpha/service/app/server/requirements.txt).
The Werkzeug Python package was updated on 7th Feb; this package is used by Connexion internally.
The image will build without any error, but the container won't run without this change. Just replace the first line in the requirements.txt file with "connexion == 2.6.0".

build.sh not working out of the box.

Will be investigating this further, but as of now, the instructions given in the README.md and samples/README.md do not seem to work out-of-the-box:

README.md

Building

To get started, build the service as a standalone component by executing the following command:

./docker/build.sh
samples/README.md
  1. Build the service container with sample pipelines:
    ~/video-analytics-serving$ ./docker/build.sh
    

When running build.sh the following error is thrown:

./docker/build.sh: line 165: ${MODELS^^}: bad substitution

I'm assuming the necessary flags aren't being set, but in that case I believe the READMEs must be updated. I will update this thread later tonight after some code reading.

Intel GPU is not working in Open Visual Cloud

I have tried using the GPU (Intel® UHD Graphics 630 (CFL GT2)) and processor (Intel® Core™ i7-8700 CPU @ 3.20GHz × 12) with video-analytics-serving (changing the Docker image from Xeon to Xeon E3), and it works. But when I tried to run the Smart City sample on the GPU, it does not work. I also raised the issue here: OpenVisualCloud/Smart-City-Sample#736
I also tried changing the VA Serving version from 0.3.0-alpha to 0.3.1.1-alpha in the Smart City sample, but it still does not work.

Updated Object Identification Sample

@nnshah1 hey!
Working on the Smart City sample, I realised that it would be beneficial if I could understand how the platforms are built separately. I see that building the pipeline environment inside the Smart City sample doesn't take much time, while this GStreamer container takes more than 5 hours to build. Why is that, and is there any way to avoid it?

Also, for the re-identification models, I observe that we are using !gvaclassify, but can you help me understand how one should write the .json file for the model? For example, I see that in the emotion-recognition model you have used this input preproc: https://github.com/intel/video-analytics-serving/blob/59fdcba3e7b631f391cf5654b30f78d56585411b/models/emotion_recognition/1/emotions-recognition-retail-0003.json#L3-L8
Can you tell me which layer_name we are referencing here? (I believe the previous model's.)

Also, in the output_postproc
https://github.com/intel/video-analytics-serving/blob/59fdcba3e7b631f391cf5654b30f78d56585411b/models/emotion_recognition/1/emotions-recognition-retail-0003.json#L9-L23
we have used some sort of converter; what is this?

My aim is to use the pipeline to run the person re-identification models. Could you help me understand how I can do that?

gvawatermark not displaying overlay bounding box on output video

Hello,

I have been experiencing difficulties outputting a video with bounding boxes. I created a custom test pipeline with the gvawatermark element added to the pipeline template to overlay bounding boxes on the video output. However, after multiple element arrangements within the template, I was still unable to get the desired output.

This is the template format I used:

{
	"type": "GStreamer",
	"template": ["urisourcebin name=source ! tee name=t ! queue",
				" ! decodebin ! video/x-raw ! videoconvert name=videoconvert ! queue",
				" ! gvadetect model={models[object_detection][1][network]} name=detection",
				" ! gvametaconvert name=metaconvert ! queue ! gvametapublish name=destination",
				" ! gvawatermark ! appsink name=appsink",
				" t. ! queue ! qtdemux ! splitmuxsink name=splitmuxsink"
				],
	"description": "Object Detection TEST PIPELINE"
}

Update samples/README.md

In samples/README.md, the step
# ./sample.py --retries 3
should be
# ./sample.py --repeat 3
Since, according to sample.py, there is no retries option.

Build error with Edgex bridge

I followed the instructions here to build the EdgeX bridge: https://github.com/intel/video-analytics-serving/tree/master/samples/edgex_bridge
I am getting the following build error:
$ ./samples/edgex_bridge/fetch_edgex.sh
./samples/edgex_bridge/fetch_edgex.sh: line 6: cd: /home/nesubuntu207/Neethu/video-analytics-serving/samples/edgex_bridge/../../edgex: No such file or directory
Working folder: /home/nesubuntu207/Neethu/video-analytics-serving
3e2b35463a7ec6c6031721aeecb73f074c1f4926618851c9285d16ab174a00b0
dev_mqtt
Cloning into 'developer-scripts'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (27/27), done.
remote: Total 2054 (delta 18), reused 9 (delta 7), pack-reused 2018
Receiving objects: 100% (2054/2054), 547.98 KiB | 1.40 MiB/s, done.
Resolving deltas: 100% (1417/1417), done.
./samples/edgex_bridge/fetch_edgex.sh: line 22: cd: ./developer-scripts/releases/nightly-build/compose-files/source: No such file or directory
make: *** No rule to make target 'compose'. Stop.
cp: cannot stat '../docker-compose-nexus-no-secty-mqtt.yml': No such file or directory
Next steps:
./docker/build.sh
./docker/run.sh --dev
python3 samples/edgex_bridge/edgex_bridge.py --topic object_events --generate
docker-compose up -d
python3 samples/edgex_bridge/edgex_bridge.py --topic object_events

ONNX build error while building VAS

Unable to build the ONNX package while running the build command for VAS using the following:
VAS_VERSION=v0.3.1-alpha
vas:
rm -rf video-analytics-serving &&
git clone https://github.com/intel/video-analytics-serving &&
cd video-analytics-serving/docker &&
git checkout ${VAS_VERSION} &&
./build.sh --framework gstreamer
cd ../..
docker build -t video-analytic:paho .

Attached is the error log.
I tried on two different Linux machines running Ubuntu 18 and 20 and got the same error.
error_log.txt

Timestamp issue

vaserving/gstreamer_pipeline.py, line 274:

self._stream_base = times["segment.time"]

This line should be:

self._stream_base = times["stream_time"]

times["segment.time"] and times["stream_time"] are in different time units, so it does not make sense to subtract them as is done in line 285:

adjusted_time = self._real_base + (times["stream_time"] - self._stream_base)

MQTT

Do we have plans to add support for publishing inference results over MQTT?

Running the service shows some warning/error.


Information

Hi,

I am very glad to see and try this example!

Before about the the detail, my system has/runs:

  • Ubuntu 18.04.3
  • Docker version 19.03.1, build 74b1e89
  • docker-compose version 1.24.0, build 0aa59064
  • groups bus710 => bus710 : bus710 adm dialout cdrom sudo dip plugdev lpadmin sambashare docker
  • Please let me know if you need any additional information.

Detail

I am trying to run the service after a successful build with this output:

$ ./build.sh

...

Step 102/106 : COPY ./models/ /home/video-analytics/models/    
 ---> e04b7172251b
Step 103/106 : COPY ./pipelines /home/video-analytics/pipelines
 ---> 73e90f7f0b9f
Step 104/106 : COPY   docker-entrypoint.sh /home/video-analytics/
 ---> 804bfad669d7
Step 105/106 : WORKDIR /home/video-analytics
 ---> Running in c90498a12cc9
Removing intermediate container c90498a12cc9
 ---> 7169b5e0fa0e
Step 106/106 : ENTRYPOINT ["./docker-entrypoint.sh", "--framework", "ffmpeg" , "--pipeline_dir", "pipelines/ffmpeg"]
 ---> Running in cd52ce752303
Removing intermediate container cd52ce752303
 ---> 2e24b01a93ad
Successfully built 2e24b01a93ad
Successfully tagged video_analytics_serving_ffmpeg:latest

The command to run the service gave me the following error:

$ sudo docker run -e http_proxy=$http_proxy -e https_proxy=$https_proxy -p 8080:8080 -v /tmp:/tmp --rm video_analytics_service_gstreamer

Unable to find image 'video_analytics_service_gstreamer:latest' locally
docker: Error response from daemon: pull access denied for video_analytics_service_gstreamer, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.

However, run.sh gave me this log, which seems okay to me:

$ ./run.sh

(gst-plugin-scanner:12): GStreamer-WARNING **: 20:25:57.235: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstopengl.so': libEGL.so.1: cannot open
 shared object file: No such file or directory

(gst-plugin-scanner:12): GStreamer-WARNING **: 20:25:57.237: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstcurl.so': libcurl-gnutls.so.4: canno
t open shared object file: No such file or directory

(gst-plugin-scanner:12): GStreamer-WARNING **: 20:25:57.273: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstpython.cpython-36m-x86_64-linux-gnu.
so': libpython3.6m.so.1.0: cannot open shared object file: No such file or directory
/home/video-analytics/app/server/openapi_server/__main__.py:43: PyGIWarning: Gst was imported without specifying a version first. Use gi.require_version('Gst', '1.0')
 before import to ensure that the right version gets loaded.
  from gi.repository import Gst, GObject
{"levelname": "ERROR", "asctime": "2019-08-07 20:25:57,357", "message": "Error loading FFmpeg: ffmpeg not installed\n", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-08-07 20:25:57,357", "message": "Loading Pipelines from Config Path /home/video-analytics/app/server/openapi_server/../../commo
n/../../pipelines/gstreamer", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-08-07 20:25:57,359", "message": "Completed Loading Pipelines", "name": "PipelineManager"}
{"levelname": "INFO", "asctime": "2019-08-07 20:25:57,359", "message": "Loading Models from Config Path /home/video-analytics/models", "name": "ModelManager"}
{"levelname": "INFO", "asctime": "2019-08-07 20:25:57,361", "message": "Completed Loading Models", "name": "ModelManager"}
{"levelname": "INFO", "asctime": "2019-08-07 20:25:57,469", "message": "Starting Tornado Server on port: 8080", "name": "main"}

This ticket is posted only to share the circumstances; the issue may not need fixing if it is not considered severe.

Bad Docker image after 2021.1

I am trying to make a new image and use gvatrack there.
If I use ./build, gvatrack looks for libopencv_video.so.4.4 and does not start.
If I build with ./docker/build.sh --framework gstreamer --base openvino/ubuntu18_data_dev, I can't start VA Serving within the container.
If I use ./docker/build.sh --framework gstreamer --base openvisualcloud/xeone3-ubuntu1804-analytics-gst, gvatrack fails looking for the following:

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstGvaTrack:gvatrack0: tracker intitialization failed
Additional debug info:
/home/gst-video-analytics/gst/elements/gvatrack/gstgvatrack.c(159): gst_gva_track_set_caps (): /GstPipeline:pipeline0/GstGvaTrack:gvatrack0:

dlopen() failed: libopencv_video.so.4.3: cannot open shared object file: No such file or directory

ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...

Please tell me the right way.
