
acap3-examples's Introduction

Important

This repository contains examples for the ACAP version 3 SDK, which is not the latest version and therefore might not receive updates, bug fixes, or answers on tickets as frequently.

It's recommended to move to the ACAP version 4 SDK, which is the active track:

What is AXIS Camera Application Platform?

AXIS Camera Application Platform (ACAP) is an open application platform that enables members of the Axis Application Development Partner (ADP) Program to develop applications that can be downloaded and installed on Axis network cameras and video encoders. ACAP makes it possible to develop applications for a wide range of use cases:

  • Security applications that improve surveillance systems and facilitate investigation.
  • Business intelligence applications that improve business efficiency.
  • Camera feature plug-ins that add value beyond the Axis product's core functionality.

ACAP SDK version 3

ACAP is Axis' own open platform for applications that run on board an Axis product. If you are new to ACAP, start by learning more about the platform:

Getting started with the repository

This repository contains a set of application examples which aim to enrich the developer's analytics experience. All examples use the Docker framework, and each has a README file in its directory giving an overview, the example directory structure, and step-by-step instructions on how to run the application on the camera.

Example applications

Below is the list of examples available in the repository.

  • axevent
    • The example code is written in C and illustrates both how to subscribe to different events and how to send an event.
  • axoverlay
    • The example code is written in C and illustrates how to draw plain boxes and text as overlays in a stream.
  • hello-world
    • The example code is written in C and shows how to build a simple hello world application.
  • licensekey
    • The example code is written in C and illustrates how to check the license key status.
  • object-detection
    • The example code focuses on object detection, cropping and saving detected objects into JPEG files.
  • object-detection-cv25
    • This example is very similar to object-detection, but is designed for AXIS CV25 devices.
  • reproducible-package
    • An example of how to create a reproducible application package.
  • tensorflow-to-larod
    • This example covers model conversion, model quantization, image formats and custom models.
  • tensorflow-to-larod-artpec8
    • This example is very similar to tensorflow-to-larod, but is designed for AXIS ARTPEC-8 devices.
  • tensorflow-to-larod-cv25
    • This example is very similar to tensorflow-to-larod, but is designed for AXIS CV25 devices.
  • using-opencv
    • This example covers how to build, bundle and use OpenCV with ACAP.
  • utility-libraries
    • These examples cover how to build, bundle and use external libraries with ACAP.
  • vdo-larod
    • This example illustrates how to capture frames from the vdo service and run machine learning inference on them using the larod service.
  • vdo-opencl-filtering
    • This example illustrates how to capture frames from the vdo service, access the received buffer, and finally perform a GPU accelerated Sobel filtering with OpenCL.
  • vdostream
    • The example code is written in C and starts a vdo stream, then illustrates how to continuously capture frames from the vdo service and access the received buffer contents as well as the frame metadata.

Docker Hub image

The ACAP SDK image can be used as a basis for custom-built images to run your application, or as a development environment inside the container. The image is public and free for anyone to use.

  • ACAP SDK This image is based on Ubuntu and contains the environment needed for building an AXIS Camera Application Platform (ACAP) application. This includes all tools for building and packaging an ACAP 3 application as well as API components (header and library files) needed for accessing different parts of the camera firmware.
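The build flow used throughout the examples can be sketched in a Dockerfile. This is a minimal sketch, not taken from the repository: the `axisecp/acap-sdk` image name and the `3.5-armv7hf-ubuntu20.04` tag are assumptions to verify on Docker Hub, while the `create-package.sh` step matches the build logs shown in the issues further down.

```Dockerfile
# Hypothetical base image tag; pick one matching your SDK version and target
# architecture (e.g. armv7hf) from Docker Hub.
FROM axisecp/acap-sdk:3.5-armv7hf-ubuntu20.04

# Copy the application sources into the container and build the .eap package.
WORKDIR /opt/app
COPY . /opt/app
RUN . /opt/axis/acapsdk/environment-setup* && create-package.sh
```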

Long term support (LTS)

Examples for older versions of the ACAP SDK can be found here:

License

Apache 2.0

acap3-examples's People

Contributors

carlcn, corallo, daniel-falk, ecosvc-dockerhub, entor, hussanm, jeanettl, joakimr-axis, johan-bjareholt, johanxmodin, kimraxis, lukgiax, marbali8, mattias-kindborg-at-work, mikaelli-axis, mirzamah, pataxis, petterwa, prashanthma29, renovate[bot], shreyasatwork, srefshauge, theodorag, toveb-axis

acap3-examples's Issues

Creation of new TPU model

Hello Axis,

We are working on an ACAP application for the Q1615 Mark 3, using AI.
We are following the instructions found in the official Axis repo.
https://github.com/AxisCommunications/acap3-examples/tree/master/vdo-larod

We are able to use the simple TPU detection model suggested there (detect.larod at 300x300); however, we have not been able to compile our own TPU models with higher resolutions or different aspect ratios.
Is there any recommendation or guidance from the AXIS AI team?

Which AI architecture should we train our model on? (MobileNet V2, MobileDet, YOLO, etc...)
Which compiler and quantization technique?

Thanks and regards,


The issue with TPU model creation is: "edgetpu_compiler fails for higher resolution or 16:9 aspect ratio fully quantized model with error message - Internal compiler error. Aborting!"
Edge TPU Compiler version: 15.0.340273435
Tensorflow version: 1.15.0
Resolutions of the model: 480x480, 480x270, 400x225, 340x340
Architecture: SSD_MOBILEDET_EDGETPU - Int8 Quantize Aware Training
Tflite converter used: TOCO

AxHttp - CORS

I'm hosting some CGI endpoints using AxHTTP. I would like to be able to allow CORS requests coming from another host. Is there a way to enable this? I'm happy to create a POSTINSTALLSCRIPT to modify the Apache config if that is needed.
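One generic approach, sketched below, is to have the CGI handler emit the CORS headers itself before the body, which avoids touching the Apache config for simple cases. This is a plain-CGI sketch, not an Axis-documented mechanism, and `build_cors_headers` is a hypothetical helper:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: fill `buf` with the CORS + content-type header block
 * that a CGI handler would print before the response body on each request.
 * Returns the number of bytes written (as snprintf does). */
static int build_cors_headers(char *buf, size_t len, const char *origin) {
    return snprintf(buf, len,
                    "Access-Control-Allow-Origin: %s\r\n"
                    "Access-Control-Allow-Methods: GET, POST, OPTIONS\r\n"
                    "Access-Control-Allow-Headers: Content-Type\r\n"
                    "Content-Type: application/json\r\n"
                    "\r\n",
                    origin);
}
```

An OPTIONS preflight request would need the same headers; echoing a known origin rather than `*` is safer when credentials are involved.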

OpenCV with OpenCL on Artpec-7

Dear all,

I have an issue with OpenCL on an ARTPEC-7 camera. Are there any settings that need to be enabled on the camera?

Under /dev I see a camgpu device. Is that the OpenCL device?

Best, Nejc

Accessing camera’s built-in motion detection events in ACAP

Hi,

I'm interested in whether it is possible to get the output of the camera's motion detection events using ACAP in real time as they occur, and use them in a custom application.

I saw the OpenCV example in this repo that performs motion detection using BGS (https://github.com/AxisCommunications/acap3-examples/tree/master/using-opencv), and that is really great, but I'm interested in whether it is possible to leverage the camera's built-in motion detection to reduce the processing requirements.

Thank you for your help!

tensorflow-to-larod example getting model format incorrect during runtime

Hi all,

We have downloaded the latest tensorflow-to-larod example and completed the whole build process with no errors. We started running the .eap file on the camera but received the following errors:


It said the model format was incorrect and couldn't be loaded; a "Failed model format type check" error, to be exact. We noticed that the older version of this example seemed to do a conversion from converted_model_edgetpu.tflite to converted_model.larod before building the full .eap file. This was done with a larod-convert.py script described in the Makefile, but we were not able to locate this file inside the docker image/directories.

In this new example, that process seems to be gone. Only the converted_model_edgetpu.tflite model file is created, and then we ran into this "Failed model format type check" error.

We had other problems with the previous versions of these examples, but at least we were able to get the ACAP up and running on the camera. We wanted to try the new updates, but now we can't even get the ACAP to run anymore.

Are we missing any update? Other examples also had similar problems. Please let us know what can be done to resolve this.

Thanks

Help required for HTTP API (CGI endpoints)

I am trying to deploy an app on an Axis camera (AXIS Q1798-LE). I am trying to incorporate my CGI endpoints by running the given sample, but I am getting a 403 Forbidden response.
Here is my manifest.json

{
    "schemaVersion": "1.1",
    "acapPackageConf": {
        "setup": {
            "appName": "main",
            "vendor": "vendor",
            "embeddedSdkVersion": "3.0",
            "user": {
                "username": "sdk",
                "group": "sdk"
            },
            "runMode": "respawn",
            "version": "0.0.1"
        },
        "configuration": {
            "settingPage": "index.html",
            "httpConfig": [{
                    "type": "fastCgi",
                    "name": "/test1.cgi",
                    "access": "viewer"
                },
                {
                    "type": "fastCgi",
                    "name": "/test2.cgi",
                    "access": "viewer"
                }
            ]
        }
    }
}

Any idea what is causing this?

Model format incorrect in Object-Detection example

object_detection_app_1_0_0_armv7hf.zip
Issue description
We cloned the most up-to-date acap3-examples and tried the object-detection example.
We compiled the ACAP with the command ./build_acap.sh object_detection_acap:1.0 and
the ACAP was built successfully.
But then we got the following error message when running on an Axis TPU camera:
----- Contents of SYSTEM_LOG for 'object_detection' -----
2021-07-14T12:23:36.307+08:00 axis-b8a44f0ec925 [ INFO ] object_detection[0]: starting object_detection
2021-07-14T12:23:36.342+08:00 axis-b8a44f0ec925 [ INFO ] object_detection[15686]: Starting ...
2021-07-14T12:23:36.450+08:00 axis-b8a44f0ec925 [ INFO ] object_detection[15686]: Creating VDO image provider and creating stream 480 x 300
2021-07-14T12:23:36.450+08:00 axis-b8a44f0ec925 [ INFO ] object_detection[15686]: Dump of vdo stream settings map =====
2021-07-14T12:23:36.450+08:00 axis-b8a44f0ec925 [ INFO ] object_detection[15686]: chooseStreamResolution: We select stream w/h=480 x 300 based on VDO channel info.
2021-07-14T12:23:36.553+08:00 axis-b8a44f0ec925 [ INFO ] object_detection[716]: Last message 'Dump of vdo stream s' repeated 1 times, suppressed by syslog-ng on axis-b8a44f0ec925
2021-07-14T12:23:36.616+08:00 axis-b8a44f0ec925 [ INFO ] object_detection[15686]: Setting up larod connection with chip 4 and model /usr/local/packages/object_detection/model/converted_model.tflite
2021-07-14T12:23:36.759+08:00 axis-b8a44f0ec925 [ ERR ] object_detection[15686]: setupLarod: Unable to load model: Could not load model: Failed model format type check, expected a TFLite model converted to .larod format

System setup

  • Device:
  • Device Firmware version: 10.5.2
  • SDK version :3.4

Logs
Jul 14 12:37:36 axis-b8a44f0ec925 object_detection[17200]: Starting ...
Jul 14 12:37:36 axis-b8a44f0ec925 object_detection[17200]: chooseStreamResolution: We select stream w/h=480 x 300 based on VDO channel info.
Jul 14 12:37:36 axis-b8a44f0ec925 object_detection[17200]: Creating VDO image provider and creating stream 480 x 300
Jul 14 12:37:36 axis-b8a44f0ec925 object_detection[17200]: Dump of vdo stream settings map =====
Jul 14 12:37:36 axis-b8a44f0ec925 object_detection[17200]: Dump of vdo stream settings map =====
Jul 14 12:37:36 axis-b8a44f0ec925 object_detection[17200]: Setting up larod connection with chip 4 and model /usr/local/packages/object_detection/model/converted_model.tflite
Jul 14 12:37:36 axis-b8a44f0ec925 larod[1998]: Created a new session ID: 7, client: :1.476
Jul 14 12:37:36 axis-b8a44f0ec925 larod[1998]: Session 7: Failed model format type check, expected a TFLite model converted to .larod format
Jul 14 12:37:36 axis-b8a44f0ec925 object_detection[17200]: setupLarod: Unable to load model: Could not load model: Failed model format type check, expected a TFLite model converted to .larod format
Jul 14 12:37:36 axis-b8a44f0ec925 larod[1998]: Session 7 killed since client's (:1.476) connection has been lost
Jul 14 12:37:39 axis-b8a44f0ec925 dbus-daemon[340]: [system] Activating via systemd: service name='com.axis.BasicDeviceInfo1' unit='basic-device-info.service' requested by ':1.478' (uid=155 pid=17236 comm="basicdeviceinfo.cgi ")
Jul 14 12:37:39 axis-b8a44f0ec925 systemd[1]: Starting Basic device information...
Jul 14 12:37:39 axis-b8a44f0ec925 dbus-daemon[340]: [system] Successfully activated service 'com.axis.BasicDeviceInfo1'
Jul 14 12:37:39 axis-b8a44f0ec925 systemd[1]: Started Basic device information.
Jul 14 12:37:39 axis-b8a44f0ec925 dbus-daemon[340]: [system] Activating via systemd: service name='com.axis.ApiDiscovery1' unit='dbus-com.axis.ApiDiscovery1.service' requested by ':1.481' (uid=155 pid=17245 comm="apidiscovery.cgi ")
Jul 14 12:37:39 axis-b8a44f0ec925 systemd[1]: Starting API Discovery service...
Jul 14 12:37:39 axis-b8a44f0ec925 dbus-daemon[340]: [system] Successfully activated service 'com.axis.ApiDiscovery1'
Jul 14 12:37:39 axis-b8a44f0ec925 systemd[1]: Started API Discovery service.
Jul 14 12:37:39 axis-b8a44f0ec925 dbus-daemon[340]: [system] Activating via systemd: service name='com.axis.FirmwareManager1' unit='dbus-com.axis.FirmwareManager1.service' requested by ':1.484' (uid=155 pid=17258 comm="firmwaremanagement.cgi ")

Install EAP via SSH

How do we install our .eap files onto the camera via SSH?
I've seen some references to eap-install.sh, but I don't know where to get hold of it.

LAROD_ERROR_LOAD_MODEL

I've successfully converted the SSD MobileNet V2 FPNLite 320x320 model from here to a quantized TF Lite. Then I converted the TF Lite to Edge TPU:

Edge TPU Compiler version 15.0.340273435

Model compiled successfully in 998 ms.

Input model: model.tflite
Input size: 3.72MiB
Output model: model_edgetpu.tflite
Output size: 4.16MiB
On-chip memory used for caching model parameters: 3.42MiB
On-chip memory remaining for caching model parameters: 4.31MiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 167
Operation log: model_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 118
Number of operations that will run on CPU: 49
See the operation log file for individual operation details.

Now when I run larodLoadModel(...) it gives an error with code -2 (LAROD_ERROR_LOAD_MODEL: 'General error for loading model'). Why is this?

It does work when I follow the same conversion path for the SSD MobileNet v2 320x320 model. Is it the FPNLite? Can I only deploy models that fully run on the TPU, and if so, why?

It would also be nice if the error was a bit more specific ☺

Own service as ACAP application

Hello!
I'm working on adding an MQTT broker as an ACAP application. I've built mosquitto and deployed it as a standalone application, and everything works fine with my config file. Now I would like to pack it in an ACAP package, but unfortunately it doesn't work. In the logs (which come from the camera's server report), I can only see:

"/usr/local/packages/mosquitto_fake/mosquitto_fake -c /usr/local/packages/mosquitto_fake/mosquitto.conf".
2021-08-27T16:07:08.949+02:00 axis-accc8ed32954 [ INFO    ] respawnd[23104]: Respawning 

During my investigation of the problem, I changed the name of the executable, because on the newest firmware (10.6) there is another mosquitto process that comes from the system internals.

When I start mosquitto from a shell, everything works well. When mosquitto is run by ACAP, I get only "Connection refused".

My package.conf:

PACKAGENAME="Broker Mosquitto"
MENUNAME='Broker Mosquitto'
APPTYPE="armv7hf"
APPNAME="mosquitto_fake"
APPID=""
LICENSENAME="Available"
LICENSEPAGE="axis"
VENDOR="Eclipse Mosquitto"
REQEMBDEVVERSION="2.0"
APPMAJORVERSION="2"
APPMINORVERSION="0"
APPMICROVERSION="11"
APPGRP="sdk"
APPUSR="sdk"
APPOPTS="-c /usr/local/packages/mosquitto_fake/mosquitto.conf"
OTHERFILES="libmosquitto.so libmosquitto.so.1 libmosquitto.so.2.0.11 mosquitto_passwd mosquitto_pub mosquitto_rr mosquitto_sub p2.txt mosquitto.conf"
SETTINGSPAGEFILE=""
SETTINGSPAGETEXT=""
VENDORHOMEPAGELINK=''
PREUPGRADESCRIPT=""
POSTINSTALLSCRIPT=""
STARTMODE="respawn"
HTTPCGIPATHS=""

Do you have any idea why mosquitto won't be started by the camera?

Python SDK

Does AXIS provide a Python SDK so that developers can use this easy language for developing apps?
Thanks!

Unable to build by following the instructions

Issue description
The various examples I tried (vdostream and axoverlay) couldn't be built by following the documentation exactly: the docker build step fails because eap-create.sh wants more info.
docker build is unable to be interactive, so the command stops.
See the log below.

Either the docs should recommend creating a package.conf file with the required options, or the build procedure should be modified completely.

System setup
Windows 10 21H1 / Docker for windows (Hyper-V) / git bash

Logs

 > [4/4] RUN . /opt/axis/acapsdk/environment-setup* && create-package.sh:
#7 1.730 make
#7 1.739 arm-linux-gnueabihf-gcc  -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/axis/acapsdk/sysroots/armv7hf -L /opt/axis/acapsdk/sysroots/armv7hf/usr/lib  -O2 -pipe -g -feliminate-unused-debug-types  -pthread -I/opt/axis/acapsdk/sysroots/armv7hf/usr/include/vdo -I/opt/axis/acapsdk/sysroots/armv7hf/usr/include/gio-unix-2.0 -I/opt/axis/acapsdk/sysroots/armv7hf/usr/include/glib-2.0 -I/opt/axis/acapsdk/sysroots/armv7hf/usr/lib/glib-2.0/include -Wall -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -Wl,-z,relro,-z,now vdoencodeclient.c  -lvdostream -lgio-2.0 -lgobject-2.0 -lglib-2.0 -o vdoencodeclient
#7 1.962
#7 1.962 eap-create.sh
#7 1.971
#7 1.971 No package.conf file foundCreated package.conf using default values... ok
#7 2.087 Reading local package.conf... ok
#7 2.089 Package architecture: armv7hf
#7 2.198 Validating Package config...
#7 2.207
#7 2.207 [Error] * PACKAGENAME cannot be empty
#7 2.208 We need to fix package.conf Please answer the following questions:
#7 2.208
#7 2.214 * PACKAGENAME cannot be empty
#7 2.219 PACKAGENAME: Package name or description []
#7 2.220 eap-create.sh failed. Please fix above errors, before you can create a package

AxOverlay example not working

Issue description
When running the axoverlay example, I expect to visibly see overlays appearing in the 1920x1080 stream, but nothing appears. The logs show no errors when running, and I have followed all steps in the example.

Possible issue: There is an existing text overlay by the camera saying there isn't enough power.
"Not enough power, product requires POE class 4 or higher"
I'm not sure if this affects my own overlays, but hopefully it doesn't. I am providing the camera with 60 W (rather than 75 W), but that's the best I can do.

System setup

  • AXIS product/device (e.g., Q1615 Mk III): P3255-LVE
  • Device firmware version (e.g., 10.5): 10.6
  • SDK version (e.g., ACAP SDK 3.3): 3.4

Logs
2021-09-01T11:37:16.766+10:00 axis-b8a44f0e42b1 [ INFO ] axoverlay[601]: Render callback for camera: 1
2021-09-01T11:37:16.766+10:00 axis-b8a44f0e42b1 [ INFO ] axoverlay[601]: Render callback for overlay: 1920 x 1088
2021-09-01T11:37:16.766+10:00 axis-b8a44f0e42b1 [ INFO ] axoverlay[601]: Render callback for stream: 1920 x 1080
2021-09-01T11:37:16.771+10:00 axis-b8a44f0e42b1 [ INFO ] axoverlay[601]: Adjust callback for overlay: 1920 x 1080
2021-09-01T11:37:16.771+10:00 axis-b8a44f0e42b1 [ INFO ] axoverlay[601]: Adjust callback for stream: 1920 x 1080
2021-09-01T11:37:16.788+10:00 axis-b8a44f0e42b1 [ INFO ] axoverlay[601]: Render callback for camera: 1
2021-09-01T11:37:16.788+10:00 axis-b8a44f0e42b1 [ INFO ] axoverlay[601]: Render callback for overlay: 1920 x 1088
2021-09-01T11:37:16.788+10:00 axis-b8a44f0e42b1 [ INFO ] axoverlay[601]: Render callback for stream: 1920 x 1080
.... (Continuously)

Saving persistent data in ACAP filesystem

I would like to save a small amount of historical data from my ACAP application (around 500 numbers per day for, say, the previous three months; something in the neighborhood of 100 KB of data). I know that I could create a file under "/tmp" and append new data to that file, but I am worried that "/tmp" gets cleaned out if the camera restarts. However, I cannot find a non-temporary location in the camera filesystem that my ACAP has permission to write to. How can I store persistent data in the ACAP filesystem and be (relatively) confident that the data won't be lost if the camera is reset or restarted?
Thanks very much.
Wilf
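As a sketch of the append-and-flush technique the question describes: the idea is to append each record and force it to storage immediately, so a sudden restart loses at most the record being written. The writable location is the open question here; the application's own package directory (e.g. /usr/local/packages/&lt;appname&gt;/) is an assumption to verify on the device, and the path is a parameter below.

```c
#include <stdio.h>
#include <unistd.h>

/* Append one value to a log file and push it to storage.
 * `path` is a placeholder; whether a given directory survives restarts
 * and is writable by the ACAP user must be checked on the device. */
static int append_record(const char *path, double value) {
    FILE *f = fopen(path, "a");
    if (!f)
        return -1;
    fprintf(f, "%.3f\n", value);
    fflush(f);              /* flush C library buffers */
    fsync(fileno(f));       /* ask the kernel to commit to flash */
    fclose(f);
    return 0;
}
```

With ~500 numbers per day this stays well under the 100 KB estimate; rotating or truncating the file after three months would keep it bounded.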

multiple output example

Hello,
I'm trying to implement an object detector following your guide on "tensorflow-to-larod".
I managed to convert the model to .larod and I mapped the input and output tensors to file descriptors.
As a reference, considering N outputs (in my model there are 12):

larodOutputsAddr = (void **) malloc(numOutputs * sizeof(void *));
for (int i = 0; i < numOutputs; i++) {
    larodOutputsAddr[i] = MAP_FAILED;
    if (!createAndMapTmpFile(CONV_OUT_FILES[i], args.outputBytes, &larodOutputsAddr[i], &larodOutputsFd[i])) {
        goto end;
    }
}
for (int i = 0; i < numOutputs; i++) {
    if (!larodSetTensorFd(outputTensors[i], larodOutputsFd[i], &error)) {
        syslog(LOG_ERR, "Failed setting output tensor fd: %s", error->msg);
        goto end;
    }
}

The code above runs correctly and I was able to run larodRunInference() without any issues.
Unfortunately, I'm having some trouble when I have to read the output tensors. I think it's because each output tensor is not a single scalar but may be an array (or, in Python terms, a dictionary).
Do you have any advice on how to read multiple output tensors, or would you provide an object detection example ACAP?
Thank you,
Walter
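A generic way to read the mapped outputs, independent of larod specifics: each mapped buffer (larodOutputsAddr[i] above) is just raw bytes, so if the model emits float32 tensors they can be cast and indexed. The helper names below and the float32 assumption are illustrative, not from the example:

```c
#include <stddef.h>

/* Copy one tensor out of a mapped output buffer, interpreting the raw
 * bytes as float32 (an assumption about the model's output type). */
static void read_float_output(const void *mapped, size_t nbytes,
                              float *dst, size_t dst_len) {
    const float *src = (const float *) mapped;
    size_t n = nbytes / sizeof(float);
    if (n > dst_len)
        n = dst_len;
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}

/* Index of the largest element, e.g. the strongest class in a score tensor. */
static size_t argmax_f32(const float *v, size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (v[i] > v[best])
            best = i;
    return best;
}
```

Each of the N buffers is read the same way; what each one means (boxes, scores, classes, ...) depends on the exported model's output signature.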

make failed for 'object_detection'

Hi,

When I simply ran the 1st step ./build_acap.sh object_detection_acap:1.0, I got the following error. It seems like there are some errors in object_detection.c, or did I do something wrong?

Step 30/30 : RUN . /opt/axis/acapsdk/environment-setup* && create-package.sh
 ---> Running in 3e68e3a7bfa2
make
arm-linux-gnueabihf-gcc  -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/axis/acapsdk/sysroots/armv7hf  -O2 -pipe -g -feliminate-unused-debug-types  -Iinclude -pthread -I/opt/axis/acapsdk/sysroots/armv7hf/usr/include/vdo -I/opt/axis/acapsdk/sysroots/armv7hf/usr/include/gio-unix-2.0 -I/opt/axis/acapsdk/sysroots/armv7hf/usr/include/glib-2.0 -I/opt/axis/acapsdk/sysroots/armv7hf/usr/lib/glib-2.0/include -Wall -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -fstack-protector-strong -Wl,-z,relro,-z,now -L./lib -Wl,-rpath,'$ORIGIN/lib' object_detection.c argparse.c imgconverter.c imgprovider.c imgutils.c -lyuv -ljpeg -lvdostream -lgio-2.0 -lgobject-2.0 -lglib-2.0 -llarod -lsystemd -o object_detection
object_detection.c: In function 'setupLarod':
object_detection.c:328:14: warning: implicit declaration of function 'larodSetChip'; did you mean 'larodListChips'? [-Wimplicit-function-declaration]
  328 |         if (!larodSetChip(conn, larodChip, &error)) {
      |              ^~~~~~~~~~~~
      |              larodListChips
object_detection.c:336:34: error: incompatible type for argument 4 of 'larodLoadModel'
  336 |                                  "object_detection", &error);
      |                                  ^~~~~~~~~~~~~~~~~~
      |                                  |
      |                                  char *
In file included from argparse.h:25,
                 from object_detection.c:57:
/opt/axis/acapsdk/sysroots/armv7hf/usr/include/larod.h:568:13: note: expected 'larodAccess' {aka 'const enum <anonymous>'} but argument is of type 'char *'
  568 | larodModel* larodLoadModel(larodConnection* conn, const int fd,
      |             ^~~~~~~~~~~~~~
object_detection.c:336:54: warning: passing argument 5 of 'larodLoadModel' from incompatible pointer type [-Wincompatible-pointer-types]
  336 |                                  "object_detection", &error);
      |                                                      ^~~~~~
      |                                                      |
      |                                                      larodError ** {aka struct <anonymous> **}
In file included from argparse.h:25,
                 from object_detection.c:57:
/opt/axis/acapsdk/sysroots/armv7hf/usr/include/larod.h:568:13: note: expected 'const char *' but argument is of type 'larodError **' {aka 'struct <anonymous> **'}
  568 | larodModel* larodLoadModel(larodConnection* conn, const int fd,
      |             ^~~~~~~~~~~~~~
object_detection.c:335:19: error: too few arguments to function 'larodLoadModel'
  335 |     loadedModel = larodLoadModel(conn, larodModelFd, LAROD_ACCESS_PRIVATE,
      |                   ^~~~~~~~~~~~~~
In file included from argparse.h:25,
                 from object_detection.c:57:
/opt/axis/acapsdk/sysroots/armv7hf/usr/include/larod.h:568:13: note: declared here
  568 | larodModel* larodLoadModel(larodConnection* conn, const int fd,
      |             ^~~~~~~~~~~~~~
object_detection.c: In function 'main':
object_detection.c:393:5: error: unknown type name 'larodInferenceRequest'
  393 |     larodInferenceRequest* infReq = NULL;
      |     ^~~~~~~~~~~~~~~~~~~~~
object_detection.c:547:14: warning: implicit declaration of function 'larodCreateInferenceRequest'; did you mean 'larodCreateJobRequest'? [-Wimplicit-function-declaration]
  547 |     infReq = larodCreateInferenceRequest(model, inputTensors, numInputs, outputTensors,
      |              ^~~~~~~~~~~~~~~~~~~~~~~~~~~
      |              larodCreateJobRequest
object_detection.c:547:12: warning: assignment to 'int *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
  547 |     infReq = larodCreateInferenceRequest(model, inputTensors, numInputs, outputTensors,
      |            ^
object_detection.c:641:14: warning: implicit declaration of function 'larodRunInference' [-Wimplicit-function-declaration]
  641 |         if (!larodRunInference(conn, infReq, &error)) {
      |              ^~~~~~~~~~~~~~~~~
object_detection.c:769:5: warning: implicit declaration of function 'larodDestroyInferenceRequest'; did you mean 'larodDestroyJobRequest'? [-Wimplicit-function-declaration]
  769 |     larodDestroyInferenceRequest(&infReq);
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |     larodDestroyJobRequest
imgprovider.c: In function 'threadEntry':
imgprovider.c:432:1: warning: no return statement in function returning non-void [-Wreturn-type]
  432 | }
      | ^
make: *** [Makefile:22: object_detection] Error 1

make failed. Please fix above errors, before you can create a package
The command '/bin/sh -c . /opt/axis/acapsdk/environment-setup* && create-package.sh' returned a non-zero code: 1
Unable to find image 'object_detection_acap:1.0' locally
Error response from daemon: pull access denied for object_detection_acap, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
must specify at least one container source

Object Detection - with MQTT

I want to use the object-detection code but with MQTT.
I see you have a "subscribe-to-event" and "send-event" code. Can this be implemented with the object-detection?
Any ideas on how to do that?

Storing stream

I need some advice on how one can accomplish storing the stream from the camera onto a computer.

Object detection

Hello,
I have followed the instructions to generate the .eap file and installed it on an AXIS Q1615 Mk III camera. However, when I start the app, it never changes its status to "running" and the logs don't show any detection of objects. I tested the tensorflow-to-larod app and that works.

Could you provide a .eap to test with, or tell me what is happening?

Error in larod inference request

Hello,
I'm trying to build an ACAP for object detection to run on an Axis device with a TPU.
I'm adapting the vdo-larod app by changing the larod model to an object detection one (larod-converted), and it connects successfully to the video stream.
However, I'm not able to create an inference request. I get "Failed creating inference request: Input tensors are invalid: Tensor 1 is null". Do you know where this error could come from?

Larod - Firmware 10.4.x issue for larodLoadModel

Yesterday I discovered that our app no longer works with the latest 10.4.5 firmware for the P3255-LVE camera. When I downgrade to the latest firmware that I can download, 10.3.1, the same code runs again.

It seems that larodLoadModel now returns a null pointer, but the larodError is also a null pointer. In a few random cases I get a 'Segmentation fault (core dumped)'.

This is all code that runs perfectly fine on previous firmware versions… Did I miss a release note?

How to get rtsp link inside of Axis Encoder?

I'm trying to push the stream to AWS Kinesis via GStreamer inside a P7304 4-channel encoder. But one problem is that I don't know what exactly the RTSP URL is from inside the Axis encoder. Could you tell me how I can get the RTSP URL?

Zero copy external data to larodTensor

Issue description
Trying to find a way to assign an input pointer directly rather than go via the file descriptor/buffer method when creating a larodTensor.

In the examples, the buffer is used to generate the data into, so no extra copy step is required, but this is uncommon in a real scenario.

System setup

  • AXIS product/device (e.g., Q1615 Mk III): Q1615 Mk III
  • Device firmware version (e.g., 10.5): 10.6
  • SDK version (e.g., ACAP SDK 3.3): 3.3

Example

// Setup
infInputs = larodAllocModelInputs(conn, model, 0, NULL, NULL, &error);
setupInfInputMapping(infInputs[0], &larodInputFd, &larodInputSize, &infInputBuf);

// Loop
...
    // TODO: Zero copy?
    memcpy(infInputBuf, img->data(), larodInputSize);
...

Object Detection - Output bytes for tensor

Where do you get the output bytes from? Why is tensor1 80 * 4, tensor2 20 * 4, etc.?
Please explain where these values come from.

const unsigned int FLOATSIZE = 4;
const unsigned int TENSOR1SIZE = 80 * FLOATSIZE;
const unsigned int TENSOR2SIZE = 20 * FLOATSIZE;
const unsigned int TENSOR3SIZE = 20 * FLOATSIZE;
const unsigned int TENSOR4SIZE = 1 * FLOATSIZE;
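These sizes look like the standard TFLite SSD detection-postprocess layout for a model capped at 20 detections: tensor1 is 20 boxes x 4 coordinates (80 floats), tensor2 the 20 class indices, tensor3 the 20 scores, and tensor4 a single valid-detection count. That mapping is an inference from the sizes, not confirmed by the example; a sketch of decoding such buffers:

```c
/* Assumed layout (not confirmed by the example):
 *   boxes:   20 x 4 floats [ymin, xmin, ymax, xmax]  -> 80 * 4 bytes
 *   classes: 20 floats (class indices)               -> 20 * 4 bytes
 *   scores:  20 floats                               -> 20 * 4 bytes
 *   count:    1 float (number of valid detections)   ->  1 * 4 bytes */
#define MAX_DET 20

typedef struct { float ymin, xmin, ymax, xmax; } Box;

/* Unpack the four raw tensors into per-detection records.
 * Returns the number of valid detections. */
static int parse_detections(const float *boxes, const float *classes,
                            const float *scores, const float *count,
                            Box *out, int *cls, float *score) {
    int n = (int) count[0];
    if (n > MAX_DET)
        n = MAX_DET;
    for (int i = 0; i < n; i++) {
        out[i].ymin = boxes[4 * i + 0];
        out[i].xmin = boxes[4 * i + 1];
        out[i].ymax = boxes[4 * i + 2];
        out[i].xmax = boxes[4 * i + 3];
        cls[i] = (int) classes[i];
        score[i] = scores[i];
    }
    return n;
}
```

If the layout holds, FLOATSIZE is simply sizeof(float), and each TENSORnSIZE is the element count of that output times 4 bytes.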

How to discover other AXIS cameras on the network from ACAP?

I understand that it is possible in principle to discover AXIS cameras on the local network - this is what AXIS Device Manager does after all. Is it possible to implement this functionality in an ACAP application, and if so, how? That is, can I write an ACAP application that discovers other AXIS cameras on the network and then, for example, writes their IP addresses into the ACAP application's local log file?
Thanks,
Wilf.

Technical Specifications of TPU

Issue description
Is there anywhere that describes the technical specifications of the Google TPU onboard these cameras?

From what I can tell, these two cameras I had came with two different TPUs:
the P3255-LVE says it has a Google 'Yorktown', while
the Q1615 Mk III says it has a Google 'Apex'.

I can't find any information about this Yorktown and not sure of the capability differences of it. I have had to return the P3255-LVE now but would like to know of its capability relative to the other Google TPUs.

I thought this would be the right repository to put this in. If it's not, you can redirect me. Thanks.

System setup

  • AXIS product/device (e.g., Q1615 Mk III): Q1615 Mk III, P3255-LVE
  • Device firmware version (e.g., 10.5): 10.6
  • SDK version (e.g., ACAP SDK 3.3): 3.3

Web UI notification

Hi,

I'm wondering if there is an API to show an ACAP app's custom notifications to the user in the Web UI?

There are standard notifications like the ones in the attached screenshot, but I didn't find a way to reuse them for custom notifications from my application.

Thanks.

Could not allocate tensors

I am getting an issue when running the example:

/tensorflow_to_larod /usr/local/packages/tensorflow_to_larod/model/converted_model.larod 128 128 1 -c 2

setupLarod: Unable to load model: Could not load model: Could not allocate tensors

If I run it on chip 4 as in the example, it says:

2025-05-22T11:46:03.281+02:00 axis-b8a44f087b50 [ INFO    ] tensorflow_to_larod[2054]: Starting ...
2025-05-22T11:46:03.291+02:00 axis-b8a44f087b50 [ INFO    ] tensorflow_to_larod[2054]: chooseStreamResolution: We select stream w/h=240 x 160 based on VDO channel info.
2025-05-22T11:46:03.292+02:00 axis-b8a44f087b50 [ INFO    ] tensorflow_to_larod[2054]: Creating VDO image provider and creating stream 240 x 160
2025-05-22T11:46:03.292+02:00 axis-b8a44f087b50 [ INFO    ] tensorflow_to_larod[2054]: Dump of vdo stream settings map =====
2025-05-22T11:46:03.313+02:00 axis-b8a44f087b50 [ INFO    ] tensorflow_to_larod[2054]: Setting up larod connection with chip 4 and model /usr/local/packages/tensorflow_to_larod/model/converted_model.larod
2025-05-22T11:46:03.326+02:00 axis-b8a44f087b50 [ ERR     ] tensorflow_to_larod[2054]: setupLarod: Could not select chip 4: Could not find chip 4

Setting up Larod fails with object detection example

Issue description
Running the object detection example on an Axis Q1615 Mk III fails and results in the following error:
object_detection[11386]: setupLarod: Could not connect to larod: Could not create session: Message recipient disconnected from message bus without replying

We previously had issues with firmware version 10.5.2 but upgrading to 10.6.0 fixed those issues. Now we are having these issues even with firmware 10.6.0 despite trying to factory reset + reboot multiple times.

System setup
Axis Q1615 Mk III
Firmware version 10.6

Logs

Journalctl logs
- Sep 08 10:27:54  sdkobject_detection[2027]:  * Starting object_detection...
- Sep 08 10:27:54  object_detection[2039]: starting object_detection
- Sep 08 10:27:54  sdkobject_detection[2027]: [ ok ]
- Sep 08 10:27:54  systemd[1]: Started object_detection.
- Sep 08 10:27:54 object_detection[2042]: Starting ...
- Sep 08 10:27:55 object_detection[2042]: chooseStreamResolution: We select stream w/h=480 x 300 based on VDO channel info.
- Sep 08 10:27:55 object_detection[2042]: Creating VDO image provider and creating stream 480 x 300
- Sep 08 10:27:55  object_detection[2042]: Dump of vdo stream settings map =====
- Sep 08 10:27:55 object_detection[2042]: Dump of vdo stream settings map =====
- Sep 08 10:27:55  acapctl[2056]: Successfully listed all applications
- Sep 08 10:27:55 i object_detection[2042]: Setting up larod connection with chip 4 and model /usr/local/packages/object_detection/model/converted_model.tflitec
- Sep 08 10:27:55  dbus-daemon[390]: [system] Activating via systemd: service name='com.axis.Larod1' unit='dbus-com.axis.Larod1.service' requested by ':1.333' (uid=204 pid=2042 comm="/usr/local/packages/object_detection/object_detect")
- Sep 08 10:27:55  systemd[1]: Starting Machine learning service...
- Sep 08 10:27:55  dbus-daemon[390]: [system] Successfully activated service 'com.axis.Larod1'
- Sep 08 10:27:55  systemd[1]: Started Machine learning service.
- Sep 08 10:27:55  larod[2058]: Could not create service: Could not initialize TPU
- Sep 08 10:27:55  systemd[1]: larod.service: Main process exited, code=exited, status=1/FAILURE
- Sep 08 10:27:55 systemd[1]: larod.service: Failed with result 'exit-code'.
- Sep 08 10:27:55  object_detection[2042]: setupLarod: Could not connect to larod: Could not create session: Message recipient disconnected from message bus without replying
- Sep 08 10:27:55  systemd[1]: larod.service: Scheduled restart job, restart counter is at 1.
- Sep 08 10:27:55 systemd[1]: Stopped Machine learning service.
- Sep 08 10:27:55  systemd[1]: Starting Machine learning service...
- Sep 08 10:27:55  systemd[1]: Started Machine learning service.
- Sep 08 10:27:55  larod[2082]: Could not create service: Could not initialize TPU
- Sep 08 10:27:55  systemd[1]: larod.service: Main process exited, code=exited, status=1/FAILURE
- Sep 08 10:27:55  systemd[1]: larod.service: Failed with result 'exit-code'.
- Sep 08 10:27:55  systemd[1]: larod.service: Scheduled restart job, restart counter is at 2.
- Sep 08 10:27:55  systemd[1]: Stopped Machine learning service.
- Sep 08 10:27:55  systemd[1]: Starting Machine learning service...
- Sep 08 10:27:55  systemd[1]: Started Machine learning service.
- Sep 08 10:27:55 larod[2085]: Could not create service: Could not initialize TPU
- Sep 08 10:27:55  systemd[1]: larod.service: Main process exited, code=exited, status=1/FAILURE
- Sep 08 10:27:55 systemd[1]: larod.service: Failed with result 'exit-code'.
- Sep 08 10:27:56  systemd[1]: larod.service: Scheduled restart job, restart counter is at 3.
- Sep 08 10:27:56 systemd[1]: Stopped Machine learning service.
- Sep 08 10:27:56  systemd[1]: Starting Machine learning service...
- Sep 08 10:27:56  systemd[1]: Started Machine learning service.
- Sep 08 10:27:56  larod[2088]: Could not create service: Could not initialize TPU
- Sep 08 10:27:56  systemd[1]: larod.service: Main process exited, code=exited, status=1/FAILURE
- Sep 08 10:27:56  systemd[1]: larod.service: Failed with result 'exit-code'.
- Sep 08 10:27:56  systemd[1]: larod.service: Scheduled restart job, restart counter is at 4.
- Sep 08 10:27:56  systemd[1]: Stopped Machine learning service.
- Sep 08 10:27:56 systemd[1]: Starting Machine learning service...
- Sep 08 10:27:56  systemd[1]: Started Machine learning service.
- Sep 08 10:27:56  larod[2091]: Could not create service: Could not initialize TPU
- Sep 08 10:27:56  systemd[1]: larod.service: Main process exited, code=exited, status=1/FAILURE
- Sep 08 10:27:56 systemd[1]: larod.service: Failed with result 'exit-code'.
- Sep 08 10:27:56  systemd[1]: larod.service: Scheduled restart job, restart counter is at 5.
- Sep 08 10:27:56  systemd[1]: Stopped Machine learning service.
- Sep 08 10:27:56 systemd[1]: larod.service: Start request repeated too quickly.
- Sep 08 10:27:56 systemd[1]: larod.service: Failed with result 'exit-code'.
- Sep 08 10:27:56 systemd[1]: Failed to start Machine learning service.
- Sep 08 10:28:06 viewarea[1813]: main.c:306 [run_service] Terminating...

Get Tensor Details

The TensorFlow Lite interpreter (Python) has operations to get details of input/output tensors. In my case, the model's tensors contain metadata in the quantization and quantization_parameters properties of the Tensor that I need. I cannot find a larod operation that allows me to retrieve these values.

In the TFLite Python Interpreter, the tensor metadata can be obtained using interpreter.get_input_details() and interpreter.get_output_details() documented here.

I guess this SO answer might be helpful here as well.

Inference times

Issue description
When I compared the inference times of MobileNet v1 SSD 300x300 on an Axis Q1615 Mk III, it took 15 ms, while the figure given by Google for the USB 3.0 Edge TPU is 6.5 ms.
Why is there so much difference in the inference times?

System setup

  • AXIS product/device: Q1615 Mk III
  • Device firmware version: 10.6
  • SDK version: ACAP SDK 3.3

tensorflow-to-larod - Recognize more objects

I want to use the tensorflow-to-larod repo to recognize a car and a person, but also a dog. What do I need to do to account for that?
Is the training data set only meant for persons and cars?

Possible for app to access RTSP stream without a username/password account?

I'd like my app to make use of the RTSP stream, but it seems to require the user to input a username and password. Is it possible for the app to access the RTSP stream without a username and password?

i.e. the local RTSP server detects that the connection is coming from localhost and allows the connection without requiring a password.

How to access the frames from each video stream?

We would like to build an application which reads the video streams from a P7304 4-channel encoder so we can pass them to the AWS Kinesis Producer C SDK (https://github.com/awslabs/amazon-kinesis-video-streams-producer-c/blob/master/samples/KvsVideoOnlyStreamingSample.c).
Sorry if this is a foolish question, as we are not yet very familiar with the ACAP SDK.
We see that there is a sample application that looks very relevant: https://github.com/AxisCommunications/acap3-examples/tree/master/vdostream, but as there are several streams available on this encoder, could you please advise us how to select a given stream?

Storing Camera Feed To The Cloud

Is it possible to store the camera feed from the Axis camera to the cloud? It would be very promising for someone who wishes to train their own machine learning models.
Help would be appreciated.

Error setting AxParameter value

Hello,

I've been unable to set the value of my application's AxParameters from the camera webpage. After some research I found that the problem is that my app name contains an underscore (e.g. foo_bar). Therefore, when the axis-cgi/param.cgi?action=update request is made, I receive the following message:
# Error: Error setting 'root.Foo.bar.AudioSource0' to 'test'
As you can see, the underscore is replaced by a dot, which misleads the process that saves the data to param.conf.

I think it would be nice to either restrict app names from containing underscores or, even better, fix that underscore transformation directly.

Device unique ID

Hello, I have been looking for a long while for how to get the device's ID on Linux, but at this moment I am stuck because I haven't found any common way. I noticed that a serial number is shown on the camera's page (in the Information window). Is there any way to get that with a C API?

real time low latency stream with ROS

Issue description
Hello. I'm trying to establish a vehicle platform with ROS for autonomous driving. I've successfully used OpenCV to access the RTSP stream, but it has an unacceptably long latency (around 5 seconds). I need a low-latency real-time stream.

Is there any way that can help me access the stream? It must be C++ or Python.

Thanks!

System setup

  • AXIS product/device: P1375E
  • Device firmware version (e.g., 10.5):
  • SDK version (e.g., ACAP SDK 3.3):

AxHttp access level

Hello!

Thanks again for the fast response to my previous issue. Now I have a question about the access levels in the AXHttp library. I can see three levels there: viewer, operator and administrator. Unfortunately, the documentation doesn't describe the differences, and when I try to make an HTTP request, all levels require providing the login and password for the root user. Is there any value that would allow me to send requests without any authentication from within the camera? I need that to integrate two ACAP applications, but one of them, which is the source of events, is a third-party app and doesn't support authentication.

Larod Converter > PyPi?

Will the LarodConverter Python package ever be published on PyPI? I'm converting my models as part of a CI/CD pipeline and it would be great to be able to convert TF Lite (etc.) models to larod without needing the entire ACAP SDK!

As a workaround, I've extracted the whl file and install it manually, but it would be nice to have a centralized/versioned way to work with this package, much like all other packages do.

vdoencodeclient output file is corrupt

Hello,

I am trying to test the vdoencodeclient application. I'm able to successfully build and install it on the device. Via SSH, I can invoke it via something like ./vdoencodeclient --format h264 --frames 60 --output wtf.mp4. However, the output file appears to be invalid. It cannot be played via VLC (I attempted multiple extensions: .mp4, .avc, .h264; the latter actually gave me something with tons of artifacts, but at least something was visible).

The jpeg format successfully gives me a single frame, whereas the h265 format errors out immediately with:

2020-09-29T00:04:00.343-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[8143]: vdo-encode-client: DBus call failed: Message recipient disconnected from message bus without replying

Below is an example log from a run producing such a file:

2020-09-28T23:55:42.726-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: Starting stream: h264, 1024x768, 5 fps
2020-09-28T23:55:42.787-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    0, type = I, size = 26617
2020-09-28T23:55:42.986-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    1, type = P, size = 854
2020-09-28T23:55:43.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    2, type = P, size = 2645
2020-09-28T23:55:43.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    3, type = P, size = 3115
2020-09-28T23:55:43.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    4, type = P, size = 1153
2020-09-28T23:55:43.787-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    5, type = P, size = 1986
2020-09-28T23:55:43.986-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    6, type = P, size = 578
2020-09-28T23:55:44.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    7, type = P, size = 935
2020-09-28T23:55:44.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    8, type = P, size = 1153
2020-09-28T23:55:44.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =    9, type = P, size = 609
2020-09-28T23:55:44.787-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   10, type = P, size = 2808
2020-09-28T23:55:44.986-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   11, type = P, size = 479
2020-09-28T23:55:45.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   12, type = P, size = 648
2020-09-28T23:55:45.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   13, type = P, size = 667
2020-09-28T23:55:45.585-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   14, type = P, size = 662
2020-09-28T23:55:45.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   15, type = P, size = 1872
2020-09-28T23:55:45.986-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   16, type = P, size = 777
2020-09-28T23:55:46.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   17, type = P, size = 1147
2020-09-28T23:55:46.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   18, type = P, size = 2772
2020-09-28T23:55:46.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   19, type = P, size = 1368
2020-09-28T23:55:46.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   20, type = P, size = 2885
2020-09-28T23:55:46.987-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   21, type = P, size = 520
2020-09-28T23:55:47.187-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   22, type = P, size = 613
2020-09-28T23:55:47.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   23, type = P, size = 1083
2020-09-28T23:55:47.585-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   24, type = P, size = 1869
2020-09-28T23:55:47.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   25, type = P, size = 1522
2020-09-28T23:55:47.987-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   26, type = P, size = 614
2020-09-28T23:55:48.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   27, type = P, size = 1023
2020-09-28T23:55:48.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   28, type = P, size = 1588
2020-09-28T23:55:48.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   29, type = P, size = 994
2020-09-28T23:55:48.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   30, type = I, size = 26176
2020-09-28T23:55:48.986-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   31, type = P, size = 896
2020-09-28T23:55:49.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   32, type = P, size = 893
2020-09-28T23:55:49.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   33, type = P, size = 860
2020-09-28T23:55:49.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   34, type = P, size = 845
2020-09-28T23:55:49.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   35, type = P, size = 2049
2020-09-28T23:55:49.986-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   36, type = P, size = 664
2020-09-28T23:55:50.187-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   37, type = P, size = 821
2020-09-28T23:55:50.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   38, type = P, size = 929
2020-09-28T23:55:50.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   39, type = P, size = 841
2020-09-28T23:55:50.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   40, type = P, size = 3311
2020-09-28T23:55:50.986-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   41, type = P, size = 506
2020-09-28T23:55:51.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   42, type = P, size = 654
2020-09-28T23:55:51.387-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   43, type = P, size = 738
2020-09-28T23:55:51.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   44, type = P, size = 855
2020-09-28T23:55:51.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   45, type = P, size = 2024
2020-09-28T23:55:51.986-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   46, type = P, size = 584
2020-09-28T23:55:52.187-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   47, type = P, size = 857
2020-09-28T23:55:52.387-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   48, type = P, size = 864
2020-09-28T23:55:52.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   49, type = P, size = 854
2020-09-28T23:55:52.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   50, type = P, size = 3295
2020-09-28T23:55:52.985-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   51, type = P, size = 546
2020-09-28T23:55:53.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   52, type = P, size = 656
2020-09-28T23:55:53.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   53, type = P, size = 773
2020-09-28T23:55:53.586-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   54, type = P, size = 790
2020-09-28T23:55:53.786-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   55, type = P, size = 3497
2020-09-28T23:55:53.987-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   56, type = P, size = 10419
2020-09-28T23:55:54.186-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   57, type = P, size = 6182
2020-09-28T23:55:54.386-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   58, type = P, size = 6215
2020-09-28T23:55:54.587-04:00 axis-accc8ef3c0d4 [ INFO    ] vdoencodeclient[7464]: frame =   59, type = P, size = 14005

video is available here: https://drive.google.com/file/d/1vszkr6_fOMGETjtfK-5aGa-6rFgLBQ5b/view?usp=sharing

Appreciate any feedback. Thanks!

AxEvents > Companion App

I see the Axis Companion app can react to both VMD and Axis Object Analytics events. Is there a way for third-party developers to raise their own events that can be consumed in this app? It would be great to utilize its push notification feature.

If not, can I push an event/message to either the VMD or the AOA topic?


Could not allocate tensors

Hello! I have an issue with the tensorflow-to-larod example.
When I run the converted_model_edgetpu.tflite file that is available in the example, it works fine, but if I take the converted_model.tflite from the example, compile it for the Edge TPU and run it on the camera, I get the error above ("Could not allocate tensors").
The Edge TPU file in the example is 1,721 KB and the one I compiled is 1,725 KB.
I have tried to compile it both inside and outside the container, but end up with the same result.
I'm on a Windows computer and I use an AXIS Q1615 Mk III network camera.

Where to find allowed TYPES for ax_parameter_add

Hello!
I am looking for information on where I can find the allowed types for adding parameters via gboolean ax_parameter_add(AXParameter *parameter, const gchar *name, const gchar *initial_value, const gchar *type, GError **error). So far I have found three types: string, int and password:string. I am looking specifically for a type that supports bool, which would give me a checkbox in the GUI. I ask because the documentation says "Refer to the SDK documentation for information about parameter types", but I have no idea which SDK that is.

AXHttp - Querystring for non-GET requests

I'm using AXHttp. Let's say I have a product.cgi which I would like to use in a RESTful way with routes like:

[POST] /local/MyApp/product.cgi
[GET] /local/MyApp/product.cgi?id=1
[PUT] /local/MyApp/product.cgi?id=1

Is it correct that the last PUT call cannot be implemented because it mixes query string parameters with a body payload? And if not, how do I access query string parameters when a request payload is sent?

Also:

  1. Is there a way to access the request HTTP headers?
  2. Why can't I use methods like PUT?

ACAP v4 examples

I'm just wondering if you have any plans to provide examples for ACAP v4? I'm particularly interested in Golang examples.

Br,
..Anders

How to include/import or link third-party libraries into an ACAP application?

I'm working on a project using an Axis encoder to send a video stream to AWS Kinesis Video Streams (KVS), so I want to implement an ACAP application to handle it. The KVS service has a Kinesis Producer SDK (written in C), which I build into a library with CMake.
Now I want to include/link that library's files in our ACAP application and use them in my source code (also in C), but I don't know exactly what the steps are to get it to work.
Could you tell me how I can include/import a third-party library into an ACAP application?
Edit: when I try to install the ACAP application file, I get the error log shown in the attached screenshot.

vdo-larod object coordinate

Given the code:

uint8_t maxProb = 0;
size_t maxIdx = 0;
uint8_t* outputPtr = (uint8_t*) larodOutputAddr;
for (size_t j = 0; j < args.outputBytes; j++) {
    if (outputPtr[j] > maxProb) {
        maxProb = outputPtr[j];
        maxIdx = j;
    }
}
if (labels) {
    if (maxIdx < numLabels) {
        syslog(LOG_INFO, "Top result: %s with score %.2f%%", labels[maxIdx],
               (float) maxProb / 2.5f);
    } else {
        syslog(LOG_INFO, "Top result: index %zu with score %.2f%% (index larger "
               "than num items in labels file)",
               maxIdx, (float) maxProb / 2.5f);
    }
} else {
    syslog(LOG_INFO, "Top result: index %zu with score %.2f%%", maxIdx,
           (float) maxProb / 2.5f);
}

How can we get the coordinates of the detected object?

Expand AxParameters library

Hi, as I mentioned in #48 I would like to suggest a small improvement:

I would like to suggest extending the parameters library to give developers the possibility to group an app's settings (the same way as in the camera's settings). I am working on an app that takes many properties and I don't want to spend my time building a special configuration page.

Actually, the AxParameters library has all the types I need; the only problem is that the list of my app's properties is quite long (and will grow in the future). So, to make all properties easier to discover and manage, I would like the possibility to group them.

What do you think about that?
