modelassistant's Introduction

SenseCraft Model Assistant by Seeed Studio

English | ็ฎ€ไฝ“ไธญๆ–‡

Introduction

Seeed SenseCraft Model Assistant (or simply SSCMA) is an open-source project focused on embedded AI. We have optimized excellent algorithms from OpenMMLab for real-world scenarios and made implementation more user-friendly, achieving faster and more accurate inference on embedded devices.

What's included?

We currently support the following categories of algorithms:

๐Ÿ” Anomaly Detection

In the real world, anomalous data is often difficult to identify, and even when it can be identified, doing so comes at a very high cost. Anomaly detection algorithms collect only normal data, which is cheap to obtain, and treat anything that falls outside of it as anomalous.
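
As a toy illustration of the idea (a sketch only, not SSCMA's actual implementation): "normal" can be modeled with simple statistics collected from normal samples, and anything far outside them is flagged.

import numpy as np

# Fit on normal data only (cheap to collect), e.g. a vibration sensor at rest.
normal = np.random.normal(loc=0.0, scale=1.0, size=10_000)
mu, sigma = normal.mean(), normal.std()

def is_anomalous(x, k=4.0):
    """Flag samples more than k standard deviations from the normal mean."""
    return abs(x - mu) > k * sigma

print(is_anomalous(0.5))  # False: within the normal range
print(is_anomalous(9.0))  # True: far outside anything seen as normal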

๐Ÿ‘๏ธ Computer Vision

Here we provide a number of computer vision algorithms, such as object detection, image classification, image segmentation, and pose estimation. However, these algorithms are usually too computationally expensive to run on low-cost hardware. SSCMA optimizes these computer vision algorithms to achieve good running speed and accuracy on low-end devices.

โฑ๏ธ Scenario Specific

SSCMA provides customized scenarios for specific production environments, such as the identification of analog instruments, traditional digital meters, and audio classification. We will continue to add more algorithms for specific scenarios in the future.

Features

๐Ÿค User-friendly

SSCMA provides a user-friendly platform that allows users to easily perform training on collected data, and to better understand the performance of algorithms through the visualizations generated during training.

๐Ÿ”‹ Models with low computing power and high performance

SSCMA focuses on end-side AI algorithm research; the resulting models can be deployed on microcontrollers such as the ESP32, on some Arduino development boards, and even on embedded SBCs such as the Raspberry Pi.

๐Ÿ—‚๏ธ Supports multiple formats for model export

TensorFlow Lite is mainly used on microcontrollers, while ONNX is mainly used on devices with embedded Linux. Some special formats, such as TensorRT and OpenVINO, are already well supported by OpenMMLab. SSCMA adds TFLite model export for microcontrollers; the exported model can be converted to UF2 format and drag-and-dropped onto the device for deployment.
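
For reference, model export in this repository goes through tools/export.py; the invocation below mirrors a command from an issue report further down this page, wrapped in Python for illustration (the config and checkpoint paths are that reporter's, so treat them as placeholders):

import subprocess

# Export a trained PFLD checkpoint to TFLite; the command shape is taken from
# the issue reports below, and the paths are examples rather than canonical.
subprocess.run(
    [
        "python3", "tools/export.py",
        "configs/pfld/pfld_mbv2n_112.py",        # model config
        "work_dirs/pfld_mbv2n_112/epoch_1.pth",  # trained checkpoint
        "--target", "tflite",                    # TFLite output for MCUs
        "--cfg-options", "data_root=datasets/meter/",
    ],
    check=True,
)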

Application Examples

Object Detection

Pointer Meter Recognition

Digital Meter Recognition

More application examples can be found in the Model Zoo.

Acknowledgement

SSCMA referenced the following projects:

License

This project is released under the Apache 2.0 license.

modelassistant's People

Contributors

alter-y, dependabot[bot], ichizer0, lakshanthad, lynnl4, milk-bios, mjq2020, pillar1989, unbinilium

modelassistant's Issues

Following the TinyML workshop, USER Defined 1 doesn't appear

Describe the bug
Hi! I've been following the TinyML workshop with 2 XIAO ESP32S3 Sense boards. While the board is "online" and recognized by the browser and app, I don't see the "user-defined-1" model appearing. It still shows: no data

Environment
Environment you use when bug appears:
I'm using the online EdgeLab version; I will try to install everything on my PC though

Additional context
The problem happens on both XIAO boards I have

Feature Request: Docs to show how to use the Arduino IDE to install the EdgeLab bootloader and also how to get back to the Arduino XIAO bootloader

Motivation
Many of the people I teach will want to try this method of using the XIAO-Esp32-S3-Sense for machine learning but will also want to go back to the original shipped version of the Arduino Bootloader on the XIAO.

Issue
Without the EdgeLab bootloader I cannot connect my XIAO-Esp32-S3-Sense to either:

The web-app version
https://seeed-studio.github.io/edgelab-web-app/#/dashboard/workplace

or The ESP flash tool
https://espressif.github.io/esptool-js/

Related resources
My version of using a web browser to serial-connect to an Arduino-like MCU and do machine learning is at https://hpssjellis.github.io/tinyMLjs/public/index.html

Additional context
I have met the Seeedstudio founder Eric Pan, he will know me as Jeremy Ellis from the ICTP conference in Trieste Italy.

Thank you for considering my request.

Feature request: Have an upload check button for each binary and the TFLITE file.

@LynnL4

I really like the new SenseCraft at https://seeed-studio.github.io/SenseCraft-Web-Toolkit/#/dashboard/workplace. I am having about 80% success loading the different examples. I even bricked my XIAO briefly, losing the COM port, until I rebooted the computer, put the XIAO into bootloader mode, and re-installed an Arduino program.

Reason for Feature Request

It is very difficult to tell if the binaries have been uploaded successfully and in the correct locations since ML programming is so complex on a good day. A button beside each upload that just checks if it has been installed correctly would be very useful. (I am not sure how difficult that would be to program so just ignore this if it is too hard to do.)

I was surprised that the installation locations have been removed (0x1000, 0x8000, 0x10000, 0x400000). How does the code decide which binary to install into which location? Do they have to be uploaded in a specific order (bootloader.bin, partition-table.bin, edgeLab.bin), or is it just based on the correct name? I personally prefer knowing what location the file is uploading to, but that could be an advanced setting. I really like how simple the dashboard is to use.
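
For reference, flashing binaries to explicit offsets is what esptool's write_flash command does; the sketch below uses the offsets quoted above (whether these offsets and file names match what SenseCraft does internally is exactly the open question here):

import subprocess

# Flash each binary to an explicit offset with esptool. The offsets are the
# ones quoted above from the old UI; the file names are assumptions.
subprocess.run(
    [
        "esptool.py", "--chip", "esp32s3", "--port", "/dev/ttyACM0",
        "write_flash",
        "0x1000", "bootloader.bin",
        "0x8000", "partition-table.bin",
        "0x10000", "edgeLab.bin",
        # 0x400000 is the fourth offset quoted above, presumably the model.
    ],
    check=True,
)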

Segmentation support for models

Dear ML enthusiasts and Seeed employees.
I trust this message finds you well and thriving.

Describe the feature

Currently, the primary focus of the YOLOv5 model implementation in SSCMA is on object detection, classification, and bounding-box prediction tasks.

Yet there is substantial potential in incorporating segmentation: the identification of pixel-level masks for objects present within an image or video.

In essence, this would make it possible to know how much of the sensor/image is occupied by the identified label, along with its coordinates and the resulting pixel-mask.

Motivation

The implications of this feature are discussed through prototype example use-cases associated with Grove AI and SenseCAP A1101 for Seeed Studio's clientele, covering interest in both LoRaWAN applications and real-time computer vision, thereby yielding collective advantages. Leaving the implementation and use of the pixel-mask to the firmware opens up many ways to make use of the data on the edge; even with LoRaWAN, one can uplink a solution derived from the pixel-mask, or uplink the pixel-mask coordinate path, depending on the payload size.

Precision Agriculture: Farming scenarios can leverage segmentation to differentiate lands based on productivity, crop variety, and yield forecasts, capitalizing on sensors capable of extracting pixel-masks, thus presenting pragmatic agricultural process applications.

Industrial Inspection: Experimental laboratory settings may utilize pixel-mask extraction for inspecting PCB wafers before deploying costly machine vision cameras.

Volume Calculation: With a model trained for multiple area and object considerations, pixel-mask detection can be employed provisionally to compute the volumetric occupation of objects.

Smart Cities: Segmentation can validate that appropriately sized vehicles occupy specific spaces, for example, confirming an SUV spot is being occupied by a U-Haul truck.

Industry: Large shipping yards and industrial areas can deploy segmentation to enforce safety regulations by detecting objects breaching OSHA rules utilising more advanced computer vision than just object detection.

Remote Sensing: Suitable drones could use pixel-masks to detect and monitor changes in land usage, produce, deforestation, and urban sprawl.

My brief research ( N/A SSCMA )

In preparation for writing to Seeed Studio, I have examined Seeed-Studio's yolov5-swift fork and made notes on YOLOv5's segmentation support, which was added a few months later upstream through this sizeable commit - YOLOv5 segmentation model support.
I do recognize the significant changes and features that the Seeed Studio team maintains in this fork; with the TFLite implementations involved, I understand where we are now and why this has not been a priority.

Related resources ( N/A SSCMA )

The upstream feature discussion was very useful for getting a briefing on how things were implemented in v7.0 - YOLOv5 SOTA Realtime Instance Segmentation #10258
Official announcement Introducing Instance Segmentation in YOLOv5 v7.0

Most importantly, regarding papers on the subject: unfortunately I found no single specific paper. Ultralytics refers to the COCO implementation for their segmentation model. Instance Segmentation

I made use of Roboflow's documentation and notebooks on using their online tool to segment an image dataset for training: What is YOLOv5 Instance Segmentation?. Roboflow is an easy ad-hoc tool, but it is not required to work on segmentation.

Addendum

After spending more time with SSCMA, I think the relevant topic is mmyolo's segmentation implementation: 15_minutes_instance_segmentation.md

I believe these improvements can further bolster the functionality and applicability of SSCMA and expand its reach across multiple verticals. I am deeply appreciative of your work thus far on yolov5-swift, and I hope to contribute by providing the best possible proposition through this suggestion.

I eagerly anticipate your feedback regarding the proposal. Following my prior formal submission, I feel compelled to expound upon my perspectives, and share these insights within our community.

Best Regards,

Aciid

Checking that the ONNX export is successful for pfld_mbv2n_112.py fails with AttributeError

Describe the bug
Checking that the ONNX export is successful for configs/pfld/pfld_mbv2n_112.py fails with AttributeError: 'MeterData' object has no attribute '_metainfo'. Did you mean: 'metainfo'?

Environment
Environment you use when bug appears:

  1. Python version: 3.10

  2. PyTorch Version: torch==2.0.1

  3. MMCV Version: 2.0.1

  4. EdgeLab Version: na

  5. Code you run
    python3 tools/inference.py configs/pfld/pfld_mbv2n_112.py work_dirs/pfld_mbv2n_112/epoch_1_float32.onnx --cfg-options data_root=datasets/meter/

  6. The detailed error

Traceback (most recent call last):
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/inference.py", line 352, in <module>
    main()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/inference.py", line 336, in main
    runner = Infernce(
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/utils/inference.py", line 289, in __init__
    self.class_name = dataloader.dataset._metainfo['classes']
AttributeError: 'MeterData' object has no attribute '_metainfo'. Did you mean: 'metainfo'?
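
As the AttributeError itself hints, MMEngine datasets expose metainfo as a public property, while _metainfo is a private attribute that this custom MeterData class evidently lacks. An assumed one-line fix (a sketch, not a confirmed upstream patch):

# sscma/utils/inference.py, line 289 per the traceback above -- assumed fix:
self.class_name = dataloader.dataset.metainfo['classes']  # was ._metainfo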

Additional context
Running on Mac M2, torch cpu-only, mmcv compiled from source

Verifying TFlite export of fomo_mobnetv2_0.35_x8_abl_coco fails with IndexError

Describe the bug
Trying to verify the TFlite export of fomo_mobnetv2_0.35_x8_abl_coco fails with:

IndexError: index 6 is out of bounds for dimension 2 with size 3

This is strange because the model had been verified after training without problems.

Environment
Environment you use when bug appears:

  1. Python version: 3.10
  2. PyTorch Version: torch==2.0.1
  3. MMCV Version: 2.0.1
  4. EdgeLab Version: na
  5. Code you run
    python3 tools/inference.py configs/fomo/fomo_mobnetv2_0.35_x8_abl_coco.py "$(cat work_dirs/fomo_mobnetv2_0.35_x8_abl_coco/last_checkpoint | sed -e 's/.pth/_int8.tflite/g')" --show --cfg-options data_root="datasets/coco_mask"
  6. The detailed error
Traceback (most recent call last):
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/inference.py", line 352, in <module>
    main()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/inference.py", line 348, in main
    runner.test()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/utils/inference.py", line 397, in test
    self.evaluator.process(data_batch=data, data_samples=data['data_samples'])
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/evaluator/evaluator.py", line 60, in process
    metric.process(data_batch, _data_samples)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/evaluation/fomo_metric.py", line 74, in process
    tp, fp, fn = multi_apply(self.compute_ftp, preds, target)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmdet/models/utils/misc.py", line 219, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/evaluation/fomo_metric.py", line 44, in compute_ftp
    if site in preds_index and preds_max[site.chunk(3)] == target_max[ti.chunk(3)]:
IndexError: index 6 is out of bounds for dimension 2 with size 3

Additional context
Running on Mac M2, torch cpu-only, mmcv compiled from source

Training pfld_mbv2n_112 gives `ValueError: operands could not be broadcast together`

Describe the bug
Training with config configs/pfld/pfld_mbv2n_112.py --cfg-options data_root=datasets/meter/ gives ValueError: operands could not be broadcast together with shapes (1638,3) (6,)

Environment
Environment you use when bug appears:

  1. Python version: 3.10
  2. PyTorch Version: torch==2.0.1
  3. MMCV Version: 2.0.1
  4. EdgeLab Version: na
  5. Code you run
    python tools/train.py configs/pfld/pfld_mbv2n_112.py --cfg-options data_root=datasets/meter/ epochs=10
  6. The detailed error
Traceback (most recent call last):
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/train.py", line 226, in <module>
    main()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/train.py", line 221, in main
    runner.train()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/runner/loops.py", line 96, in run
    self.run_epoch()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
    self.run_iter(idx, data_batch)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/runner/loops.py", line 128, in run_iter
    outputs = self.runner.model.train_step(
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/models/detectors/fomo.py", line 99, in train_step
    losses = self._run_forward(data, mode='loss')  # type: ignore
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
    results = self(**data, mode=mode)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/models/detectors/fomo.py", line 70, in forward
    return self.loss(inputs, data_samples)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmdet/models/detectors/single_stage.py", line 78, in loss
    losses = self.bbox_head.loss(x, batch_data_samples)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/models/heads/fomo_head.py", line 112, in loss
    loss = self.loss_by_feat(pred, batch_gt_instances, batch_img_metas, batch_gt_instances_ignore)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/models/heads/fomo_head.py", line 147, in loss_by_feat
    loss, cls_loss, bg_loss, P, R, F1 = multi_apply(self.lossFunction, preds, target)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmdet/models/utils/misc.py", line 219, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/models/heads/fomo_head.py", line 185, in lossFunction
    P, R, F1 = self.get_pricsion_recall_f1(preds, data)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/sscma/models/heads/fomo_head.py", line 231, in get_pricsion_recall_f1
    if site in preds_index:
ValueError: operands could not be broadcast together with shapes (1638,3) (6,) 

Additional context
Running on Mac M2, torch cpu-only, mmcv compiled from source
This line in ModelAssistant/sscma/models/heads/fomo_head.py is suspicious:

site = np.concatenate([ti, po], axis=0)

Up until that line the sizes were compatible. Changing that line to an np.sum of the arrays creates another problem later.
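
For readers puzzled that a plain if statement raises a ValueError: on NumPy arrays, the in operator performs a broadcast elementwise comparison, so mismatched shapes fail exactly like this. A toy reproduction (shapes only, not the real data):

import numpy as np

preds_index = np.zeros((1638, 3))  # stand-in for the predicted index array
site = np.zeros(6)                 # stand-in for the concatenated [ti, po]

# `site in preds_index` evaluates (preds_index == site).any(), which tries to
# broadcast (1638, 3) against (6,) and raises the ValueError quoted above.
site in preds_index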

Firmware for Grove AI is 0 Byte

Hi,

I realized the firmware for the Grove Vision AI sensor has a size of 0 bytes. When copying the firmware and the model, the sensor disconnects correctly after saving the files. However, I suspect that the firmware has not been updated and is still the old one:

  • If I upload a firmware with which I am not able to see the camera output, and then upload your firmware and model afterwards, I still can't see the camera output
  • However, if I upload a firmware that allows me to see camera output on the webtool, and then upload your firmware and model, I am able to see the output

Also, when I do see output, I only see the image without bounding boxes or any detection result.
Might this be due to the firmware, or am I overlooking something?

Training of swift_yolo_tiny_1xb16_300e_coco gives "list index out of range"

Describe the bug
When training with the config swift_yolo_tiny_1xb16_300e_coco, I get IndexError: list index out of range after 5 epochs

Environment
Environment you use when bug appears:

  1. Python version: 3.10
  2. PyTorch Version: torch==2.0.1
  3. MMCV Version: 2.0.1
  4. EdgeLab Version: na
  5. Code you run:
    python tools/train.py configs/swift_yolo/swift_yolo_tiny_1xb16_300e_coco.py --cfg-options data_root=datasets/digital_meter/ epochs=10
  6. The detailed error
Traceback (most recent call last):
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/train.py", line 226, in <module>
    main()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/train.py", line 221, in main
    runner.train()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/runner/loops.py", line 102, in run
    self.runner.val_loop.run()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/runner/loops.py", line 374, in run
    metrics = self.evaluator.evaluate(len(self.dataloader.dataset))
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/evaluator/evaluator.py", line 79, in evaluate
    _results = metric.evaluate(size)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmengine/evaluator/metric.py", line 133, in evaluate
    _metrics = self.compute_metrics(results)  # type: ignore
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmdet/evaluation/metrics/coco_metric.py", line 419, in compute_metrics
    result_files = self.results2json(preds, outfile_prefix)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/mmdet/evaluation/metrics/coco_metric.py", line 239, in results2json
    data['category_id'] = self.cat_ids[label]
IndexError: list index out of range

Additional context
Running on Mac M2, torch cpu-only, mmcv compiled from source

Feature Request: Web Streaming Audio

Hi @LynnL4, can you suggest which repository I should put this feature request in?

Feature Request: Web Streaming audio.

Like the XIAO ESP32S3 camera streaming webpage, does Seeed Studio have Arduino code for a streaming audio webpage?

Better yet, can the streaming audio be done at the same time as the streaming video, all on the same webpage? Probably not, but I thought I would ask.

Jeremy

Does ModelAssistant still support the ESP32-S3-EYE?

I followed the procedures outlined in SSCMA and Edgelab to train and deploy FOMO. I noticed that the current deployment method is through SenseCraft-Web-Toolkit. Does this mean that the current version no longer supports ESP32-S3-EYE?

Grove and XIAO-esp32S3 startup for RoboCar

@LynnL4 @stevehuang82

An old link that Steve helped with:
HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2#3 (comment)

My main RoboCar site, for which I used the Arduino Portenta H7, is at https://github.com/hpssjellis/robocar

My RoboCar has a few strange effects that I can probably figure out myself, but I thought I should share what I am having issues with, as they might be bugs in your setup.

  1. On startup there is about a 4 second lag before anything happens.
  2. The car runs full speed after the time lag
  3. Serial monitor shows errors until an object is detected, then the serial monitor prints out correct readings.
  4. When the first object is detected the car stops running at full speed and starts working fairly well
  5. AI.boxes()[0].score seems to keep the last reading instead of getting zeroed each loop.
  6. I am expecting a classification above 85%, but I rarely see anything below 78%. If I wanted a lower classification value, could I set that from the XIAO side, or is the lowest acceptable value pre-programmed into the Grove model?

I am very new to programming the Grove Vision AI V2 using the XIAO-ESP32S3; can you see if there is anything obviously incorrect with my code? No worries if you don't have time, I will be debugging over the next few weeks.

The code is fairly straightforward: it has one servo motor that turns left, center, or right, and the drive motor simply stops or starts at a preset speed.

/*
 * Connections XIAO-ESP32S3 to Grove Vision AI V2
 * GND to GND
 * 3V3 to 3V3
 * SDA (D4) to SDA Grove
 * SCL (D5) to SCL Grove
 *
 * Connections XIAO-ESP32S3 to Servo
 * D2 to orange wire
 *
 * Connections XIAO-ESP32S3 to Big Motor Driver
 * D0 to top left 1    digital turn
 * D6 to top left 3    PWM motor speed
 * D1 to top left 6    digital turn
 * 3V3 to top left 7
 * GND to top left 8
 */



#include <Seeed_Arduino_SSCMA.h>
#include <ESP32Servo.h>   // for XIAO-ESP32S3-Sense


SSCMA AI;
Servo myServo_D2;
int myMainSpeed = 37;   // slowest speed that the car moves on a charged battery



void setup(){
    AI.begin();
    Serial.begin(115200);
    
    // Stay away from pins D4 (SDA) and D5 (SCL), which are used for I2C communication with the Grove board.
    // D3 is the reset pin when both boards are connected, so using it messes things up.

    myServo_D2.attach(D2); // D2 should do PWM on the XIAO
    pinMode(D6, OUTPUT);   // PWM 0 to 255
    pinMode(D1, OUTPUT);   // digital 0 to 1
    pinMode(D0, OUTPUT);   // digital 0 to 1

    analogWrite(D6, 0);      // have car stopped at beginning
                                // both off = glide, both on = brake (if motor can do that)
    digitalWrite(D0, 0);    // not needing to be attached
    digitalWrite(D1, 1);    // set one direction
}

void loop()
{
    if (!AI.invoke()) {

     // Note: these member reads have no effect on their own; presumably they
     // were meant to be printed, e.g. Serial.println(AI.perf().inference);
     AI.perf().prepocess;
     AI.perf().inference;
     AI.perf().postprocess;


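     // Hedged note: if AI.boxes() is empty (nothing detected), boxes()[0] below
     // reads stale or out-of-range data, which would match symptoms 3 and 5 in
     // the list above. Assuming the library returns a std::vector-like
     // container, a guard here such as
     //   if (AI.boxes().empty()) { analogWrite(D6, 0); return; }
     // would avoid that.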
     if (AI.boxes()[0].score > 85 ){
      Serial.print(String(AI.boxes()[0].score)+", ");
      analogWrite(D6, myMainSpeed);   // go medium  

      if( AI.boxes()[0].x < 100){
            Serial.println("Right");
            myServo_D2.write(110); // turn Right
      }
      else if(AI.boxes()[0].x >= 100 && AI.boxes()[0].x <= 150 ){

            Serial.println("Center");
            myServo_D2.write(90); // turn center
      }

      else if (AI.boxes()[0].x > 150) {
            Serial.println("Left");
            myServo_D2.write(70); // turn left
      }
     }
      else {
        Serial.println(String(AI.boxes()[0].score)+", None");
        analogWrite(D6, 0);   // No objects detected so stop
      }



    }

   
}

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/A9dCpjHPfE or add me on WeChat (ID: van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repo branches:

Repo              OpenMMLab 1.0 branch   OpenMMLab 2.0 branch
MMEngine          -                      0.x
MMCV              1.x                    2.x
MMDetection       0.x, 1.x, 2.x          3.x
MMAction2         0.x                    1.x
MMClassification  0.x                    1.x
MMSegmentation    0.x                    1.x
MMDetection3D     0.x                    1.x
MMEditing         0.x                    1.x
MMPose            0.x                    1.x
MMDeploy          0.x                    1.x
MMTracking        0.x                    1.x
MMOCR             0.x                    1.x
MMRazor           0.x                    1.x
MMSelfSup         0.x                    1.x
MMRotate          0.x                    1.x
MMYOLO            -                      0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

Exporting model to TFlite fails with `quantized engine FBGEMM is not supported`

Describe the bug
Trying to export the model with config configs/pfld/pfld_mbv2n_112.py fails with RuntimeError: quantized engine FBGEMM is not supported

Environment
Environment you use when bug appears:

  1. Python version: 3.10
  2. PyTorch Version: torch==2.0.1
  3. MMCV Version: 2.0.1
  4. EdgeLab Version: na
  5. Code you run
    python3 tools/export.py configs/pfld/pfld_mbv2n_112.py work_dirs/pfld_mbv2n_112/epoch_1.pth --target tflite --cfg-options data_root=datasets/meter/
  6. The detailed error
Traceback (most recent call last):
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 509, in <module>
    main()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 501, in main
    export_tflite(args, model, loader)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 375, in export_tflite
    ptq_model = quantizer.quantize()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 530, in quantize
    qat_model = self.prepare_qat(rewritten_graph, self.is_input_quantized, self.backend, self.fuse_only)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 3664, in prepare_qat
    self.prepare_qat_prep(graph, is_input_quantized, backend)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 714, in prepare_qat_prep
    self.prepare_qconfig(graph, backend)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 3598, in prepare_qconfig
    torch.backends.quantized.engine = backend
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/torch/backends/quantized/__init__.py", line 33, in __set__
    torch._C._set_qengine(_get_qengine_id(val))
RuntimeError: quantized engine FBGEMM is not supported

Additional context
Running on Mac M2, torch cpu-only, mmcv compiled from source
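
Background that may help: FBGEMM is PyTorch's x86 quantization backend, so it is typically unavailable in ARM builds such as those for an M2. A quick check of what the local build supports (an illustration only; SSCMA's exporter selects the backend through tinynn, so this is not a drop-in fix):

import torch

# List the quantized engines compiled into this PyTorch build; on Apple
# Silicon this typically includes 'qnnpack' but not 'fbgemm'.
print(torch.backends.quantized.supported_engines)

# Selecting the ARM-friendly engine succeeds where 'fbgemm' raises the
# RuntimeError quoted above.
torch.backends.quantized.engine = 'qnnpack'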

Steps to make the binaries using the Arduino IDE and install them using SenseCraft

@LynnL4

I always start projects from the basics. I would like to take a simple LED and serial print program and get it running on SenseCraft. Exporting the binaries for the app (0x10000), partition (0x8000), and bootloader (0x1000) from the Arduino IDE is easy, but installing the binaries using SenseCraft or the ESP tool does not seem to work.

If, with some suggestions, I can get that simple installation working, I would then like to look into the EdgeLab app and see how to get serial output working with SenseCraft.

P.S. @LynnL4, do you have my Arduino library installed? The Portenta Pro Community Solutions? I will probably make a XIAO version of this library, but most of the code already works with a few pin changes. The basic starting program I always use is called dot11-hello-blink.ino. I would like to get it working first with just serial output, and then with EdgeLab so the serial output shows up on the SenseCraft screen.

ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.

Describe the bug
Training configs/fomo/fomo_person.py fails with ValueError: setting an array element with a sequence (full traceback below).

Environment
Environment you use when bug appears:

  1. Python version: 3.8
  2. PyTorch Version: 2.0.0+cu118
  3. MMCV Version: 2.0.1
  4. EdgeLab Version
  5. Code you run: python tools/train.py configs/fomo/fomo_person.py --cfg-options \work_dir=work_dirs/fomo_300 num_classes=1 epochs=300 height=416 width=416 data_root=datasets/coco_person/
  6. The detailed error

Traceback (most recent call last):
  File "tools/train.py", line 226, in <module>
    main()
  File "tools/train.py", line 221, in main
    runner.train()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 96, in run
    self.run_epoch()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
    self.run_iter(idx, data_batch)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 128, in run_iter
    outputs = self.runner.model.train_step(
  File "/home/ubuntu/Documents/ModelAssistant/sscma/models/detectors/fomo.py", line 99, in train_step
    losses = self._run_forward(data, mode='loss')  # type: ignore
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
    results = self(**data, mode=mode)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubuntu/Documents/ModelAssistant/sscma/models/detectors/fomo.py", line 70, in forward
    return self.loss(inputs, data_samples)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmdet/models/detectors/single_stage.py", line 78, in loss
    losses = self.bbox_head.loss(x, batch_data_samples)
  File "/home/ubuntu/Documents/ModelAssistant/sscma/models/heads/fomo_head.py", line 115, in loss
    loss = self.loss_by_feat(pred, batch_gt_instances, batch_img_metas, batch_gt_instances_ignore)
  File "/home/ubuntu/Documents/ModelAssistant/sscma/models/heads/fomo_head.py", line 150, in loss_by_feat
    loss, cls_loss, bg_loss, P, R, F1 = multi_apply(self.lossFunction, preds, target)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmdet/models/utils/misc.py", line 219, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/home/ubuntu/Documents/ModelAssistant/sscma/models/heads/fomo_head.py", line 188, in lossFunction
    P, R, F1 = self.get_pricsion_recall_f1(preds, data)
  File "/home/ubuntu/Documents/ModelAssistant/sscma/models/heads/fomo_head.py", line 229, in get_pricsion_recall_f1
    site = np.sum([ti, po], axis=0)
  File "<__array_function__ internals>", line 200, in sum
  File "/home/ubuntu/anaconda3/envs/model2/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 2324, in sum
    return _wrapreduction(a, np.add, 'sum', axis, dtype, out, keepdims=keepdims,
  File "/home/ubuntu/anaconda3/envs/model2/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.

Additional context
I'm using the latest version of ModelAssistant.

After I change site = np.sum([ti, po], axis=0) in ModelAssistant/sscma/models/heads/fomo_head.py to site = np.concatenate([ti, po], axis=0), it starts training, but I get another error when training the model:

Traceback (most recent call last):
  File "tools/train.py", line 226, in <module>
    main()
  File "tools/train.py", line 221, in main
    runner.train()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 102, in run
    self.runner.val_loop.run()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 371, in run
    self.run_iter(idx, data_batch)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 392, in run_iter
    self.evaluator.process(data_samples=outputs, data_batch=data_batch)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmengine/evaluator/evaluator.py", line 60, in process
    metric.process(data_batch, _data_samples)
  File "/home/ubuntu/Documents/ModelAssistant/sscma/evaluation/fomo_metric.py", line 74, in process
    tp, fp, fn = multi_apply(self.compute_ftp, preds, target)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/mmdet/models/utils/misc.py", line 219, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/home/ubuntu/Documents/ModelAssistant/sscma/evaluation/fomo_metric.py", line 48, in compute_ftp
    confusion = confusion_matrix(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/sklearn/utils/_param_validation.py", line 214, in wrapper
    return func(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/sklearn/metrics/_classification.py", line 326, in confusion_matrix
    y_type, y_true, y_pred = _check_targets(y_true, y_pred)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/sklearn/metrics/_classification.py", line 84, in _check_targets
    check_consistent_length(y_true, y_pred)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/sklearn/utils/validation.py", line 407, in check_consistent_length
    raise ValueError(
ValueError: Found input variables with inconsistent numbers of samples: [144, 24]

Additional context
If I set tp = fp = fn = 0, it works. But when I convert the model to binary and deploy it on the ESP32-S3 Eye, I encounter the following errors:

Didn't find op for builtin opcode 'TRANSPOSE'
Failed to get registration from op code TRANSPOSE
AllocateTensors() failed

and

W (4927) cam_hal: Failed to get the frame on time!

ERROR A stack overflow in task app_camera has been detected.

Backtrace: 0x40375b5e:0x3fca8680 0x4037cc1d:0x3fca86a0 0x4037f64a:0x3fca86c0 0x4037e2bf:0x3fca8740 0x4037f758:0x3fca8760 0x4037f74e:0xa5a5a5a5 |<-CORRUPTED
0x40375b5e: panic_abort at /home/ubuntu/esp/v5.1.2/esp-idf/components/esp_system/panic.c:452
0x4037cc1d: esp_system_abort at /home/ubuntu/esp/v5.1.2/esp-idf/components/esp_system/port/esp_system_chip.c:84
0x4037f64a: vApplicationStackOverflowHook at /home/ubuntu/esp/v5.1.2/esp-idf/components/freertos/FreeRTOS-Kernel/portable/xtensa/port.c:581
0x4037e2bf: vTaskSwitchContext at /home/ubuntu/esp/v5.1.2/esp-idf/components/freertos/FreeRTOS-Kernel/tasks.c:3729
0x4037f758: _frxt_dispatch at /home/ubuntu/esp/v5.1.2/esp-idf/components/freertos/FreeRTOS-Kernel/portable/xtensa/portasm.S:450
0x4037f74e: _frxt_int_exit at /home/ubuntu/esp/v5.1.2/esp-idf/components/freertos/FreeRTOS-Kernel/portable/xtensa/portasm.S:245

Wrong .zip link on the Seeed Studio getting started page for Grove Vision AI V2

@LynnL4

If you go to this link

https://wiki.seeedstudio.com/grove_vision_ai_v2/

and go about 6 pages down and find the line that says

Windows CDC driver: CH343CDC.ZIP

it actually loads the other .zip file; it should load this .zip file:

https://files.seeedstudio.com/wiki/grove-vision-ai-v2/res/CH343CDC.ZIP

P.S. I just got 2 Grove Vision AI V2 boards. I thought they were not working, but the USB-C cable takes a fair bit of force to plug in properly, until you can hear the click. All is working great; what a great board.
