
Seeed Studio EdgeLab


Introduction

Seeed Studio EdgeLab is an open-source project focused on embedded AI. We have optimized excellent algorithms from OpenMMLab for real-world scenarios and made implementation more user-friendly, achieving faster and more accurate inference on embedded devices.

What's included

Currently we support algorithms in the following areas:

  • Anomaly Detection (coming soon) — In the real world, anomalous data is often difficult to identify, and even when it can be identified, labeling it carries a very high cost. Anomaly detection algorithms collect only normal data, which is cheap to gather, and treat anything that deviates from it as anomalous.
  • Computer Vision — Here we provide a number of computer vision algorithms such as object detection, image classification, image segmentation and pose estimation. However, these algorithms are usually too heavy to run on low-cost hardware. EdgeLab optimizes them to achieve good running speed and accuracy on low-end devices.
  • Scenario-Specific — Algorithms for specific scenarios, such as the recognition of analog meters and traditional digital meters, and audio classification.
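As a toy illustration of the anomaly-detection idea (not EdgeLab's actual algorithm), a threshold fitted from normal data alone can flag everything outside the normal band:

```python
import statistics

def fit_threshold(normal_samples, k=3.0):
    """Learn a simple anomaly band from normal data only.

    Anything further than k standard deviations from the mean of the
    normal samples is treated as anomalous -- no anomalous labels needed.
    """
    mu = statistics.mean(normal_samples)
    sigma = statistics.stdev(normal_samples)
    return mu, k * sigma

def is_anomalous(sample, mu, band):
    return abs(sample - mu) > band

# Collect "normal" sensor readings in a low-cost way, then fit.
normal = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]
mu, band = fit_threshold(normal)
print(is_anomalous(20.1, mu, band))  # False: within the normal band
print(is_anomalous(35.0, mu, band))  # True: far outside normal data
```

Real embedded anomaly detectors use more robust models, but the contract is the same: train on cheap normal data, flag deviations.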

We will keep adding more algorithms in the future. Stay tuned!

Features

  • User-friendly — EdgeLab provides a user-friendly platform that allows users to easily train on collected data and to better understand algorithm performance through visualizations generated during training.
  • Low computing power, high performance — EdgeLab focuses on edge-side AI algorithm research; its models can be deployed on microcontrollers such as the ESP32, on some Arduino development boards, and even on embedded SBCs such as the Raspberry Pi.
  • Multiple model export formats — TensorFlow Lite is mainly used on microcontrollers, while ONNX is mainly used on devices running Embedded Linux. Special formats such as TensorRT and OpenVINO are already well supported by OpenMMLab. EdgeLab adds TFLite model export for microcontrollers; the exported model can be converted to the UF2 format and dragged and dropped onto the device for deployment.
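The UF2 container mentioned above is a simple published block format. A minimal sketch of packing a raw binary (for example, a converted TFLite model) into UF2 blocks — the base address is an assumption, and the family-ID handling expected by specific firmware is omitted:

```python
import struct

# Magic constants from the UF2 container specification.
UF2_MAGIC_START0 = 0x0A324655
UF2_MAGIC_START1 = 0x9E5D5157
UF2_MAGIC_END = 0x0AB16F30

def bin_to_uf2(data: bytes, base_addr: int = 0x0) -> bytes:
    """Pack a raw binary into 512-byte UF2 blocks of 256 payload bytes."""
    payload = 256
    num_blocks = (len(data) + payload - 1) // payload
    out = bytearray()
    for i in range(num_blocks):
        # Pad each chunk to the fixed 476-byte data area.
        chunk = data[i * payload:(i + 1) * payload].ljust(476, b"\x00")
        out += struct.pack(
            "<IIIIIIII",
            UF2_MAGIC_START0, UF2_MAGIC_START1,
            0,                        # flags (no family ID in this sketch)
            base_addr + i * payload,  # target flash address of this block
            payload, i, num_blocks,
            0,                        # file size / family ID field
        )
        out += chunk
        out += struct.pack("<I", UF2_MAGIC_END)
    return bytes(out)
```

Each resulting block is self-describing, which is what makes the drag-and-drop flashing below possible.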

Experience EdgeLab in 3 easy steps!

Now let's experience EdgeLab in the fastest way by deploying it on the Grove - Vision AI Module and SenseCAP A1101!

1. Download pretrained model and firmware
  • Step 1. We provide two different models, for object detection and analog meter reading. Click on the model you want to use to download it.

  • Step 2. We provide two different firmware images, for the Grove - Vision AI Module and the SenseCAP A1101. Click on the firmware you want to use to download it.

2. Deploy model and firmware
  • Step 1. Connect the Grove - Vision AI Module / SenseCAP A1101 to your PC using a USB Type-C cable.
  • Step 2. Double-click the boot button to enter boot mode.
  • Step 3. You will then see a new storage drive in your file explorer, named GROVEAI for the Grove - Vision AI Module and VISIONAI for the SenseCAP A1101.
  • Step 4. Drag and drop firmware.uf2 first, and then the model.uf2 file, onto the GROVEAI or VISIONAI drive.

Once copying is finished, the GROVEAI or VISIONAI drive will disappear. This is how you can check whether the copy was successful.
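On a Linux PC the same drag-and-drop can be scripted. A sketch, where the mount point and file paths are assumptions — check your file manager for the actual drive location:

```python
import shutil
from pathlib import Path

def flash_uf2(drive: Path, *uf2_files: Path) -> None:
    """Copy UF2 files onto the device's mass-storage drive, in order.

    The firmware image must go first, then the model; on real hardware
    the device reboots (and the drive disappears) once copying is done.
    """
    for uf2 in uf2_files:
        shutil.copy(uf2, drive / uf2.name)

# Hypothetical paths -- adjust to your downloads and actual mount point:
# flash_uf2(Path("/media/user/GROVEAI"),
#           Path("firmware.uf2"), Path("model.uf2"))
```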

3. View live detection results
  • Step 1. After loading the firmware and connecting to the PC, visit this URL

  • Step 2. Click the Connect button. A pop-up will then appear in the browser. Select Grove AI - Paired and click Connect.

Upon successful connection, you will see a live preview from the camera. Here the camera is pointed at an analog meter.

Now we need to set three points: the center point, the start point, and the end point.

  • Step 3. Click Set Center Point, then click on the center of the meter. A pop-up will appear to confirm the location; press OK.

You will see that the center point has been recorded.

  • Step 4. Click Set Start Point, then click on the first indicator point. A pop-up will appear to confirm the location; press OK.

You will see that the first indicator point has been recorded.

  • Step 5. Click Set End Point, then click on the last indicator point. A pop-up will appear to confirm the location; press OK.

You will see that the last indicator point has been recorded.

  • Step 6. Set the measuring range according to the first and last digits on the meter. For example, here we set it as From: 0, To: 0.16.
  • Step 7. Set the number of decimal places you want the result to display. Here we set it to 2.

Finally, you can see the live meter reading results as follows.
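The three points and the measuring range are enough to turn a detected pointer position into a reading. A geometric sketch of that mapping — not EdgeLab's actual implementation, and the point coordinates, sweep direction, and screen-vs-math y-axis convention are assumptions:

```python
import math

def meter_reading(center, start, end, pointer,
                  range_from=0.0, range_to=0.16, decimals=2):
    """Map the pointer's angular position between the start and end
    marks onto the configured measuring range."""
    def angle(p):
        # Angle of point p around the dial center, in [0, 2*pi).
        # (On screen coordinates the y-axis is flipped; adjust if needed.)
        return math.atan2(p[1] - center[1], p[0] - center[0]) % (2 * math.pi)

    # Sweep from the start mark to the end mark, and from the start mark
    # to the pointer, measured in the same rotational direction.
    full = (angle(end) - angle(start)) % (2 * math.pi)
    part = (angle(pointer) - angle(start)) % (2 * math.pi)
    value = range_from + (part / full) * (range_to - range_from)
    return round(value, decimals)

# Pointer halfway between the start and end marks reads mid-range:
print(meter_reading((0, 0), (1, 0), (0, 1), (1, 1)))  # 0.08
```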

Getting Started with EdgeLab

We provide end-to-end getting started guides for EdgeLab, where you can prepare datasets, train, and finally deploy AI models to embedded edge AI devices such as the Grove - Vision AI Module and SenseCAP A1101 for different AI applications.

Here we introduce two different platforms for running the commands:

  • Linux PC with a powerful GPU
  • Google Colab workspace

The advantage of using Google Colab is that you can work from any device with a web browser. In addition, it already comes with high-performance GPUs for training. Use the links below to access the tutorials.
