Introduction: Edge AI Inference of a TensorFlow MobileNet SSD Model on BrainyPi

In modern manufacturing, there is a need to classify and count packaged products efficiently and reliably as they pass along a conveyor belt. A well-trained object detection model makes this task highly efficient and minimizes human error.

After training such an object detection model, it is equally crucial to deploy it efficiently on the target device, in this case a conveyor-belt monitoring apparatus. This is where Edge AI comes in.

Edge AI is the deployment of artificial intelligence algorithms and models directly onto edge devices, such as smartphones and IoT hardware, instead of sending data to the cloud for processing.

Supplies

  • BrainyPi
  • Linux OS Terminal

Step 1: Containerize

Docker containerization is a technique for packaging software applications along with all their dependencies into standardized units called containers. Using Docker to run inference code on a Linux VM provides an efficient and consistent environment for deploying AI models, and developers can verify correct model inference in Docker before the actual deployment on the edge device. Refer to https://gitlab.iotiot.in/interns-projects/monitoring-assembly-line for steps to containerize your model.


Step 2: Remote Connection to BrainyPi

BrainyPi is a cutting-edge ARM single-board computer built for Edge AI and IoT applications. It offers enterprise-grade support for transitioning prototypes into fully customized hardware and software solutions, and its capabilities enable real-time AI processing directly at the edge.

We will use Secure Shell (SSH) to establish a remote connection to the BrainyPi, using the following command:

ssh -X pi@auth.iotiot.in -p 65530

After entering the password, we have an active SSH session on the BrainyPi.

Step 3: Installing Dependencies

Next, we clone our repository, which contains all the necessary files: test images, the TensorFlow SavedModel, TFRecords, inference code, etc.

git clone https://gitlab.iotiot.in/interns-projects/monitoring-assembly-line.git

Move into the project folder and install the necessary dependencies, such as tensorflow, opencv-python-headless, and tensorflow-object-detection-api. Refer to the Dockerfile in the above repository for the full list of dependencies.
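After installation, a quick sanity check confirms that the key packages are importable on the BrainyPi. The module names below are assumptions based on the usual pip-to-import mapping (opencv-python-headless imports as cv2, and the TF Object Detection API as object_detection):

```python
from importlib.util import find_spec

def missing_modules(modules):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if find_spec(m) is None]

# Assumed import names for the packages in the Dockerfile:
#   tensorflow               -> tensorflow
#   opencv-python-headless   -> cv2
#   TF Object Detection API  -> object_detection
required = ["tensorflow", "cv2", "object_detection"]

if __name__ == "__main__":
    absent = missing_modules(required)
    if absent:
        print("still missing:", ", ".join(absent))
    else:
        print("all dependencies installed")
```

Checking importability up front avoids discovering a missing package halfway through an inference run on the device.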

Step 4: Run Inference

Finally, we create a new folder to save our output images, and run inference on our test images.
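The post-processing around that inference loop can be sketched as follows. The shape of the detections dict follows the TF Object Detection API convention (keys like detection_boxes, detection_scores, detection_classes); the dummy values and the 0.5 score threshold are illustrative assumptions, not taken from the repository:

```python
import os

def keep_confident(detections, min_score=0.5):
    """Filter a TF Object Detection API style result dict, keeping only
    detections whose confidence score meets the threshold."""
    kept = []
    for box, score, cls in zip(detections["detection_boxes"],
                               detections["detection_scores"],
                               detections["detection_classes"]):
        if score >= min_score:
            kept.append({"box": box, "score": score, "class": cls})
    return kept

if __name__ == "__main__":
    # Folder for the annotated output images.
    os.makedirs("output", exist_ok=True)

    # Illustrative stand-in for the dict the detection model returns
    # per image; in practice this comes from the loaded SavedModel.
    dummy_detections = {
        "detection_boxes": [[0.1, 0.1, 0.4, 0.4], [0.5, 0.5, 0.9, 0.9]],
        "detection_scores": [0.92, 0.31],
        "detection_classes": [1, 2],
    }
    for det in keep_confident(dummy_detections):
        print(det)
```

Thresholding the scores this way is what turns raw model output into a usable product count: only boxes the model is confident about are drawn onto the saved images and counted.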