Introduction: Recycle Sorting Robot

Did you know that the average contamination rate in communities and businesses ranges up to 25%? That means one out of every four pieces of recycling you throw away doesn't get recycled. This is largely caused by human error at recycling centers, where workers traditionally sort trash into different bins depending on the material. Humans are bound to make mistakes and not sort everything properly, leading to contamination. As pollution and climate change become ever more pressing issues, recycling plays a huge part in protecting our planet. Using robots to sort trash can drastically lower contamination rates, while also being cheaper and more sustainable. That's why I created a recycle sorting robot that uses machine learning to sort between different recyclable materials.

Step 1: Parts

Step 2: 3D Printed Parts

You will need to 3D print all the parts for the robotic arm. You can find all the files here.

Step 3: Code

Please clone my GitHub repository to follow along with this tutorial.

Step 4: Getting the Data

To train the object detection model that can detect and recognize different recycling materials, I used the trashnet dataset, which includes 2,527 images:

  • 501 glass
  • 594 paper
  • 403 cardboard
  • 482 plastic
  • 410 metal
  • 137 trash

The picture above is an example of one of the images from the dataset.

This dataset is quite small for training an object detection model. The trash class has only 137 images, which is too few to train an accurate model, so I decided to leave that class out.

You can use this Google Drive folder to download the dataset. Make sure to download the dataset-resized.zip file. It contains the images already resized to a smaller size to allow for faster training. If you would rather resize the raw images to your own liking, feel free to download the dataset-original.zip file.
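If you want to sanity-check the download, here is a minimal sketch that counts the images in each class. It assumes the extracted dataset-resized folder keeps trashnet's one-folder-per-material layout (glass, paper, cardboard, plastic, metal, trash); adjust the path to wherever you unzipped it.

import os

# Hypothetical path to the extracted dataset-resized folder; adjust to your setup.
DATASET_DIR = "dataset-resized"

# Count the images in each class subfolder (trashnet stores one folder per material).
for class_name in sorted(os.listdir(DATASET_DIR)):
    class_dir = os.path.join(DATASET_DIR, class_name)
    if os.path.isdir(class_dir):
        images = [f for f in os.listdir(class_dir)
                  if f.lower().endswith((".jpg", ".jpeg", ".png"))]
        print(f"{class_name}: {len(images)} images")

The printed counts should roughly match the numbers in the list above.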

Step 5: Labeling the Images

Next, we need to label several images of different recycling materials so we can train the object detection model. To do this, I used labelImg, a free tool that lets you label object bounding boxes in images.

Label each image with the proper label. This tutorial shows you how. Make each bounding box as close to the border of each object as possible to ensure the detection model is as accurate as possible. Save all the .xml files into a folder.

The photo above shows how to label your images.

This is a very tedious and mind-numbing task. Thankfully, I already labeled all the images for you! You can find them here.
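labelImg saves each annotation in Pascal VOC format. If you want to verify your labels programmatically, here is a minimal sketch that prints the class and bounding box from a single .xml file (the file name is just a hypothetical example):

import xml.etree.ElementTree as ET

# Hypothetical annotation file produced by labelImg; point this at one of your saved .xml files.
ANNOTATION_FILE = "glass1.xml"

tree = ET.parse(ANNOTATION_FILE)
root = tree.getroot()

# Each <object> element holds one labeled bounding box in Pascal VOC format.
for obj in root.findall("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    xmin = int(box.find("xmin").text)
    ymin = int(box.find("ymin").text)
    xmax = int(box.find("xmax").text)
    ymax = int(box.find("ymax").text)
    print(f"{name}: ({xmin}, {ymin}) -> ({xmax}, {ymax})")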

Step 6: Training

In terms of training, I decided to use transfer learning with TensorFlow. This allows us to train a decently accurate model without a large amount of data.

There are a couple of ways we can do this: on our local desktop machine or in the cloud. Training on your local machine can take a very long time depending on how powerful your computer is and whether you have a capable GPU. In my opinion this is the easiest way, but it comes with the downside of speed.

There are some key things to note about transfer learning. You need to make sure that the pre-trained model you use for training is compatible with the Coral Edge TPU. You can find compatible models here. I used the MobileNet SSD v2 (COCO) model. Feel free to experiment with others too.

To train on your local machine, I would recommend following Google's tutorial, or EdjeElectronics' tutorial if you are running Windows 10. Personally, I tested EdjeElectronics' tutorial and it worked on my desktop. I cannot confirm whether Google's tutorial will work, but I would be surprised if it doesn't.

To train in the cloud, you can use AWS or GCP. I found this tutorial that you can try. It uses Google's Cloud TPUs, which can train your object detection model very quickly. Feel free to use AWS as well.

Whether you train on your local machine or in the cloud, you should end up with a trained TensorFlow model.
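Whichever tutorial you follow, the TensorFlow Object Detection API expects a label map that assigns an ID to each class. Here is a minimal sketch that writes one for the five materials used in this project; the file name and exact class strings are assumptions and must match the labels you used in labelImg.

# The class names must match the labels you used in labelImg exactly (assumed here).
CLASSES = ["glass", "paper", "cardboard", "plastic", "metal"]

# Write a label map in the protobuf text format the TensorFlow Object Detection API expects.
# IDs start at 1 because 0 is reserved for the background class.
with open("label_map.pbtxt", "w") as f:
    for idx, name in enumerate(CLASSES, start=1):
        f.write("item {\n")
        f.write(f"  id: {idx}\n")
        f.write(f"  name: '{name}'\n")
        f.write("}\n\n")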

Step 7: Compiling the Trained Model

In order for your trained model to work with the Coral Edge TPU, you need to compile it.

Above is a diagram for the workflow.

After training, you need to save the model as a frozen graph (.pb file). Then, you need to convert it into a TensorFlow Lite model. Note how the diagram says "post-training quantization". If you used one of the compatible pre-trained models for transfer learning, you don't need to do this. Take a look at the full documentation on compatibility here.

Once you have the TensorFlow Lite model, you need to compile it into an Edge TPU model. See details on how to do this here.
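As a rough illustration, here is a sketch of the conversion step using the TensorFlow 1.x converter. The frozen-graph path, tensor names, and quantization stats are assumptions based on the standard SSD TFLite export; check the documentation linked above for the exact commands that apply to your model.

import tensorflow as tf  # TensorFlow 1.x

# Hypothetical path to the frozen graph produced by the Object Detection API's
# export_tflite_ssd_graph.py script; adjust to your export directory.
GRAPH_DEF_FILE = "tflite_graph.pb"
INPUT_NAME = "normalized_input_image_tensor"  # assumed standard SSD input tensor

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    GRAPH_DEF_FILE,
    input_arrays=[INPUT_NAME],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={INPUT_NAME: [1, 300, 300, 3]},
)
converter.allow_custom_ops = True        # the detection post-processing op is a custom op
converter.inference_type = tf.uint8      # quantized model from quantization-aware training
converter.quantized_input_stats = {INPUT_NAME: (128.0, 128.0)}  # assumed (mean, std_dev)

tflite_model = converter.convert()
with open("detect.tflite", "wb") as f:
    f.write(tflite_model)

# Afterwards, compile the .tflite file for the Edge TPU on the command line:
#   edgetpu_compiler detect.tflite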

Step 8: Recycle Detection Model

If you don't want to go through the hassle of training, converting, and compiling the object detection model, check out my recycle detection model here.

Step 9: Deploy the Model

The next step is to set up the Raspberry Pi (RPI) and Edge TPU to run the trained object detection model.

First, set up the RPI using this tutorial.

Next, set up the Edge TPU following this tutorial.

Finally, connect the RPI camera module to the Raspberry Pi.

You are now ready to test your object detection model!

If you cloned my repository already, you will want to navigate to the RPI directory and run the test_detection.py file:

python test_detection.py --model recycle_ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/detect_edgetpu.tflite --labels recycle_ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/labels.txt

A small window should pop up, and if you place a plastic water bottle or another recyclable item in front of the camera, it should be detected like in the image above.

Press the letter "q" on your keyboard to end the program.
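Under the hood, the script loads the compiled model with the TensorFlow Lite interpreter and the Edge TPU delegate. Here is a minimal sketch of that pattern; the model path comes from the command above, while the test image, label-file parsing, and output tensor order are assumptions that may differ slightly from my script.

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL = "recycle_ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/detect_edgetpu.tflite"
LABELS = "recycle_ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/labels.txt"

# Load the Edge TPU delegate so inference runs on the Coral accelerator.
interpreter = Interpreter(
    model_path=MODEL,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
height, width = input_details["shape"][1:3]

# Hypothetical still image; in the real script frames come from the Pi camera.
image = Image.open("test.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(input_details["index"], np.expand_dims(np.asarray(image), 0))
interpreter.invoke()

# SSD models output boxes, class indices, and scores (assumed output order).
output_details = interpreter.get_output_details()
boxes = interpreter.get_tensor(output_details[0]["index"])[0]
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]

labels = [line.strip() for line in open(LABELS)]
for i, score in enumerate(scores):
    if score > 0.5:
        print(labels[int(classes[i])], score, boxes[i])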

Step 10: Build the Robotic Arm

The robotic arm is a 3D printed arm I found here. Just follow the tutorial on setting it up.

The image above shows how my robotic arm turned out.

Make sure you connect the servo pins to the corresponding Arduino I/O pins in my code. Connect the servos from the bottom to the top of the arm in this order: 3, 11, 10, 9, 6, 5. Not connecting them in this order will cause the arm to move the wrong servo!

Test that it works by navigating to the Arduino directory and running the basicMovement.ino file. This will simply grab an object that you place in front of the arm and drop it behind it.

Step 11: Connecting the RPI & Robotic Arm

We first need to mount the camera module to the bottom of the claw. The image above shows how it should look.

Try to align the camera as straight as possible to minimize errors in grabbing the recognized recyclable material. You will need to use the long camera module ribbon cable from the materials list.

Next, you need to upload the roboticArm.ino file to the Arduino board.

Finally, we just have to connect a USB cable between the RPI's USB port and the Arduino's USB port. This will allow them to communicate via serial. Follow this tutorial on how to set this up.
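For reference, here is a minimal sketch of how the two boards can talk over serial using pyserial. The device path, baud rate, and command byte are assumptions for illustration; the actual protocol is defined by roboticArm.ino and my Python scripts.

import time
import serial

# Hypothetical serial settings; on most RPIs the Arduino shows up as /dev/ttyACM0 or /dev/ttyUSB0.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
time.sleep(2)  # give the Arduino time to reset after the serial connection opens

# Hypothetical command byte telling the arm to run its pick-and-drop routine.
arduino.write(b"1")

# Read back any acknowledgement the sketch prints over serial.
response = arduino.readline().decode().strip()
print("Arduino says:", response)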

Step 12: Final Touches

This step is completely optional, but I like to put all my components into a nice little project box.

The images above show how it looks.

You can find the project box on the materials list. I just drilled some holes and used brass standoffs to mount the electronics. I also mounted 4 cooling fans to keep constant airflow over the RPI and TPU when they get hot.

Step 13: Running

You are now ready to power on both the robotic arm and RPI! On the RPI, you can simply run the recycle_detection.py file. This will open a window and the robotic arm will start running just like in the demo video! Press the letter "q" on your keyboard to end the program.

Feel free to play around with the code and have fun!

Step 14: Future Work

I hope to use ROS (Robot Operating System) to control the robotic arm with more precise movements. This will enable more accurate picking up of objects.

Step 15: Questions?

Feel free to leave a comment below if you have any questions!

Participated in the Raspberry Pi Contest 2020