The annoying bar snack mix: everybody hates it. Why? Because it is all mixed together!
At Geekcon X, we built a machine to tackle this problem. Our machine sorts it all into five categories (see the category names in the next step) plus one "unidentified" category. This is done using computer vision and pattern-recognition algorithms.
The machine uses 6 motors, one of which is part of the vibrator assembly that helps spread the munchies on the conveyor belt. Enjoy your meal!
Step 1: The Computer Vision AI
Email us to get the complete source code. The picture on the left shows our test data repository: these are the classes we know how to identify. On the right you can see them recognized automatically (the Hebrew labels read "disgusting", "super disgusting", and "circles").
The middle picture is an actual photo taken by the system's camera and then labeled by our code.
Step 2: The Vibrating Feeder Assembly
The cone on top of the system is where the munchies come in. They are stopped by a motorized whisker that shakes them, hoping to separate the ones that are stuck together. On top of the feeder board you can see a vibration motor (a motor with an eccentric mass attached to it). It is only turned on when no munchies have fallen for a second. When there are no munchies for 5 seconds (things are stuck!), the motor is turned on at full power.
The vibrator is controlled by the main computer software through an Arduino board.
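The feeder logic above (off while munchies are flowing, gentle vibration after one quiet second, full power after five) can be sketched as a tiny decision function. This is an illustrative sketch, not the actual firmware; the duty-cycle values are made up.

```python
# Sketch of the feeder logic described above (illustrative, not the real code).
# The returned value would be sent to the Arduino as a PWM duty cycle (0-255).

def feeder_pwm(seconds_since_last_munchie: float) -> int:
    """Return the vibration motor duty cycle (0-255)."""
    if seconds_since_last_munchie < 1.0:
        return 0      # munchies are flowing: keep the motor off
    if seconds_since_last_munchie < 5.0:
        return 128    # nothing fell for a second: vibrate gently
    return 255        # stuck for 5 seconds: full power!
```

In the real system the "seconds since last munchie" counter is driven by the computer vision output watching the feeder outlet.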
From the cone outlet, the munchies slide down onto the conveyor belt. The impact separates them further. Whenever munchies are not touching each other on the conveyor belt, the computer-vision-based machine is able to sort them. Otherwise, they go into the "unrecognized" outlet.
Step 3: The Conveyor Belt and Smart Camera
Once released from the vibrating cone outlet, the munchies slide down and hit the rotating conveyor belt. This separates them further. They are filmed at 30 FPS by the camera, and the output is sent via Bluetooth to the main computer.
The conveyor belt, in this first version, is made from two paper cups glued together with duct tape. The axle is a skewer, held in place with hot glue. The hot glue also connects the skewer to the motor shaft. Surprisingly, it holds all the torque!
Matlab code (ask us for the sources) extracts three parameters: the ratio of area to circumference, the average color, and the total area. Make sure there is no point light source anywhere above the system: specular reflections may be recognized as food!
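The original feature extraction is Matlab code we are not posting here, but the idea can be sketched in plain Python: given a binary mask of a munchie and the color image, compute its total area, its area-to-perimeter ratio, and its average color. The exact definitions in the real Matlab code may differ (e.g. sub-pixel contour length instead of boundary-pixel counting).

```python
# Illustrative re-implementation of the three features the Matlab code uses:
# total area, area-to-perimeter ratio, and average color.
# Definitions are approximate; the original sources may compute them differently.

def extract_features(mask, color_image):
    """mask: 2D list of 0/1; color_image: 2D list of (r, g, b) tuples."""
    h, w = len(mask), len(mask[0])
    area = perimeter = 0
    r_sum = g_sum = b_sum = 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            area += 1
            r, g, b = color_image[y][x]
            r_sum, g_sum, b_sum = r_sum + r, g_sum + g, b_sum + b
            # a foreground pixel touching background (or the image border)
            # counts as a boundary pixel
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    perimeter += 1
                    break
    avg_color = (r_sum / area, g_sum / area, b_sum / area)
    return area, area / perimeter, avg_color
```

Round munchies maximize the area-to-perimeter ratio, which is what lets the classifier tell "circles" apart from the irregular ones.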
As a camera, a Nexus 7 tablet was used. The picture is broadcast via Bluetooth. The Arduino controller board also communicates with the PC via Bluetooth, so the PC does not have to be physically connected to the sorter.
Step 4: The Sorting Robot
The camera watches the conveyor belt, and the software identifies not just the type but also the exact position of the munchie's center of mass. Timing is deduced with a bunch of magic numbers (calibration!), so the box sorter knows what to do. The computer turns the robot toward the right box in time for the predicted time of arrival (ETA). In the current setup it takes 0.3 seconds to readjust the robot.
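The timing calculation boils down to simple kinematics: from the detected position and the belt speed, predict when the munchie reaches the sorter, then subtract the servo readjustment time to get the latest moment the robot may start moving. A sketch, with made-up calibration constants standing in for the "magic numbers":

```python
# Sketch of the ETA timing logic. All constants are hypothetical calibration
# values (the real ones are the "magic numbers" mentioned above).

BELT_SPEED_CM_S = 8.0   # belt speed (hypothetical)
SORTER_POS_CM = 25.0    # distance from camera origin to the sorter (hypothetical)
SERVO_ADJUST_S = 0.3    # time the robot needs to move to a new outlet

def move_deadline(detected_pos_cm: float, detection_time_s: float) -> float:
    """Latest time at which the sorter must start moving for this munchie."""
    eta = detection_time_s + (SORTER_POS_CM - detected_pos_cm) / BELT_SPEED_CM_S
    return eta - SERVO_ADJUST_S
```

If the deadline has already passed when the munchie is classified, the safe fallback is to leave the sorter pointing at the "unidentified" outlet.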
A small funnel near the end of the conveyor belt directs the stuff onto the sorter. The sorter is a 3D-printed [Designs: to be posted here] device operated by two angle-controlled servo motors. The inner (black) part can turn to one of three positions, and the whole assembly can flip back and forth. This allows directing the munchie stream into one of six outlets. In the picture, the boxes are bounded by the green floor. Alternatively, we could connect six tubes (not tried at Geekcon X) to drop the output into six different sacks.
It is easy to connect the tubes because the robot never makes a full turn. The tubes only need to survive a 180-degree turn, as well as a 45-degree turn around the other (horizontal) axis.
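The two servos address the six outlets combinatorially: three positions of the inner part times two flip positions gives 3 × 2 = 6 combinations. A sketch of the mapping, with hypothetical angle values (the real ones come from calibration):

```python
# Sketch of how two servos address six outlets. Angle values are hypothetical;
# the real ones come from calibrating the 3D-printed sorter.

INNER_ANGLES = {0: 45, 1: 90, 2: 135}   # three positions of the inner (black) part
FLIP_ANGLES = {0: 0, 1: 180}            # the whole assembly flips back and forth

def outlet_angles(outlet: int) -> tuple:
    """Map outlet index 0-5 to (inner_servo_angle, flip_servo_angle)."""
    if not 0 <= outlet <= 5:
        raise ValueError("outlet must be 0..5")
    return INNER_ANGLES[outlet % 3], FLIP_ANGLES[outlet // 3]
```

Because consecutive outlet indices share a flip position, sorting mostly moves only the faster inner servo, which helps stay within the 0.3-second readjustment budget.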
Step 5: The Bright Future!
The future of the QuickSorter project looks promising. Too bad we could not find enough time to put the transparent system into operation, but this is the external structure, perfectly cut with the help of Tom and Tal. The conveyor belt is made from a non-specular-reflecting material, so external lighting will not matter (thanks, Lee!). What remains is to put all the parts inside and calibrate the locations and timings in the algorithm.