Introduction: Intelligent Bottle Recycle Bin

I created this recycle bin together with Yeting Bao and Yuni Xie. Thanks for your dedication to this project :).

Use an easy-to-use machine learning tool to build an intelligent bottle recycle bin for the recycling station near you: once you drop a bottle into the bin, the screen beside it shows the bottle's material.


What we need: a box to drop bottles into, a Particle Photon circuit with a microphone, a PC connected to the Internet, and a button (we use an iPad as the button).

Step 1: Take a Look at How It Works

Step 2: Make a Box

Here we use four acrylic boards and one wood board to form the box. You can use any material you like, but make sure it is strong enough to withstand thousands of bottle drops, and, of course, it must make a sound when a bottle lands.

Step 3: Train Your Acoustic Machine Learning Model

Here, we use our recycle bin prototype to simulate throwing different types of bottles into a trash bin. Using the Teachable Machine website, we record the different dropping sounds and extract the sound samples. Then we use Train Model to teach the computer to recognize these different types of sounds. Don’t forget to export the model so it can be used on your website.

In this process, we collected the dropping sounds made by four types of containers commonly used in daily life: plastic bottles, cans, paper boxes, and glass.
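The exported Teachable Machine audio model returns one confidence score per trained class. As a minimal sketch, a hypothetical helper can turn that score array into the material name to display (the `LABELS` array, the `classify` function, and the 0.75 threshold are illustrative assumptions, not part of the original project code):

```javascript
// Class order must match the order of the classes you trained in Teachable Machine.
// These labels and the threshold are assumptions for illustration.
const LABELS = ["plastic bottle", "can", "paper box", "glass"];

function classify(scores, threshold = 0.75) {
  // Pick the class with the highest confidence score
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  // Report "unknown" when the model is not confident enough
  return scores[best] >= threshold ? LABELS[best] : "unknown";
}

// Example: a score array like the one the exported model produces
console.log(classify([0.05, 0.9, 0.03, 0.02])); // → "can"
console.log(classify([0.3, 0.3, 0.2, 0.2]));    // → "unknown"
```

Thresholding matters here: a bottle dropped at an odd angle can produce a sound the model has never heard, and showing "unknown" is better than guessing.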

Step 4: Build Your Photon Circuit

Connect a microphone and a speaker to the Photon circuit as shown in the picture above. Don’t forget to connect it to power.

Troubleshoot Time

If you use another version of the Photon or an Arduino board, you may be able to run the TensorFlow Lite machine learning library directly on the device. However, our version of the Photon doesn't support this, so we use the machine learning tool's JavaScript library instead.

Meanwhile, our version of the Photon can't stream audio to the computer and analyze it in real time. Therefore, we use the “speaker” npm package to play the audio and analyze it in the browser.

If you have another version of the Photon or an Arduino, you might find an easier way to send the audio to the computer, or run the machine learning library on the circuit itself.

Step 5: Serve Your Code on Computer

Use Node.js to serve the code that receives the audio and plays it automatically. You can find the full code on GitHub.

Here is the main code that we used in this step.

```javascript
// ...
// Save the wav file locally and play it when the transfer is completed
socket.on('data', function (data) {
  // We received data on this connection: append it to the wav file
  writer.write(data, 'hex');
});
socket.on('end', function () {
  console.log('transmission complete, saved to ' + outPath);
  var file = fs.createReadStream(outPath);
  var reader = new wav.Reader();
  // the "format" event gets emitted at the end of the WAVE header
  reader.on('format', function () {
    // the WAVE header is stripped from the output of the reader
    reader.pipe(new Speaker(wavOpts));
  });
  // pipe the WAVE file to the Reader instance
  file.pipe(reader);
});
```

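The `Speaker` instance in the code above is constructed with a `wavOpts` object that the snippet doesn't show. Here is a plausible definition, assuming the Photon records 16-bit mono audio at 16 kHz — these values are assumptions, so match them to what your firmware actually sends:

```javascript
// Assumed PCM format for the 'speaker' npm package; adjust to match
// what your Photon firmware actually records.
var wavOpts = {
  channels: 1,       // mono microphone
  bitDepth: 16,      // 16-bit PCM samples
  sampleRate: 16000  // 16 kHz
};
```

If these options don't match the recording format, playback will sound sped up, slowed down, or like static.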
Step 6: Develop Your Visualization

Use JavaScript to send an AJAX request to the Particle cloud and call the device function “open”. When the “open” function is called with the value set to “1”, the microphone on the Photon turns on and records for 3 seconds. The recorded audio is then sent to the computer and played automatically.
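The AJAX request goes to the Particle Cloud API, which exposes device functions at `POST /v1/devices/:deviceId/:functionName`. As a hedged sketch, the request could be assembled like this — the `buildOpenRequest` helper, device ID, and access token are placeholders we made up; only the “open” function name comes from this project:

```javascript
// Hypothetical helper: build the HTTP request that calls the Photon's
// "open" cloud function through the Particle Cloud API.
function buildOpenRequest(deviceId, accessToken, value) {
  return {
    url: "https://api.particle.io/v1/devices/" + deviceId + "/open",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      // "arg" is the argument string passed to the cloud function
      body: "access_token=" + accessToken + "&arg=" + value
    }
  };
}

// Usage in the page: setting the value to "1" turns the microphone on
const req = buildOpenRequest("<device-id>", "<access-token>", "1");
// fetch(req.url, req.options).then(...);
console.log(req.url); // → https://api.particle.io/v1/devices/<device-id>/open
```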

Once the computer receives the audio, the recognized material shows up on the page.