Noodle is a small device with the I/O of a machine but the thoughts and feelings of a human. You can program it to monitor your physical space and react to changes in the environment with words, images, sounds and decisions.
For example, you could program it to watch your front door, and anytime someone arrives, determine if they look friendly or scary, and either let them in or call for help.
This Instructable contains three main sections: setting up the Raspberry Pi, connecting all the electronics, and creating the enclosure. Depending on what tools and materials you have access to, and how closely you want to recreate Noodle, feel free to only follow the first one or two sections. Noodle’s “brain” will still function without a body, and without all the components, or even in a different body if you’re interested in creating a new enclosure!
List of materials
We had the opportunity to create Noodle at the Autodesk Pier 9 workshop space, which has pretty much everything you could possibly need to develop something like this. But because development spanned a month across multiple parts of the workshop, it’s difficult to document every single item we used. Basically, you should be able to recreate Noodle with access to a decent makerspace that has some spare computer hardware and tools for basic electronics hacking.
Here are some of the parts we used (most of them available from Adafruit):
- Raspberry Pi Model B
- Raspberry Pi Camera
- 5V 1A (1000mA) USB port power supply - UL Listed
- 8GB Card with NOOBS 1.3
- USB cable - A/MicroB
- USB Powered Speakers
- Adafruit Assembled Pi Cobbler Breakout + Cable for Raspberry Pi
- Miniature WiFi (802.11b/g/n) Module: For Raspberry Pi and more
- NTSC/PAL (Television) TFT Display - 2.0" Diagonal
- USB Battery Pack for Raspberry Pi - 10000mAh - 2 x 5V @ 2A
- RCA (Composite Video, Audio) Cable 6 feet
- Blue Snowflake Microphone
- 4-port USB Hub
- 6x USB Male Type A Connector (SparkFun PRT-00437, $1.50 ea., $9.00 total)
- 5V to 12V boost converter
We also used a monitor, mouse, keyboard, and the wireless connection at Pier 9, and had our laptops to work with most of the time. After setting everything up, you can unplug the keyboard, mouse, and screen from the Pi and control it from your laptop. Some of the devices above also needed USB cables for power or connectivity. If you are planning on making the enclosure relatively small, be prepared to strip, break, and reconnect those USB cables; it might be good to have extra USB connectors on hand if this is your first time.
For fabrication we used:
- Soldering iron
- Wire stripper
- Wire cutter
- Hot glue gun
- Standard multimeter
- Bench power supply
- Epilog Legend 36EXT Laser Cutter
- Objet Connex500 3d Printer
Step 1: Setting Up the Raspberry Pi
The very first step is to plug some peripherals into the Raspberry Pi and then set up all the software on the Pi. After that, we can tear apart the peripherals and figure out how to get them into the enclosure.
Start by plugging the keyboard, mouse, and external monitor into the Pi. The Pi might need to update itself, so instead of plugging in the wireless adapter, try plugging it into your network with an ethernet cable at this point.
If your SD card doesn’t already have a copy of NOOBS on it, make sure to follow the instructions here before turning on the Pi.
Once you have an SD card with a copy of NOOBS, plug the SD card into the Pi and plug your 5V USB power adapter into the Pi to turn it on. For this project, we’ve chosen the Raspbian OS. Select it from the NOOBS menu and follow the prompts through the rest of the installation process.
When you get to the blue screen that asks you to configure Raspbian, there are two options you need to set: “ssh” should be enabled, and “camera” should be enabled.
After the installation process is complete, you’ll be greeted with a command line asking you to log in. Use the default username/password of pi/raspberry. You might also consider setting the Pi to auto-login (http://elinux.org/RPi_Debian_Auto_Login). If you would like to use a GUI to navigate, type “startx”, but all of the following commands will work just fine from the text-only command line interface.
Another technique for editing and checking files on the Pi from another computer is to treat the Pi as an external hard drive: if you set up the Pi as a Samba server, you can edit and view its files directly from your laptop.
Some of the following steps require you to edit files. To do this you can either use nano on the command line, one of the text editing apps from the GUI, or edit the files on your laptop via the Samba disk.
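If you do set up Samba, the share definition in /etc/samba/smb.conf is typically just a few lines. Something like this (exact settings depend on the tutorial you follow; this is only an illustrative sketch):

```
[pi]
   comment = Raspberry Pi home directory
   path = /home/pi
   browseable = yes
   writeable = yes
```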
Step 2: Working With the Camera
By enabling camera support during setup, we have access to two tools: “raspivid” and “raspistill”. To test the camera, turn off the Pi (type “sudo poweroff” and unplug it), then attach the camera: lift the zero-insertion-force socket, insert the cable so its contacts face the pins on the socket (not backwards), latch the socket in place, and plug the Pi back in.
After booting up and logging in, type “raspistill -o image.jpg”. After the Pi takes a photo, you should be able to type “ls -lh” and see “image.jpg” next to a filesize like 2MB or 3MB.
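To call the camera from Python later on, one option is to shell out to raspistill with subprocess. This is a sketch of our own (the function names and the extra width/height flags are illustrative, not from the Noodle code):

```python
import subprocess

def raspistill_args(path, width=1024, height=768, warmup_ms=1000):
    """Build the raspistill argument list; -t gives the sensor time
    to adjust exposure before the still is captured."""
    return ["raspistill", "-o", path,
            "-w", str(width), "-h", str(height),
            "-t", str(warmup_ms)]

def capture_still(path="image.jpg"):
    # Runs on the Pi itself; raises CalledProcessError if the camera
    # is not enabled or the cable is seated backwards.
    subprocess.check_call(raspistill_args(path))
    return path
```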
Step 3: Working With the Miniature LCD Screen
By default, the current version of Raspbian outputs only to HDMI. In order to allow the Raspberry Pi to output to the composite video (yellow plug), you need to open the file /boot/config.txt and comment out the line reading “hdmi_force_hotplug=1” by adding a “#” to the beginning of the line.
Once that line is commented out, unplug the HDMI cable, and the next time you restart the Pi it will use the composite output. The composite output might not be perfectly centered, might not fill the screen, or might be at too high a resolution; this tutorial (http://learn.adafruit.com/using-a-mini-pal-ntsc-display-with-a-raspberry-pi/configure-and-test) shows how to set the resolution and the overscan (padding).
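For reference, the relevant portion of /boot/config.txt ends up looking something like this (the sdtv_mode line is optional; 0 selects NTSC and 2 selects PAL):

```
# commented out so the Pi falls back to composite video:
#hdmi_force_hotplug=1

# optional: pick the composite standard explicitly (0 = NTSC, 2 = PAL)
sdtv_mode=0
```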
For now, power the LCD with bench power. Later we will get it running off USB power.
Step 4: Working With the Speakers
The Pi has a very basic built-in pulse width modulation sound output. This means it can’t output high-quality sound the way your laptop can, but it can approximate sounds with very quick pulses of varying length. The speakers we used from Adafruit are USB powered, but that power can come from anywhere: if your Pi is out of USB ports at this point, you can either plug in the USB hub or use your laptop’s USB to power the speakers. To play sounds on your Pi you can use command line tools like aplay for .wav files, or omxplayer for far more file formats. The Pi should already have a few sounds on it; type this to play one of them: “omxplayer /usr/share/scratch/Media/Sounds/Effects/WaterDrop.mp3”. If you set up Samba, you can also try copying audio files from your computer to the Pi and playing them with omxplayer.
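If you later want to trigger sounds from Python, a small wrapper around these players might look like this (a sketch of our own, not from the Noodle code; aplay only understands .wav, while omxplayer covers most other formats):

```python
import os
import subprocess

def player_args(sound_path):
    """Pick a command-line player based on file extension."""
    ext = os.path.splitext(sound_path)[1].lower()
    player = "aplay" if ext == ".wav" else "omxplayer"
    return [player, sound_path]

def play(sound_path):
    # Blocks until the sound has finished playing on the Pi
    subprocess.check_call(player_args(sound_path))
```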
Step 5: Working With the Microphone
While the Pi has built-in sound output, it doesn’t have any input. So instead of just plugging in a microphone, we use a device that’s a microphone with a built-in sound card (the Blue Snowflake). Fortunately, you don’t need to install any extra drivers to get the audio input working. After plugging in the microphone, you can test it with the arecord application. Try entering “arecord -D plughw:1,0 -d 5 -f S16_LE -c1 -r22050 -t wav out.wav” on the command line. This records 5 seconds of audio from the microphone at 16-bit resolution and a 22,050 Hz sample rate and saves it to “out.wav”. Then try “omxplayer out.wav” to listen to the recorded audio. If you are using a different microphone or have a different hardware setup you might need to change the “plughw” options; the Noodle code autodetects your microphone from the USB device name.
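That autodetection idea can be sketched in Python by parsing the output of “arecord -l” (this helper is our own illustration, not the Noodle code, and the exact line format of arecord’s output can vary slightly between versions):

```python
import re

def find_capture_device(arecord_l_output, name_hint="Snowflake"):
    """Scan `arecord -l` output for a capture card whose name contains
    name_hint, returning an ALSA plughw:<card>,<device> string."""
    pattern = re.compile(r"card (\d+): \S+ \[([^\]]+)\], device (\d+)")
    for line in arecord_l_output.splitlines():
        m = pattern.search(line)
        if m and name_hint.lower() in m.group(2).lower():
            return "plughw:%s,%s" % (m.group(1), m.group(3))
    return None
```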
Step 6: Working With Wifi
Setting up wifi on the Pi is one of the things that’s easier to do from a GUI than by editing the config file directly. Adafruit has instructions for both approaches here:
The only difficulty we ran into was setting the encryption type. The default wasn’t correct for our network, so on our laptop we opened Network Preferences in OS X, clicked the “Advanced” Wi-Fi button, and looked under the 802.1X tab for more information about our network. The correct setting for the Pi was the authentication type whose name most closely matched the one listed in OS X.
If you are using an unsecured network, or a network without 802.1X authentication, the tutorial above should be enough.
Step 7: Installing Node.js
The code for Noodle is built with two parts: a Node.js server that handles task management and the user interface, and Python functions that interface to the Pi hardware. The Pi already includes Python, but we need to install Node.
We followed some of the instructions from here:
First we entered the following series of commands at the command line to download Node, decompress the archive, create a directory for it, and move it to that directory:
wget http://nodejs.org/dist/v0.10.22/node-v0.10.22-linux-arm-pi.tar.gz
tar xvzf node-v0.10.22-linux-arm-pi.tar.gz
sudo mkdir /opt/node
sudo cp -r node-v0.10.22-linux-arm-pi/* /opt/node
Finally, we edit the file “/etc/profile” by adding the following two lines before the “export PATH” line:
NODE_JS_HOME="/opt/node"
PATH="$PATH:$NODE_JS_HOME/bin"
This makes it possible for the command line to know where to find node when we type “node”.
The tutorial above has some good advice about using a static IP address from your Pi also. Here’s a tutorial that’s only about using a static IP:
This is helpful because it means the Pi will have the same address when you turn it off and turn it back on. Otherwise, it might receive a different address from your router. Another approach instead of using a static IP is to set up the “raspberrypi.local” address, which is much easier to remember than an IP address:
Not all networks support the .local domain. For example, when we were working at Pier 9 we couldn’t access the Pi using “raspberrypi.local”.
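If you do go the static IP route on a Raspbian install of this era, the change lives in /etc/network/interfaces. Something along these lines (the addresses are examples; match them to your own router and subnet):

```
# /etc/network/interfaces -- example static address for wired ethernet
iface eth0 inet static
    address 192.168.1.42
    netmask 255.255.255.0
    gateway 192.168.1.1
```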
Step 8: Installing Noodle and Dependencies
Fortunately git ships with Raspbian, so we can grab all the code from GitHub very easily.
1. Get the code from GitHub:
git clone https://github.com/lmccart/noodle.git
2. Install node dependencies from inside the repository:
cd noodle
npm install
3. Install other dependencies with pip and apt-get:
sudo apt-get update
sudo apt-get install python-pip
sudo pip install boto (Amazon Web Services)
sudo pip install -U socketIO-client (https://pypi.python.org/pypi/socketIO-client)
sudo apt-get install python-pyaudio
4. Install espeak (used for generating speech from text):
wget https://pypi.python.org/packages/source/p/pyttsx/pyttsx-1.1.tar.gz
gunzip pyttsx-1.1.tar.gz
tar -xf pyttsx-1.1.tar
cd pyttsx-1.1
sudo python setup.py install
sudo apt-get install espeak
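pyttsx wraps espeak behind a Python API, but you can also shell out to espeak directly. A minimal sketch of our own (the helper names are illustrative; -s sets the speaking rate in words per minute and -v picks the voice):

```python
import subprocess

def espeak_args(text, rate=140, voice="en"):
    """Build the espeak command line for a piece of text."""
    return ["espeak", "-s", str(rate), "-v", voice, text]

def say(text):
    # Blocks until espeak has finished speaking through the Pi's audio out
    subprocess.check_call(espeak_args(text))
```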
Step 9: Connecting With Amazon Mechanical Turk Credentials
1. Sign up with Amazon Mechanical Turk as a requester:
Note that it takes ~24 hours for your account to get approved.
2. In addition to a requester account, you will need an Amazon Web Services account. Click the “Developer” tab, then click “Create AWS Account”.
3. Log into the panel http://aws.amazon.com/console/ and set up your access keys. In the upper right corner, click on your name, then “Security Credentials”
4. Choose “Access Keys”, then “Create New Access Key”. Download the file (it contains your Secret Access Key) and note the Access Key ID in the console. You will need both of these to log in to Noodle.
5. Choose “Services” in the upper left, and “S3” and make sure it’s activated.
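Once you’ve downloaded your keys, you’ll need to get them into whatever configuration Noodle expects. As an illustration (our own helper, not part of the Noodle code), the downloaded credentials file is typically a couple of key=value lines that can be parsed like this:

```python
def parse_aws_keys(text):
    """Parse key=value lines from the credentials file the AWS console
    lets you download (often named rootkey.csv)."""
    keys = {}
    for line in text.splitlines():
        if "=" in line:
            name, _, value = line.partition("=")
            keys[name.strip()] = value.strip()
    return keys.get("AWSAccessKeyId"), keys.get("AWSSecretKey")
```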
Step 10: Interacting With Noodle
1. Start it running:
2. Visit your Pi’s IP address on port 3000 (e.g. XX.XXX.XX.XX:3000). The first time, you will be prompted to log in. Use your AWS credentials.
3. Create a task.
4. Manage your tasks. You can see all the tasks you’ve created and cancel them from XX.XXX.XX.XX:3000/manage.
5. That’s it. Noodle is ready to play!
Step 11: Some More About Amazon Mechanical Turk
In 2005, Amazon launched Mechanical Turk, a crowdsourcing (“Artificial Artificial Intelligence”) platform that made it possible to integrate human intelligence directly into software. The service was mostly aimed at businesses in need of quick, cheap, almost-mindless labor. It created a system for easily carrying out bulk tasks that would be difficult for a computer but very easy for a human. However, the platform raises many questions about the ethics and social impact of such a network, and the distribution of power within it (see, for example, “I make $1.45 a week and I love it”).
Artists have engaged with MTurk and explored the questions surrounding it. In The Sheep Market, Aaron Koblin paid people $0.02 to draw a sheep facing left, and in Ten Thousand Cents, paid people $0.01 each to draw 1/10000th of a $100 bill. In Social Turkers, I streamed my dates to the internet and paid people to watch and send me suggestions of what to say or do via text. In Jeff Crouse’s Laborers of Love, the user is directed through a series of questions about their sexual preferences. Once submitted, anonymous workers interpret each of the responses, searching the internet for the image or video they feel best matches, and presenting it to the user. Guido Segni’s Crowd Workers of the World Unite is a collection of 300+ commissioned spontaneous self portraits of cloud workers raising their middle finger.
Testing and troubleshooting workflow
MTurk is not primarily designed for real-time response, and it requires a bit of iteration and experimentation to get good quality results quickly. This blog post has some strategies for optimizing for speed and price, including being clear and explicit with prompts and questions, setting a high enough baseline pay, maintaining a consistent volume of jobs, and closely monitoring workers and responses. However, one of the main takeaways is that factors like time of day or week have much more effect than changes in price or question. Further, random variations in the system not tied to any factor generally outweigh most other factors, so some unpredictability has to be lived with. Also check out this paper from MIT CSAIL on optimizing realtime crowdsourcing, and this write-up on the research behind VizWiz, an app that allows blind people to use the crowd to identify visuals with real-time response.
Step 12: Electronics / Connecting the Components
Hopefully, if you’re jumping into this tutorial you have an innate sense for tearing things apart without much intention of putting them back together. But just in case, we’ve provided some notes here.
To open up the speaker, wedge a screwdriver into the face plate and pop it off. Then remove the four screws attaching the inner face to the back, remove the face, and unscrew the volume and power board. To get all the components free, we cut through the plastic to the wire hole with a mix of wire cutters and pliers, and wiggled the wire free.
The Blue Snowflake is incredibly satisfying to take apart. Start by unscrewing the front metal grating, then removing the four screws keeping the mic attached to the mount. The plastic backing to the mic will pop off, and you can remove the pop/windscreen. You should be left with two PCBs attached to each other via header pins.
Powering the screen from usb power
The LCD screen needs power, which you can either supply with a bench power supply or via USB with the boost converter. The boost converter we used is tunable, so we started by cutting a USB cable in half, stripping all the wires, and checking which two wires carry +5V. We soldered those two wires to the converter’s input, connected the multimeter to the converter’s output, and tuned the converter until the multimeter read +12V, a good voltage for this LCD screen. Finally, we wired the power cables from the LCD screen to the boost converter, and now the LCD screen is USB powered! You can use the Pi or your computer to power the screen.
Making everything mobile
Up to this point we’ve been running off USB and wall power, but if we arrange our power connections correctly and incorporate the battery, we can make Noodle mobile. We found that in some configurations our USB hub couldn’t power all of the devices. Our final configuration had the battery powering the Raspberry Pi and the LCD. The Raspberry Pi used one USB port for the Wi-Fi adapter and the other for the USB hub, and all the other powered devices (speaker, mic, and temporarily a keyboard and mouse) were plugged into the USB hub.
Step 13: Notes on Measuring Things
The enclosure was designed with Rhino, which is currently available for free on OS X while they are testing and developing the app.
Before designing an enclosure, we had to measure every component we were sticking inside. In theory, different devices are built using their local measurement system (metric everywhere except for the USA). In practice, we found that every device came to a fairly round metric value in millimeters, even the Blue Snowflake microphone.
There are mainly two kinds of measurements we had to make: side lengths, and mounting hole positions. To make all these measurements we used a standard digital vernier caliper. For side measurements, it’s helpful to get a basic bounding box of the area that the component occupies. This will keep your components from hitting each other once you place them inside the enclosure. For the mounting hole positions, it’s important to accurately measure the hole diameters or you might create something that’s too loose or tight for the component. Instead of adding or removing small amounts from your measurements to account for design features like tight and loose fit, use your first pass at measuring to create a set of measurements that are as accurate as possible. Later it’s possible to revise these measurements after printing material tests.
When measuring mounting holes, measure them all from the same reference location rather than relative to one another. This keeps your measurement errors from accumulating.
The hardest component to measure was the Blue Snowflake, which had three mounting holes in a circle at what seemed to be non-grid positions. To measure these, we measured the distances between the centers of the three holes and constructed a triangle with those side lengths. In the end, we didn’t mount the Blue Snowflake using those holes.
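A triangle-from-side-lengths construction like this can be checked numerically. Here is a small sketch of our own (not part of the Noodle code): place one hole at the origin, a second on the x axis, and recover the third from the law of cosines.

```python
import math

def triangle_vertices(a, b, c):
    """Given side lengths a (between P1 and P2), b (P2 to P3), and
    c (P3 to P1), return coordinates for the three hole centers with
    P1 at the origin and P2 on the x axis."""
    # Law of cosines gives the x coordinate of P3; Pythagoras gives y
    x = (a * a + c * c - b * b) / (2 * a)
    y = math.sqrt(max(c * c - x * x, 0.0))
    return (0.0, 0.0), (a, 0.0), (x, y)
```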
Step 14: A Few Notes on Working With Rhino
This Instructable isn’t the right place to go into a thorough Rhino tutorial, but a few tips might be helpful.
This enclosure was almost completely designed using boolean operations on NURBS surfaces. The first step was to create a box with the right overall dimensions (150mm), then boolean intersect it with a sphere that provided the most satisfying curves.

To hollow out this shape, it's necessary to create an offset surface or to simply subtract a smaller version. Unfortunately, Rhino won't let you remove one volume from the inside of another volume; they need to intersect. Also, an offset surface can have poor topology compared to surfaces constructed from scratch. It's essential that the outer layer of the enclosure has good topology for making good connections to all the posts on the inside, so after testing a few different approaches we went with manually constructing a second, smaller version from another cube and sphere, and creating the negative space for the single hole out of a cylinder. After boolean unioning the inner portion with the cylinder, we could boolean subtract it from the larger portion and maintain good topology.

Later we realized a cylinder would mean that any cap would just fall in, so we repeated these steps using a cone instead, so the hole's walls would have a slight tilt.
For all the Rhino files used for creating Noodle, see the zip file attached to this step.
Step 15: Prototyping With 123D Make and Cardboard
Once we had a basic design for Noodle's enclosure, we wanted to create an object that approximated the enclosure without using 1kg of expensive 3d printing material. To do this, we imported the enclosure into 123D Make, which sliced the model into chunks that could be laser cut and reassembled into a single 3d object. Once we had this cardboard prototype (held together with masking tape) we could just drop all the components inside and get a feeling for where they could fit best.
Four main things guided the placement of the components:
1. Overall weight distribution. The battery rests near the bottom of Noodle to help keep Noodle upright.
2. Support material: the 3d printer uses a significant amount of support material, and you have to remove it after printing before the print is usable. Instead of designing solid platforms for supporting the battery, for example, we used posts that the support material could be removed from around. This also reduced the weight of the enclosure. Similarly, the battery holder has holes in it to allow the support material to be removed in large chunks.
3. Cabling: some cables inside Noodle are shorter than others and we needed to make sure everything would run the right distance. The relative placement of the Raspberry Pi to the camera module is the best example of this.
4. A playful aesthetic: we tried to give Noodle speakers for ears, and a microphone and camera for eyes. We considered using the screen as a mouth but found it to be difficult to integrate visually.
Step 16: Testing the Enclosure Components Before Printing the Whole Thing
After experimenting with the basic enclosure shape, we modeled the mounting posts in a separate file. This gave us the advantage of being able to print out a sheet of mounting posts in about 30 minutes, so we could test all our post placements without needing to run a complete 30-hour print. After some small revisions to post diameters, we imported the posts into the enclosure file and did a final boolean union, followed by some filleting on sharp and structural edges, before exporting the geometry.
Step 17: Enclosure Problems
Even with significant testing, some problems with the enclosure remain.
1. The walls on the battery enclosure were too thin. We recommend 3mm as a minimum when working with Objet VeroWhite for light structural design like this enclosure. We tried to design snap-locks on the side of the battery container, but they broke off the first time we slid the battery in. Likewise, some sharp corners on the battery holder broke off while transporting Noodle. In general: expect VeroWhite to deform more than you’d expect when it’s thick, and to be more brittle than you’d expect, and break, when it’s thin. Our guess for a reasonable width was based on printing a palette of objects with varying widths, but it would have been helpful to bend each of these to its breaking point to get a feeling for the material’s properties.
2. We didn't model the step-up converter correctly, and only measured a couple of tall components instead of the complete bounding box. This meant that when we tried to put the USB hub and the step-up converter inside the box, they bumped into each other and only one would fit.
3. All the posts fit pretty well, as we found from our post test, but some were slightly too loose. In the end, we used hot glue to keep some of the components stuck to the mount points. We also tested some snap mounts but found VeroWhite to be too brittle at this scale to smoothly deform. With more testing, it may be possible to create snap mounts for holes like the ones on the Raspberry Pi.
Step 18: Cleaning
Cleaning support material from a 3d print this big can take 45-60 minutes. We tried something unusual at the suggestion of a shop manager, and it seemed to work. First, we spent 15 minutes removing the largest chunks of support material by hand (using some soft wooden tools and our gloved fingers). This got us through about half the support. Then we placed the enclosure in a project bag (something like a 5 liter zipper storage bag) with room temperature water, and let it sit. One hour later, we removed the enclosure to find all the support flaking off (if you leave too little support on the model during this step, we've heard the water can weaken the actual model material). After removing the largest flaking chunks, we put the model into a standard water cleaning station and spent another 30-45 minutes removing the remaining 25% or so of the support.
Step 19: Next Steps
Noodle is only in its infancy; with the basic framework and enclosure in place, there is a lot left to build!
We spent a lot of time exploring computer vision on the Raspberry Pi, but in the short timeline we couldn’t incorporate any of those libraries. One of the best resources we found was this page describing how to use picamera with OpenCV:
The current code handles these features:
- detecting loud noise (trigger)
- playing sound file (action)
- text to speech playback (action)
- take picture (query)
Others we would like to add include:
- detecting pause / quiet (trigger)
- capture 5s audio (query)
- detect motion (trigger)
- detect face (trigger)
- detect light (trigger)
- detect dark (trigger)
- capture video (query)
- display text (action)
- display image (action)
- pin read (trigger)
- pin read (query)
- pin write (action)
- url ping (trigger)
- url ping (query)
- url ping (action)
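The trigger/query/action split above can be sketched very simply (these helper names are ours, not from the Noodle codebase): a trigger polls the environment, and when it fires, an action runs.

```python
import math

LOUDNESS_THRESHOLD = 1000.0  # tune for your microphone and room

def rms(samples):
    """Root-mean-square loudness of a list of 16-bit audio samples."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def loud_noise_trigger(samples, threshold=LOUDNESS_THRESHOLD):
    """The "detecting loud noise" trigger: fires when RMS exceeds a threshold."""
    return rms(samples) > threshold

def run_once(samples, action):
    # One pass of the event loop: run the action if the trigger tripped
    if loud_noise_trigger(samples):
        action()
        return True
    return False
```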
How to extend / improve the code
You can fork the code here: https://github.com/lmccart/noodle, and submit pull requests to have your modifications and improvements merged in.