The concept is simple on the surface: build an autonomous, solar-powered bird feeder that detects motion, snaps photos, and uploads them to Twitter. The rest of this article details the construction of such a device and the things our team learned along the way.
We have open sourced the code for this project under the BSD license. You can download it from the project's git repository. The included README file covers general setup and configuration topics.
Step 1: Hardware List
Below is a list of the hardware we are using in the current Feeder Tweeter, along with links to where you might purchase these items. Of course, if you already have similar parts or think you have something that might work even better, feel free to experiment.
1) Bird Feeder - We settled on this feeder because the roof was sloped and removable. Our original design called for solar panels to be draped over the sides of the roof, making them as unobtrusive as possible. We also liked that the sides were open, giving us some space to mount various parts and run wires.
2) Raspberry Pi (Model A) - This is the brains of the device. We considered using an Arduino early on, but given that the project relies so heavily on Internet connectivity and deals with large images, we ultimately felt that the Raspberry Pi was the better choice. The cost of the Arduino WiFi Shield drove up the price of the project, and the method by which software is loaded onto an Arduino board also limited our ability to auto-update software on feeders after they had been released into the wild. We initially purchased the Model B of the Raspberry Pi, but in the end we utilized the Model A, which has fewer USB ports and comes without an Ethernet port. These changes reduce the board's overall power consumption by about 200mA, making it a better choice for our solar project.
3) Wireless USB Adapter - We like the Edimax EW-7811UN because it has a very small footprint and supports 802.11n, giving us broader wireless range.
4) Silicone Glue - Since the feeder sits outdoors, this helps us waterproof all the openings and protect the electronic components.
5) Raspberry Pi Camera Module, Camera Mount, and Extension Kit - We experimented with a few cameras, but ultimately decided the standard camera module worked best for us.
6) Enclosure - We decided to mount an enclosure under the feeder to house most of the electronic components. This one turned out to be perfect: it was roughly the same size as the base of the feeder, was large enough to hold our selected components, and, perhaps most importantly, it was waterproof.
7) PC Board - Any board will do. We just happened to have this one lying around.
8) Solar Panel & Battery - There are plenty of options in this category. We chose not to spend a lot of time experimenting or trying to build our own panels from scratch. Instead we wanted something readily available and easy to work with. This is the one area where we decided not to skimp on the budget. We selected the Goal Zero Boulder 15 solar panel and the Sherpa 50 Power Pack. They are produced by the same company and made to work together. The Sherpa 50 is lightweight and made for use with USB devices.
9) On/Off Toggle Switch - You can use standard Linux commands to reboot or shut down the Pi. However, there is no physical on/off switch on the board; to kill all power you must pull the USB cable out. We wired in a toggle switch so that we could cut all power to the board from the outside of the enclosure.
10) Momentary Pushbutton Switch - We wanted a way to let users reboot or shut down the device without using something like ssh to shell into it and run software commands. We wired up a push button that is monitored for 2 types of events. If the button is held for more than 2 seconds and less than 5, a reboot is triggered. If the button is held for 5 seconds or more, a shutdown is triggered. This is the proper way to shut down the device; once it is shut down, the toggle switch can be used to remove all power. Reapplying power (flipping the toggle switch on again) automatically restarts the Pi. You can visually tell that either a reboot or shutdown command has been registered by looking at the LEDs on the outside of the enclosure: a 3-second hold, which triggers a reboot, blinks the green LED 3 times, while a 5-second hold, which triggers a shutdown, blinks the red LED 5 times.
11) LEDs - 1 red and 1 green. Any type will do.
12) Passive infrared (PIR) sensor
13) PING Ultrasonic Distance Sensor
14) Reading Glasses - A 3.25+ strength lens worked best for us.
15) Photo cell (CdS photoresistor)
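The hold-time behavior of the momentary push button (item 10) boils down to mapping a hold duration to an action. Here is a minimal sketch of that mapping as a pure function, using the thresholds described above (the function name is ours, not from the project's code):

```python
def classify_hold(seconds):
    """Map a push-button hold duration to an action.

    Holds longer than 2 seconds but shorter than 5 trigger a reboot;
    holds of 5 seconds or more trigger a clean shutdown. Shorter
    holds are ignored.
    """
    if seconds >= 5:
        return "shutdown"
    if seconds > 2:
        return "reboot"
    return None

# Example: a 3-second hold requests a reboot.
print(classify_hold(3))  # reboot
```

In the real device, the duration would come from timestamping the button's press and release events on a GPIO pin.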
Other items likely needed to build it yourself:
1) Soldering Station
2) Lead Free Solder
3) Lead Free Tip Tinner
4) Magnifying Glass with Alligator Clips
5) Electrical Tape
6) Wire, Wire Cutters, and Wire Strippers
7) Drill / Dremel
8) Phillips Screwdriver
9) Gorilla Glue (epoxy)
10) Safety Goggles
11) Two bolts (with matching washers and nuts) to hold your enclosure in place
Step 2: Prototyping
The Raspberry Pi loads its operating system from an SD card inserted directly into the board. We chose to use the Raspbian distro. Unless you purchased a kit that provides a preloaded SD card, you must first learn how to get Raspbian onto an SD card. There are good instructions for this process here.
Having a full Linux distro on the Raspberry Pi is one of the benefits of selecting this board. You can plug the Pi right into an existing monitor, mouse, and keyboard, then apply power and away you go. With the Model B you can simply plug in an Ethernet cable and you’re on the Internet. The Model A requires a bit more effort since it does not come with an Ethernet port: you’ll need to configure WiFi before you can connect, typically via a third-party USB adapter. Of course, on a Model A you only get one USB port, so you’ll either need a powered USB hub to connect other devices like a mouse and keyboard, or you’ll have to move to a headless setup. Luckily this is also fairly simple. Raspbian runs ssh, which means you can remotely shell into the Pi from a terminal program running on another computer. If you want to access the desktop of the Pi rather than just the shell, you can use VNC.
Creating a hardware hack usually involves external sensors of some sort that read inputs from the real world: things like temperature, light, motion, force, distance, etc. The Raspberry Pi board comes with a series of General Purpose Input/Output (GPIO) pins. These pins are configured via software and can be used to read input from sensors and/or to drive outputs like an LED. Some pins have a specific purpose while others are more generic (you can also reconfigure them to suit your needs). The most common way developers interact with the GPIO channels on a Raspberry Pi is via a Python library called RPi.GPIO. You can find some good tutorials here.
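To give a taste of RPi.GPIO, here is a minimal sketch that reads a digital sensor and mirrors its state on an LED. The pin numbers are assumptions, and a small stub stands in for RPi.GPIO so the logic can also be exercised on a machine that is not a Raspberry Pi:

```python
# Minimal RPi.GPIO sketch: read a digital sensor, mirror it on an LED.
# Pin numbers are illustrative; the stub below exists only so the logic
# runs on machines without the RPi.GPIO library installed.
try:
    import RPi.GPIO as GPIO
except ImportError:
    class _StubGPIO:
        BCM, IN, OUT, HIGH, LOW = "BCM", "IN", "OUT", 1, 0
        def setmode(self, mode): pass
        def setup(self, pin, direction): pass
        def input(self, pin): return self.LOW
        def output(self, pin, value): pass
        def cleanup(self): pass
    GPIO = _StubGPIO()

SENSOR_PIN = 17  # hypothetical BCM pin wired to the sensor's output
LED_PIN = 27     # hypothetical BCM pin driving an LED

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN)
GPIO.setup(LED_PIN, GPIO.OUT)

def sensor_active():
    """True when the sensor's digital output reads high."""
    return GPIO.input(SENSOR_PIN) == GPIO.HIGH

# Echo the sensor state to the LED once.
GPIO.output(LED_PIN, GPIO.HIGH if sensor_active() else GPIO.LOW)
GPIO.cleanup()
```

A real sensor loop would poll (or register edge-detection callbacks) rather than reading once, but the setup/read/cleanup shape is the same.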
After we completed our initial prototyping we created a sketch of the Raspberry Pi GPIO to external sensor wiring. Most good software projects start with architectural diagrams. Interactive hardware projects can benefit from diagrams in a similar manner and luckily there is a great initiative known as Fritzing to help with this. Fritzing is an “ecosystem that allows users to document their prototypes, share them with others, teach electronics in a classroom, and layout and manufacture professional pcbs.”
This tutorial would get insanely large if we tried to detail each wire and connection. Instead we are sharing the sketch. As the project evolved so did the sketch; above is an image of the final version. The sketch can also be found in the code repository with the rest of the code.
Step 3: Construction
With prototyping and component layout nailed down, building can begin. For us, the actual time required to execute our first prototype build was roughly 25 hours. We’ve condensed that down into a 4.5 minute video which you can see below:
Step 4: Challenges, Improvements, and Lessons Learned
The feeder went through about 3 months of testing and refinement along the way to the final version. Below you'll find information about what changed and why.
Our initial build actually used this Weatherproof TTL JPEG camera. After a few days of operation we determined that the camera's internal motion detector was too sensitive to changes in light. Changes in light levels commonly seen during sunrise, sunset, cloud cover, etc. triggered many false positives. We decided to disable the camera’s motion detection and replace it with a passive infrared (PIR) sensor. PIR sensors measure “warm body” heat and are often seen in home security systems.
During this period we also began to experiment with tilting the solar panels. The initial solar setup actually used the Nomad 13 instead of the Boulder 15. The Nomad 13 worked OK, but we discovered that it was not generating a charge unless both panels were exposed to light at the same time. Therefore, the feeder had more downtime than we wanted. As you’ve probably seen on the rooftops of houses across the country, fixed solar panels are typically angled and pointed in a certain direction. The direction and the tilt help optimize the amount of energy the panels generate, and the proper values depend on your specific location. There are a number of online calculators that can help you find an optimal configuration.
While the PIR sensor was a good workaround for the light sensitivity problems of the camera's internal motion detector, it had its own weakness: it was overly sensitive to changes in heat. Abrupt changes in heat aren’t typically a problem in a closed setup like a house, but in the outdoors anything goes. The same sunrise, sunset, and cloud cover conditions posed a problem for this sensor as well, and we had roughly the same number of false positives. So with version 1.2 we dropped the PIR sensor and replaced it with a new PING ultrasonic distance sensor. In a nutshell, this sensor uses sonar to measure the distance to a specific target. The sensor looks like a set of eyes: one transducer sends out an ultrasonic pulse and the other measures the amount of time it takes to bounce back. Apply some calculations involving the speed of sound and you arrive at a distance.
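The speed-of-sound math is straightforward: the round-trip echo time, halved, times the speed of sound gives the one-way distance. A quick sketch (343 m/s assumes roughly room-temperature air; the constant shifts a little with temperature and altitude):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate, in ~20 °C air

def echo_to_distance_cm(echo_seconds):
    """Convert a PING round-trip echo time to a one-way distance in cm.

    The ultrasonic pulse travels out and back, so the one-way distance
    is half the total path length.
    """
    one_way_m = (echo_seconds * SPEED_OF_SOUND_M_S) / 2.0
    return one_way_m * 100.0

# A ~1.05 ms round trip corresponds to roughly 18 cm.
print(round(echo_to_distance_cm(0.00105), 1))
```

On the Pi, the echo time itself comes from timing the pulse on a GPIO pin; the conversion above is the calculation the article alludes to.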
If you look closely at the picture of our feeder you can see a small wall opposite the camera. That wall is important because we bounce the ultrasonic pulse off it, and we know from testing exactly how far away it is. A reading less than the distance to the wall indicates something is in the way (i.e., a bird). As you can see in the picture above, the sensor is enclosed in a plastic container. The sensor is not waterproof, so we fashioned a custom holder that minimizes exposure to the elements. We aren’t claiming this setup is 100% foolproof, but it has survived countless Colorado thunderstorms without issue thus far.
Shortly after the launch of version 1.2 we noticed the PING sensor was overly sensitive to sudden gusts of wind, which triggered false positives by interrupting the sonar reading. We helped tame this by placing a foam wind screen over the receiver. Another issue was that the sensor would occasionally send back a spurious short reading. It was unpredictable and happened more often than was acceptable. The fix was to take a second reading whenever a short reading was registered: while the sensor randomly sends a short reading, it never sends 2 in a row.
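That re-read trick is easy to express in code. A sketch of the idea, with names of our own choosing (any zero-argument function that returns a distance in cm works as the reader):

```python
def filtered_reading(read_distance_cm, wall_distance_cm):
    """Take a distance reading, re-reading once if it looks short.

    The sensor occasionally returns a spurious short reading, but in
    our testing never two in a row, so a single re-read is enough to
    reject the glitch.
    """
    reading = read_distance_cm()
    if reading < wall_distance_cm:
        reading = read_distance_cm()  # confirm before trusting it
    return reading

# Example with a fake sensor that glitches once, then sees the wall.
readings = iter([4.0, 18.0])
print(filtered_reading(lambda: next(readings), wall_distance_cm=18.0))
```

A genuine bird produces two consecutive short readings, so it still passes the filter; only one-off glitches are discarded.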
As you can tell by now, the most problematic feature of the device is motion detection. At this point we came up with a new solution that turned out to be the winner. We re-enabled the camera’s internal motion detector and coded up a software feature we call "phone a friend" (yes, that is borrowed from "Who Wants to Be a Millionaire"). The idea is simple: one sensor is constantly checking for motion, and when it detects motion it asks the second sensor for verification. If the second sensor confirms motion, a picture is taken. Otherwise, the event is treated as a false positive.
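The "phone a friend" logic can be sketched as a small function. The callable names are our own; in the device, the callables would wrap the two physical sensors:

```python
def phone_a_friend(primary_motion, confirm_motion):
    """Two-sensor motion confirmation ("phone a friend").

    `primary_motion` reports whether the first sensor saw motion;
    `confirm_motion` is only consulted when it did. A picture is
    warranted only when both agree. Both arguments are zero-argument
    callables returning bool.
    """
    if not primary_motion():
        return False          # nothing to confirm
    return confirm_motion()   # the second opinion decides

print(phone_a_friend(lambda: True, lambda: True))   # True
print(phone_a_friend(lambda: True, lambda: False))  # False
```

The key property is that the second, more expensive or noisier check only runs when the first sensor fires, and a lone false positive from either sensor never triggers the camera.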
This version revisited the solar panel issues mentioned earlier. We felt the Nomad 13, even with directionally placed and properly angled panels, fell short of our expectations for device uptime. Goal Zero states that the Nomad 13 can charge the Sherpa 50 in 8 to 16 hours of sun. Version 1.4 of the Feeder Tweeter required about 450 to 500mA of power, and the Sherpa 50 stores 50 watt-hours. Our original goal was to keep the device online 24/7, and a fully charged Sherpa 50 would be enough to keep things running throughout the night. However, real life testing proved that we could not get a full charge each day given the available sunlight in our location (Colorado).
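A worked version of that runtime budget, treating the Sherpa 50 as a 50 watt-hour battery and assuming the Pi's 450 to 500mA draw is at the 5 V USB output (conversion losses ignored):

```python
# Rough runtime budget for the feeder, using figures from the text.
battery_wh = 50.0   # Sherpa 50 capacity in watt-hours
draw_amps = 0.5     # ~450-500 mA draw, rounded up to the worst case
usb_volts = 5.0     # USB supply voltage

draw_watts = draw_amps * usb_volts          # ~2.5 W continuous
runtime_hours = battery_wh / draw_watts     # hours on a full charge, no sun
print(round(runtime_hours, 1))
```

Roughly 20 hours on a full charge explains why a fully charged pack covers a sunless night, and also why falling short of a full daily charge eventually takes the feeder offline.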
We decided to upgrade to the Boulder 15 solar panel. This panel collects 15 watts rather than the 13 watts of the Nomad 13. The first obstacle with this panel was mounting. The Nomad 13 was a Trapper Keeper-style setup that folded nicely over the roof, while the Boulder 15's mount is more like a hanging picture frame, which obviously is not optimal for our setup. We created a custom mount similar to what you might see on solar powered construction signs. We purchased a wood pole, drilled a hole in the top for some threaded steel, twisted a tripod head onto the top, and drove it into the ground with a Tiki Torch stake. On the back of the Boulder 15 we crafted a basic "I" bracket that holds the top portion of the tripod. The tripod head is capable of holding roughly 6 pounds, which is more than the weight of the panel, and its simple ball joint gives us the ability to angle the panel in any position and lock it in place.
With this release we also decided to build a custom Debian package for our software. I will not detail the process here; you can find plenty of tutorials online. The benefit is that we can now add our own custom Debian package repository to each new feeder we build and have it check that repository occasionally for new versions of the software. This means we can deploy updates to feeders in the wild much like the updates you get on your mobile phone: when a new version of the software is detected, the feeder auto-installs the new package and reboots.
The biggest problem remaining with version 1.4 was the quality of the pictures. The best picture our original camera was capable of was a 640×480 pre-compressed JPEG. In the beginning we thought this would be plenty for a simple bird picture, but in the end we were not satisfied with the output. Too much work went into the device to leave it with a crappy picture. We decided to swap out the original camera for the official Raspberry Pi camera module, which is capable of capturing a 5 megapixel 2592×1944 image and is similar to one you might find in a mobile phone.
The new camera came with a few drawbacks. First of all, it is not waterproof. After some research we found a plastic camera mount that was pretty well sealed, and we finished the job with some of our silicone glue. The second issue was that the flex cable shipped with the module is very short: too short to reach from inside our enclosure to the spot on the outside of the feeder where we needed to mount the camera. It doesn’t appear to be all that weather resistant either. We eventually found a nice camera extension kit that allowed us to extend the camera’s cable, and the new ribbon cable was better suited for the outdoors too, which was a great bonus. The final problem with the new camera was focus. The Raspberry Pi camera is a fixed focus module: anything beyond, say, 50cm is in focus, while anything closer is blurry and gets worse the closer you get. In our feeder setup we only have about 7 inches (18cm) between the area where we mount the camera and the area where birds tend to sit. We adjusted the focus of the camera by purchasing a set of reading glasses and borrowing a lens. We experimented with various strengths, but ultimately settled on 3.25+. We removed one lens from the frame and glued it directly over the camera lens on the outside of the camera mount. 3.25+ turned out to be the perfect adjustment for the distance we had. Problem solved, clear pictures aplenty. Check out this before and after shot for comparison.
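The 3.25+ choice also agrees with a back-of-the-envelope thin-lens estimate. If the module's unaided near focus is around 50cm (a 2-diopter object distance), stacking a +3.25 diopter close-up lens moves the nearest sharp distance to roughly 1/(2 + 3.25) meters:

```python
# Thin-lens estimate of the near focus with a close-up lens stacked on.
# Assumes the camera module's unaided near focus is about 50 cm; this
# is an approximation, not a claim about the module's exact optics.
unaided_near_m = 0.50
unaided_diopters = 1.0 / unaided_near_m   # 2.0 D
closeup_diopters = 3.25                   # the reading-glasses lens

new_near_m = 1.0 / (unaided_diopters + closeup_diopters)
print(round(new_near_m * 100, 1))  # new near focus in cm
```

The estimate lands at about 19cm, pleasingly close to the 18cm gap between the camera and the perch, which is consistent with 3.25+ being the sweet spot found by trial and error.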
Swapping the camera also came with a few other side effects. We had to rewrite all the camera code, which could be a disadvantage or an advantage depending on how you look at it. The weatherproof TTL serial camera required us to interact with it via serial communication. Not super complex, but it does take a fair amount of code. Luckily, Adafruit created a library to make this pretty easy on an Arduino, which someone later ported over to Python for the Raspberry Pi. The only missing part of the port was code to handle motion detection, so we coded that part up ourselves. The Raspberry Pi camera module does not have an internal motion detection mechanism, so a new solution was required for that. Serial communication is also no longer a concern, as the new camera module comes with a couple of applications you use to interact with it: raspistill and raspivid. Raspivid is a command line application that captures video with the camera module, while raspistill captures still images. With this new setup we tossed out a bunch of code, which actually made things much simpler in the end.
Back to motion detection: as I mentioned, the new camera does not have this capability. Since the "phone a friend" feature worked so well, and we still had a few other sensors we had experimented with, we decided to bring back the PIR sensor. So in the current version the PING and PIR sensors work together to confirm motion. In the picture at the top of this section you’ll see a stack of components we call the "tower of power": at the bottom is the PING sensor, in the middle are the PIR sensor and a new photocell sensor, and at the top is the camera module with the extra lens. The original camera had an LED ring that would come on in the dark, making it capable of capturing black and white pictures in low light. In the few weeks the old camera was operational we only captured one night picture, so this is not a super important feature. However, its removal did present one new challenge: if motion was detected at night or under heavy cloud cover, the camera would capture a very dark or perhaps even pitch black image. To solve for this we added a photocell, which measures light levels. We did some testing, came up with what we felt were acceptable light levels for good pictures, and added this into the mix. Now a picture is only captured when we have double-confirmed motion and enough light.
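The final capture gate therefore combines three inputs. A sketch of the decision, with hypothetical names and an arbitrary threshold standing in for the light level we settled on in testing:

```python
def should_capture(ping_motion, pir_motion, light_level, min_light):
    """Final capture gate: double-confirmed motion plus enough light.

    `ping_motion` and `pir_motion` are the two sensors' motion flags;
    `light_level` is the photocell reading and `min_light` the minimum
    acceptable level. Units and the threshold value are placeholders,
    not the project's actual calibration.
    """
    return ping_motion and pir_motion and light_level >= min_light

print(should_capture(True, True, light_level=720, min_light=500))   # True
print(should_capture(True, True, light_level=120, min_light=500))   # False
```

Either sensor alone firing, or motion in the dark, falls through without a photo, which is exactly the behavior described above.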
Step 5: In Closing
What did we learn? More than we expected, and probably enough for another blog post all by itself. Let's just say this is probably the biggest misuse of Manifold tech resources to date, and we wouldn’t change a thing. This is the stuff we live for. Our lawyers said we should also mention that more than a couple cases of beer were harmed during the making of this bird feeder. Building your own might pose a health risk. Then again, you might also find it an exhilarating learning experience like we did.
For more information please see the project's companion site - http://www.feedertweeter.net/.