Introduction: The SOMA Project

The SOMA Project was a senior design project created by six university students who love spending all night in lab building robots. A fully autonomous swarm was designed and built to serve as a platform for future swarm applications. Four fully autonomous robots were constructed, each capable of maintaining the relative and absolute positions of every other robot within its field of vision. In addition to tracking each other's relative positions, the robots sense and record the positions of obstacles and share this information throughout the swarm. A dynamic map is maintained by each robot and transmitted to a passive monitoring station, where the map can be viewed in real time. This Instructable covers how the four robots built for this purpose were made. Specifically, it details how the iRobot Create was used as a base for this project and how the rest of the system was built atop it.

There have been many attempts to create a robotic swarm; before the SOMA Project, however, an inexpensive, scalable, full-featured swarm had not yet been achieved. Each of the robots we made costs less than one thousand dollars, has space for hardware expansion, and is designed for scalability. The minimum functionality we set out to achieve was for the swarm to build a map of obstacles in an environment and for each robot to position itself within that map. The ability for each robot to know where it has been and where it's going allows for further study in mobile and ad-hoc networking, complex searching algorithms, and search and rescue applications.

The Warning:
This project is quite complicated, so it should only be attempted if you are already familiar with assembling and debugging electronics. You will need access to a full computer engineering lab with all standard assembly and test equipment, as well as substantial mechanical equipment: a machine shop and a laser cutter. Because this is a difficult project, we will assume the reader is experienced with electronics and machining equipment. As much detail as possible will be covered, but the very basics, like how to solder and how to keep all your fingers when working with a laser cutter and lathe, will not be covered in this Instructable.

We hope that anyone who attempts to build these robots has as much fun as we did.

-The SOMA Team

More information is located at http://www.thesomaproject.net

Step 1: Assembling the Team

This first step, though technically optional, is highly recommended. A project like this always goes a little easier when you've got good company and someone else to help you yell at the robots when they're misbehaving. Your players may be different and your lab may be cleaner, but you'll probably need a group like this. This is the story of the SOMA Project Team.

A long time ago, in a computer engineering lab far, far away, six computer engineers banded together to tackle what we were told was the impossible: a robot swarm. In January of the year 2007, each eventual teammate surfaced from the depths of the Jack Baskin School of Engineering atop the UC Santa Cruz campus, each bringing a crucial skill to what would become the SOMA Project. John and Erik brought the crazy, but two very different kinds: John the slow and methodical crazy that results in tiny 0402 components scattered across a board that would be soldered by hand, and Erik the instant-crazy that makes circuit boards round and espresso machines tremble. Thom came next, better than the CS kids in front of a keyboard and faster than the autoset button on an oscilloscope. Sean arrived, everyone's source of never-ending entertainment and the only one brave enough to take on the RF communication--of whatever project we would come to choose. Andrew, the SolidWorks pro and lover of laser-cut acrylic fumes, agreed to model, remodel, and cut whatever it would take to make some robots. Finally: Rachel... the fearless leader, with more crazy than John and Erik combined, braver than Sean, and the only engineer in the entire school who would agree to be the team leader of this unruly bunch. They knew they wanted to do something with robotics, but didn't know exactly what their goal would be, or just how many robots they were going to make... After whiteboards had been filled with scribbles, sketches, and countless question marks, the team decided on making a Swarm of Mapping Automatons, and the SOMA Project was born.

Step 2: Theory of Operation

All of the robots in the swarm are physically and behaviorally identical. Each robot is capable of tracking its own motion, detecting and avoiding obstacles, and communicating information about its surroundings to other robots. The relative position of each robot is determined through a combination of ultrasonic and infrared transmissions. As a group, the robots are capable of creating a network, joining an existing one, and handling a robot leaving the swarm. The robots drive while collecting and sharing map data, and they recalculate the relative positions of the other robots regularly to reduce mapping error. The map that the robots build takes the form of an occupancy grid, built from ultrasonic ping sensor data and encoder data. Collected map data is shared with other robots and is used by each robot to determine its own path.

Each robot can be seen as a system consisting of three blocks: Map Building, Positioning, and the Control Interface.

Map Building
Each robot gathers obstacle data using an ultrasonic ping sensor mounted on a servo motor. The servo allows the ping sensor to sweep 180 degrees in front of the robot. Once a reading is made, the data is stored in an occupancy grid: a data structure in which each cell (or pixel, in our implementation) holds the likelihood of an obstacle occupying that space. As robots cover the same space, cells where they agree an obstacle exists turn darker, and cells they agree are clear turn lighter.
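
To make the occupancy grid idea concrete, here is a minimal sketch in C of how a single cell might be updated. This is our illustration only, with made-up names (grid, update_cell, OCC_HIT, OCC_MISS); the actual SOMA map uses the paged structure described below.

    /* Hypothetical occupancy grid sketch -- not the actual SOMA source.
     * Each cell holds a 0-255 "occupied" confidence; higher = darker pixel. */
    #include <stdint.h>
    #include <string.h>

    #define GRID_W   128
    #define GRID_H   128
    #define OCC_HIT   16   /* confidence added when a ping reports an obstacle */
    #define OCC_MISS   8   /* confidence removed when a cell is seen as empty  */

    static uint8_t grid[GRID_H][GRID_W];

    void grid_init(void)
    {
        memset(grid, 128, sizeof grid);        /* 128 = "unknown" (mid gray)   */
    }

    void update_cell(int x, int y, int obstacle_seen)
    {
        if (x < 0 || x >= GRID_W || y < 0 || y >= GRID_H)
            return;
        if (obstacle_seen)                     /* agreeing hits darken the cell */
            grid[y][x] = (grid[y][x] > 255 - OCC_HIT) ? 255 : grid[y][x] + OCC_HIT;
        else                                   /* agreeing misses lighten it    */
            grid[y][x] = (grid[y][x] < OCC_MISS) ? 0 : grid[y][x] - OCC_MISS;
    }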

The map dynamically grows as more space is explored. The data structure is designed to be paged on and off of an SD card. Each page contains links to nearby pages, and new pages are added as space is needed. The local area is stored in an external SRAM, and pages that have not been used recently are written out to an SD card on each robot.
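
As a rough illustration of what one page might look like in C (the field names and sizes here are ours, not taken from the SOMA source):

    /* Hypothetical map page -- illustrative only. */
    #include <stdint.h>

    #define PAGE_DIM 32                    /* cells per side of one page           */

    struct map_page {
        int16_t  origin_x, origin_y;       /* world coordinates of the page corner */
        uint16_t neighbor[4];              /* page IDs: north, east, south, west   */
        uint8_t  dirty;                    /* set when modified; flush to SD card  */
        uint8_t  cell[PAGE_DIM][PAGE_DIM]; /* occupancy confidence per cell        */
    };

    /* A small cache of pages lives in the external SRAM; when a page that isn't
     * cached is needed, the least recently used page is flushed to the SD card
     * and the requested page is read in its place. */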

Positioning
The positioning system keeps track of the absolute and relative position of each robot in the swarm. Absolute positioning uses the encoders from the iRobot Create, and the board has space ready for a digital compass and gyro for increased accuracy. Relative position is determined by measuring the time-of-flight (ToF) of ultrasonic pulses emitted by each robot. By recording the ToF at three different points on every receiving robot, each robot can triangulate the position of the emitter. The robot announcing its position simultaneously emits infrared light and ultrasound; the infrared is received essentially instantaneously by every other robot while the ultrasound travels much more slowly, so the distance and direction of the source can easily be determined.

Control Interface
The control interface oversees the operation of all the systems on each robot and is the main artificial intelligence of the robot. The control interface is also responsible for the maintenance of the swarm network and all RF communication. The artificial intelligence is responsible for creating the swarm as robots appear and maintaining the network as a map is built. The rest of the AI is devoted to obstacle avoidance and can be adapted to optimize coverage of a space.

Complete information on the design and construction of the entire project can be found in our Final Report.

Step 3: Bill of Materials - Hardware

There are four files attached, providing the electronics bill of materials (BoM), a board-house-ready zip file, and all the schematics and layouts used in this project. hardwarebom is the bill of materials. layout.zip contains the five files needed to send to a board house to have the circuit boards printed. Layouts.zip contains all the layout files and a few of the schematics. Schematics.zip contains the rest of the schematics used in the project. You will need both the schematics and the layouts to help with assembling the boards.

Some other materials necessary for the creation of the robots include a soldering iron, solder, wire, a small knife, flux, tweezers, a microscope, a multimeter, an oscilloscope, a signal generator, a 40 ohm 1W power resistor for testing purposes, and an ISP Programmer for Atmel microcontrollers. It is possible to build these robots without some of this equipment, but much, much more difficult.

Step 4: Bill of Materials - Mechanical

Attached is the mechanical bill of materials, listing every mechanical part we used; most of these were ordered through McMaster-Carr. For this project, we had access to a 30W laser cutter, able to give us precise cuts through 1/4" acrylic.

Also attached is the complete SolidWorks folder with details of every part cut out using a laser cutter.

Step 5: The Circuit Boards - Intro

The next series of steps covers the functionality and assembly of the individual boards. This will include a description of the circuits and a suggested assembly order for the parts on each board, as well as methods for testing each board as you put it together. Since this is a relatively difficult project, we will assume that the reader has some familiarity with soldering and testing equipment, such as multimeters, power supplies, and oscilloscopes.

Since many of these boards have some small parts (many are surface-mount), you will want a good soldering iron with an un-abused tip to make things easier. We used a decent benchtop Weller for the big stuff, and a Metcal under a soldering station microscope for the small stuff. It is possible to do it all without the microscope, but having one will make your life much easier, and you'll save your eyes.

Step 6: The Circuit Boards: Power

The power board does what its name implies. It takes a 12-17V input and provides regulated 6V power to the rest of the boards. The reason for having a power board is that it allows us to use a switching regulator, which can achieve over 90% efficiency, whereas a normal linear regulator can only achieve around 65%. Since switching regulators don't provide as stable a voltage, we go down to 6V on the power board and then use a linear regulator to drop it again to 5V when it reaches the other boards.

The power board cannot easily be tested as it is built, so you will need to stuff it (solder all the parts on) all at once. To test the power board, place the 40 ohm power resistor across one of the 6V outputs to give it a load, and give it a 12-17V input. Most benchtop power supplies will work for this; we'll discuss later how to use the iRobot Create(TM) battery to supply power to the boards. Use a multimeter to test the output, which should be within 0.2V of 6V.
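
As a quick sanity check on that test load (our arithmetic): at 6V a 40 ohm resistor draws 6/40 = 150mA and dissipates about 0.9W, which is why the parts list calls for a 1W power resistor.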

The included pictures are for reference when assembling the board. The picture of a completed board is there to give you an idea of what it should look like when finished. You'll notice we had to use some 3-pin headers in a couple of the 2-pin header slots; this is because we ran out of parts and didn't have time to order more. Also note that there are two output ports which are not needed for this project but are there for expansion.

The layout and schematic are what you'll need most to assemble the board. I've labeled all the parts on the layout to make it easier to figure out what goes where. Use the schematic for reference if you're uncertain about a part.

Step 7: The Circuit Boards: IR

The Infrared (IR) Board is simply an emitter and a receiver for infrared signals. It receives 6V power, unregulated power, and ground to power the components. The two smaller plugs are the input and the output for the board. The input is a signal generated by the Localization Board that controls the output of the IR LEDs. The output is an active high signal that is sent to the Time-of-Flight Board when an IR signal is received.

The first step in putting together the Infrared Board is to attach the power regulator, the capacitors for filtering power, and the power LED and its current-limiting resistor. See the notes on the image for the locations of these parts. The layout image has all the parts labeled.

Once the regulator and power LED are in place, go ahead and hook up a 6V power supply and ground to the board (the middle pin on the DF5 connector is the 6V input). The LED should light up. Also check the board with a multimeter to ensure you're getting 5V from the regulator.

Next, you'll want to connect the outer ring of IR LEDs, the 82 ohm resistors, the TIP122 transistor, the 4.7k resistor, and the DF3 header that carries the transmission signal. Do a continuity check before you plug anything in, since these parts are going to draw a decent amount of current.

Once you're sure everything is soldered on correctly, hook up the 12-17V supply and ground. You'll also need a signal generator to test this, so if you don't have one, wait until the localization board is working to test this part. To test, set your signal generator to create a 36.7kHz 0-5V square wave in bursts of 11 pulses, 10 times a second. Attach the signal generator output to the transmission signal input on the board. Use an oscilloscope to check the voltage across the 82 ohm resistors. You should see the same signal being dropped across the resistors.
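
For reference when looking at the scope (our arithmetic): 11 cycles at 36.7kHz lasts roughly 11/36700 ≈ 0.3ms, so each burst should appear as a short ~0.3ms packet repeating every 100ms.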

Now solder on the NAND gate chip, the other DF3 header, and the six IR receivers (the PNA4601 parts). A basic test is to use the exact same setup as before and check the output signal. If it goes high on about the fourth edge of the 11 pulses and remains high until shortly after the 11 pulses end, it's working. Once you have a second board assembled, you can test distances by setting one board up to transmit and checking the output signal on the receiving board.

Step 8: The Circuit Boards: Time-of-Flight

The Time-of-Flight (ToF) Board is divided into two separate sections: an analog half and a digital half. The analog section receives very weak signals from the transducers. Each receiver's signal goes through two gain stages where the signal is amplified and centered between 0V and 5V. Next, each signal goes through another amplification stage, for a total of 89dB, and is rectified by the op-amp (which also smooths the signal when the transistors recover from saturation and cutoff). This much gain often results in the signal hitting the power rails--but that's OK--since it's only the arrival time, not the actual waveform, that matters. The analog signal enters the digital domain when it passes through a comparator, which turns the amplified and rectified signal into a clean 0-5V edge.
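
For a sense of scale (our arithmetic): 89dB of voltage gain is a factor of about 10^(89/20) ≈ 28,000, so even a fraction of a millivolt at a transducer is enough to push the output into the rails.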

The digital half of the board itself has two sections: the microcontrollers and the completely separate transmitting circuitry. After the signal crosses over to the digital side of the board from the comparator, it is combined with the IR-on signal from the IR board through an OR gate where it then goes to the input capture on the microcontrollers. The microcontrollers gather data and send it back to the Localization Board through an SPI bus.

The transmit circuitry is very simple. The transducers must be driven at their resonant frequency of 24kHz for a short burst; this square wave is provided by the Localization Board. The signal is split once received by the ToF Board: one branch is buffered (by two sequential inverters) and the other is inverted; both then drive an RS-232 level shifter. By shifting one of the signals 180 degrees and level shifting them both, the transducer is driven at +/-10V, so it sees a 20Vpp signal instead of just 5Vpp.

The ToF Board is one of the most difficult to assemble. There are a lot of small parts and it's the only board that has a serious analog portion with almost 90dB of gain, which means it's more susceptible to poor solder joints and other problems. If you don't have a soldering microscope, tweezers, and a good soldering iron for this board you're in for a lot of pain (you're in for a lot of pain with them, too, just not as much). On this board, perhaps more than any other, frequent testing is worth the hassle and will save you a headache down the road.

The half of the board with the copper pour on top is the analog half. Start assembly by soldering down the voltage regulator, associated capacitors, and the power LED and its current-limiting resistor. Continue assembly with the LMV358 dual op-amps and the surrounding resistors and capacitors. There are lots of tiny Rs and Cs--be careful! I found it easiest to solder down all the components of a specific value at a time, then move on to the next value. Once the first two gain stages (two stages and one channel per op-amp package) are verified to amplify and center the signal between 0V and Vcc, solder down the LMC6484 quad op-amp and the surrounding Rs and Cs. Verify that this stage provides further amplification and rectifies the signal. The analog side of the board is completed by soldering down the LM339 comparator and, again, all the Rs and Cs needed. Make sure to solder down all the components on the bottom side of the board and the connectors for the transducers.

The other half of the board, without the copper pour on top, is the digital half. Again, start by soldering down the voltage regulator and the capacitors that go with it. Once the digital half of the board is powered, solder down the 74HC32 quad OR-gate package and the connector to the IR board, and verify that the analog signals are successfully converted to digital logic levels. Next, solder down the 74HC04 hex inverter and the MAX232 level shifter; verify that the 0-5V input to the ToF board is successfully shifted to +/-10V to drive the ultrasonic transmitter. All that remains are the clock and the three ATtiny44 microcontrollers. Start with the clock, making sure all pads have strong electrical connections. Then solder down the three ATtiny44 microcontrollers. Before you can verify that the microcontrollers are programmable, you must attach all the headers, the pull-up resistor on the reset line, and the big red reset button. Once the headers are all attached, erase and set the fuses on the ATtiny44 microcontrollers.

Step 9: The Circuit Boards: Control

The control board is responsible for the high-level operation of the robot and for building the map. It communicates with the other robots using a Wi.232 wireless module and with the Localization Board over an SPI port. There is also a 10-pin UART connector for debugging from a PC. The board controls the ping sensor and the servo it's mounted on. Using data collected from the ping sensor and the Localization Board, the Control Board builds a map and decides where it should go next.

As before, start by soldering on the regulators, associated caps, and the power LED. Soldering down all three regulators at the same time is fine, but if you want to be extra careful you can test them individually. Once they're down, use a multimeter to make sure their outputs work correctly.

The next step is to solder down the ATmega1281, the SRAM, and all the parts that go with them (the clock, caps, reset circuit, etc.). Test the clock signal with an oscilloscope and then make sure the microcontroller can be programmed (see the programming section for how to do this).

Finally, solder on everything else. Test for shorts to make sure nothing will blow up, and this board is finished until the programming and debugging phase.

Step 10: The Circuit Boards: Localization Board

The Localization Board itself is relatively simple to assemble, but it handles a lot of things. The Localization Board is responsible for gathering data from many of the sensors, including sensors on the iRobot Create. It also sends instructions to the Create using one of its USARTs. The main task of the Localization Board is to keep track of the robot's own current position as well as the positions of the other robots.

The Localization Board works by controlling the Infrared and Time-of-Flight Boards, which are used to calculate the relative positions of the different robots. The Localization Board generates a 24kHz signal for the ToF Board and a 36.7kHz signal for the IR Board. The signals last for about a dozen cycles and are only emitted by one robot at a time. The other robots receive those signals, and because light travels around six orders of magnitude faster than sound, the ToF Board is able to time the difference between the IR signal arriving and the ultrasonic signal arriving at three different points on the robot. For a more in-depth discussion of this method see the formal report (which includes pretty equations for how to calculate the distance and position of transmitting robots).
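
In rough terms (a simplified sketch; the formal report has the full derivation): since the infrared arrives essentially instantly, the delay t_i between the IR edge and the ultrasonic edge at receiver i gives the distance to that receiver as

    d_i = v_sound * t_i,   with v_sound ≈ 343 m/s at room temperature

With the three receivers at known points on the robot, the three distances d_1, d_2, d_3 are enough to solve for the range and bearing of the transmitting robot (trilateration).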

Another thing to note is that the Localization Board has space for a digital compass and a gyroscope. These would have allowed more precise positioning, but due to time constraints they were left off the final version.

To assemble the Localization Board, start with the 5V regulator, surrounding caps, and the power LED parts. Test the output of the regulator as in previous steps by making sure the LED comes on and that the output remains steady at 5V.

Once this is working go ahead and assemble the rest of the board (minus the compass and gyro). Make sure there are no shorts before turning on the power, and then test the board by attempting to program the microcontroller. You're now ready for the next step.

Step 11: Taking Apart the iRobot Create (or... ReCreating the Create)

The iRobot Create is the base of every robot in the SOMA project. We have added a shelving system of acrylic layers on which the digital and analog hardware and sensors of a SOMA robot reside.

Numerous modifications were made to the Create in order to adapt it to our needs. First, most of the plastic chassis was removed except for the bump sensor. This provided us with direct access to the power supply, as well as conveniently placed mounting holes. Second, the three attached wheels were fixed in a dropped position using acrylic pieces to provide additional clearance, allowing the robot to travel over a wider range of surfaces. The three-button PCB was also mounted underneath the bottom acrylic layer in order to provide an easy way to turn the Create on. Lastly, a caster wheel was added to the rear of the robot for added stability.

To begin, we had to take apart the iRobot Create. To do this, unscrew every screw on the underside of the Create. This allows for removal of the top white covering. Remove the entire chassis except for the bump sensor. Even the buttons need to be removed (which requires unplugging them from one of the Create's boards); the button PCB will be re-attached later on the underside of the robot.

The two drive wheels of the Create needed to be forced downward. This allows the robot to traverse a variety of terrain. This is done using the two wheel support pieces. Also, the front wheel needs to be kept in a downward position, using the front wheel blocker piece. This will require the removal of the front bumper, but it can be put back on.

Lastly, the power from the iRobot Create needs to be tapped, and fed into the power board. The image below shows how we did it. The battery is fed into the iRobot's main board, the one that the power and play push buttons are plugged into.

Step 12: Machining the Cones

In order to turn a directional transducer into an omni-directional one, we added cones. The purpose of the cone is to reflect emitted sound out in all directions and to reflect incoming sound down into the transducer. Four cones are needed for each robot: one for the emitter and three for the receivers. This step is a suggestion for an efficient way to make these cones, which took us a little time to figure out.

The first step is to make a holder for the cones. Since they have to be flush against the layer they're attached to, it would not be possible to hold them in a lathe normally while making the part. This shouldn't be difficult if you have experience with a lathe. The part we used was cut from a 2" diameter aluminum cylinder, roughly 3 inches long. The only dimensions that are really important are the size and threading of the screw piece, which should be at least 1/4" long (though significantly less than 1/2", since you can't tap the entire 1/2" of the cone's hole). The size should be 3/8" with 16 threads per inch (3/8"-16 for those of you who speak screw sizes).

Once you have a holder, the cones will be much easier to make. Start with a piece of PVC 2" in diameter and at least 2" long. Place the piece in the lathe and face off one side (make the side flat and perpendicular to the cylinder). Then, drill a hole sized for a 3/8"-16 tap; the drill must be smaller than 3/8", or the tap won't cut threads. Use a 3/8"-16 tap to, well, tap the hole (a tap cuts threads into a hole so that things can be screwed into it).
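
A handy rule of thumb for picking the drill (our note; double-check against a tap drill chart): tap drill diameter ≈ major diameter - 1/TPI, which for 3/8"-16 gives 0.375" - 0.0625" = 0.3125", i.e. a 5/16" drill.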

Now remove the cylinder from the lathe and attach it to the cone holder. Put the cone holder into the lathe so that you can work on the PVC cylinder without worrying about running into the chuck. Finally, use the angled cutting tool on the lathe to cut a 45 degree cone. Ideally, you want the slope to be as smooth as possible and end right at the 2" diameter edge of the cone, so that it looks like the mechanical drawing. If it ends up being a little smaller than 2" that's fine, too.

Step 13: Cutting Out the Layers

There's not a lot of explanation that can go into this. We cut out all the acrylic parts for our robots using a laser cutter. If you have a laser cutter then it's very simple: open the included SolidWorks files, make a mechanical drawing from them, and then use the laser cutter to 'print' it onto some 1/4" acrylic. If you don't have a laser cutter then you'll want to make a trip to your nearest machine shop, where you'll spend a very large amount of time with a mill. Whatever you do, make sure to check the units before you print anything; SolidWorks will tell you whether you're in metric or imperial units.

Step 14: Building the Layers - Expansion

There are 5 layers in this project, each made from 1/4" acrylic. The bottommost is the expansion layer, and ours contained nothing; it was made for future expansion. It does, however, act as the primary structural base for all of the other layers. The SolidWorks model shows how this base is attached. For this project, we used crazy glue to attach acrylic to plastic, and acrylic cement to attach acrylic to acrylic.

All of the screws used for the bases were 6-32. We bought an assortment of sizes, as the bottom layer needed to be screwed to the iRobot Create, which required 3/4" of clearance, while others were only used for board placement, and required 1/2" clearance.

Attached to the underside of the expansion layer is the PCB with the old pushbuttons; this proved to be the most accessible location for our layer design. Also, a caster wheel is added to the underside of the expansion layer for further stability. This requires four 6-32 screws. Continue adding nuts as spacers until the caster wheel makes the robot stable.

A wall is placed on the expansion layer, as it provides some support for the next layer.

Support for the expansion layer comes from the two side walls. These are puzzle-pieced to provide access to the Create's plastic side. An acrylic "hook" piece is made, with holes in it to screw directly into the Create. The wall in the back is also puzzle-pieced, and glued into the expansion layer.

Step 15: Building the Layers - the Rest

The 2nd layer is another expansion layer; however, some wires need to be routed from it to the 3rd layer. The spacing between layers is two inches, so the two-inch standoffs are needed. Brass tubing was used for wire routing. Since the second layer is the first to have standoffs, screws are placed on the bottom of the standoffs. Every screw used in this project was paired with an external-tooth washer for stability.

The third layer is the digital layer. Small standoffs were used to hold the three boards (power, control, and localization) used on this layer. A red switch was used for power to the power module.

The third layer is also the home of the servo and mounted ping sensor. The ping sensor is mounted using a custom built bracket, and crazy glued onto the servo horn.

This layer does not have wire routers, however, as the wires coming from the expansion layer below terminate here and the wires from this layer need to travel upwards.

The fourth layer is home to the Time-of-Flight Board and the ultrasonic transducers. The Time-of-Flight Board hangs upside down to keep the wires to the Localization Board short. The ultrasonic transducers fit snugly into their cut-out portions of the acrylic, which are made so that the transducers align perfectly with their corresponding cones above.

This layer has three standoffs to the next layer, as well as wire routers. The thickest wire is the coaxial cable to the antenna on top; if you purchase a coaxial cable that is too thick, a larger wire-routing hole may be necessary.

The fifth and final layer is home to the cones and the IR Board. The cones are bolted down using zinc-plated low-strength hex head cap screws (3/8"-16 thread, 1/2" length). The IR Board sits on top, with a hole going through the center of it so the antenna's coax cable can fit through.

All of the SolidWorks files give exact dimensions of everything cut out.

Step 16: Connectors and Wiring

There are two main connections that need to be made to the iRobot Create. The first is power, which is tapped from the iRobot's battery. This means that the iRobot must be on before turning on our system. The other connection is to the iRobot's serial port, for sending driving commands and receiving odometry data.

Power is connected to a power switch and then to the power board of our system. From there, the power board delivers the correct voltage to each circuit board. The power board connects to all four of the other boards using DF5 connectors. The Time-of-Flight, Control, and Localization Boards all get regulated power from the power board; the IR Board also gets unregulated power for the IR emitters.

The Control Board connects to the Localization Board, the ping sensor, the servo, and the antenna. It uses a 6-wire ribbon cable for the SPI connection to the Localization Board and a 7-pin CST-100 connector for the ping sensor. For the antenna, we used an SMA extension cable to run the antenna to the top of the robot. For the servo, we used the connector that was provided on the Parallax Standard servo.

The Localization Board connects to the Create with a connector that we wired ourselves: a 4-pin DF3 connector on the Localization side and a Mini-DIN connector on the iRobot side. The Localization Board also connects to the Ultrasonic Board using a 10-pin ribbon cable connector for all the Time-of-Flight/Localization connections, including the SPI connection and the 24kHz signal. It also sends the 36.7kHz signal to the IR Board over a 2-pin DF3 connector.

The Ultrasonic Board connects to the IR Board and all the transducers using the 2-pin DF3 connectors.

Step 17: Programming the Boards - Setup

There are three boards that need to be programmed. The control and localization boards are home to Atmel ATmega1281 microcontrollers. The time of flight board is home to three Atmel ATtiny44 microcontrollers.

Before programming the microcontrollers, the correct fuses must be set. Be careful setting these, and make sure you're powering the boards off a power supply when you do so. If the wrong fuses are set, or the voltage drops while they're being set, the microcontroller can become unusable. We broke at least one microcontroller by accidentally setting the wrong fuses.

All Microcontroller Lock Bits:
-Mode 1: No memory lock features enabled
-App Protection Mode 1
-Boot loader protection mode 1

ATmega1281 Fuses:
-Brown-out detection disabled
-JTAG interface enabled
-Bootflash section size = 4096 words
-Bootstart address = $F000
-Ext. Clock; Start-up time ... + 65 ms

ATtiny44 Fuses:
-Brown-out detection disabled
-Ext. Clock; Start-up time ... + 65 ms

When programming the control or localization board, the SPI connection to the other boards must be unplugged.

Also, when programming the localization board, the Create must be disconnected.

The ToF board must be disconnected from the IR board and the localization board during programming. There are some other tricks to the ToF board, though; read on.

Since the three ToF microcontrollers communicate to each other through the SPI bus that is also used to program them, only one can be connected to the programmer at a time. The clock on the SPI bus is connected to each microcontroller through a standalone jumper. Once the ToF board is completely programmed, all three jumpers need to be in place; when one of the microcontrollers is being programmed, it should have the only clock connection on the board.

When completed, all the ToF microcontrollers will be slave devices on the localization board's SPI bus. For testing and debugging purposes, however, one of the microcontrollers' MISO and MOSI lines are connected to the bus through jumpers, so (when disconnected from the localization board, of course) that microcontroller can act as a master on the SPI bus with the other two as slaves. The two configurations are shown below. Remember, on the ToF Board, a jumper's long side always runs parallel to the long side of the ToF Board.

Step 18: Programming the Boards - Programming

To program, simply plug in the programmer and use the AVR Studio software to compile the desired project and program the flash in the appropriate microcontroller. src.zip contains the entire source directory for this project.
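
If you prefer the command line to the AVR Studio GUI, avrdude should also work (this is a suggestion, not how we originally programmed the boards; the -c programmer name depends on your ISP hardware, and the .hex file names below stand in for whatever your build of the attached projects produces):

    avrdude -c avrisp2 -p m1281 -U flash:w:command.hex
    avrdude -c avrisp2 -p t44 -U flash:w:tof.hex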

The Control Board uses the command.aps project, which has three main features: communication, motion control, and token passing. It starts by first creating or joining a network and then starts the main state machine for motion control and token passing. If the robot is declared the leader then it will take its turn to explore. This means that it will drive and avoid obstacles, stop and do a 180 degree sweep of the area, and send its location as well as the locations of obstacles around it to a PC to create a map. It then passes leadership to another robot in the swarm and the process starts over again.
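
As a rough sketch of that token-passing loop in C (our illustrative pseudologic with made-up state and helper names, not the actual command.aps code):

    /* Hypothetical token-passing loop -- states and helpers are stand-ins. */
    enum state { WAIT_FOR_TOKEN, EXPLORE, SWEEP, PASS_TOKEN };

    /* Helpers assumed to exist elsewhere in the firmware. */
    int  token_received(void);
    void drive_and_avoid(void);
    int  explore_turn_done(void);
    void sweep_ping_sensor(void);
    void broadcast_position_and_obstacles(void);
    void send_token_to_next_robot(void);

    void control_loop(void)
    {
        enum state s = WAIT_FOR_TOKEN;
        for (;;) {
            switch (s) {
            case WAIT_FOR_TOKEN:                 /* listen on the RF network       */
                if (token_received())            /* leadership handed to us        */
                    s = EXPLORE;
                break;
            case EXPLORE:                        /* drive while avoiding obstacles */
                drive_and_avoid();
                if (explore_turn_done())
                    s = SWEEP;
                break;
            case SWEEP:                          /* 180-degree ping sweep          */
                sweep_ping_sensor();
                broadcast_position_and_obstacles();
                s = PASS_TOKEN;
                break;
            case PASS_TOKEN:                     /* hand leadership onward         */
                send_token_to_next_robot();
                s = WAIT_FOR_TOKEN;
                break;
            }
        }
    }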

The Localization Board uses the localize.aps project, which contains the functions that talk to the iRobot and to the Ultrasonic Board. It receives odometry readings from the iRobot and localization readings from the Ultrasonic Board. It also sends the commands to the iRobot that drive the motors. In addition, the Localization Board generates the 24kHz and 36.7kHz signals that drive the ToF and IR transmissions, respectively.
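
A minimal sketch of how one of those carriers can be generated with Timer1 in CTC mode on the ATmega1281 (our illustration, assuming an 8 MHz clock and the OC1A pin; the actual localize.aps code may do it differently):

    /* Rough carrier-generation sketch -- not the actual localize.aps code. */
    #include <avr/io.h>

    #define F_CPU_HZ 8000000UL                     /* assumed clock frequency */

    static void carrier_start(unsigned long freq_hz)
    {
        DDRB  |= (1 << PB5);                       /* OC1A pin as output      */
        OCR1A  = (F_CPU_HZ / (2UL * freq_hz)) - 1; /* toggle twice per period */
        TCCR1A = (1 << COM1A0);                    /* toggle OC1A on compare  */
        TCCR1B = (1 << WGM12) | (1 << CS10);       /* CTC mode, no prescaler  */
    }

    static void carrier_stop(void)
    {
        TCCR1B = 0;                                /* stop the timer          */
        TCCR1A = 0;
        PORTB &= ~(1 << PB5);                      /* leave the pin low       */
    }

    /* carrier_start(36700) would drive the IR board and carrier_start(24000)
     * the ultrasonic transmitter; a short delay followed by carrier_stop()
     * produces the burst of roughly a dozen cycles. */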

Each of the microcontrollers on the ToF Board uses the tof.aps project and is functionally identical. The purpose of the ToF Board is straightforward: by timing the difference in arrival time between simultaneously emitted infrared light and ultrasonic sound, you can tell how far away the emitting source is. By recording the time-of-flight at three separate points on the robot, you can triangulate the position of the source. Each microcontroller uses a dedicated input capture pin to accurately time how long the infrared is visible (as a verification that it was the intended signal) and then calculates the time-of-flight of the sound. This data is ready to be transferred back to the localization board upon request.
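
Here is a simplified sketch of the capture idea on an ATtiny44, in C (our illustration, not the actual tof.aps code; it skips the IR-duration check and assumes the OR-ed signal arrives on the ICP1 input-capture pin):

    /* Rough time-of-flight capture sketch -- illustrative only. */
    #include <stdint.h>
    #include <avr/io.h>
    #include <avr/interrupt.h>

    static volatile uint16_t ir_edge;      /* timer count when IR was detected    */
    static volatile uint16_t us_edge;      /* timer count when ultrasound arrived */
    static volatile uint8_t  edges_seen;

    static void tof_capture_init(void)
    {
        TCCR1A = 0;
        TCCR1B = (1 << ICES1) | (1 << CS10);   /* capture rising edges, clk/1     */
        TIMSK1 = (1 << ICIE1);                 /* enable input-capture interrupt  */
        sei();
    }

    ISR(TIM1_CAPT_vect)
    {
        uint16_t stamp = ICR1;                 /* hardware-latched timestamp      */
        if (edges_seen == 0)
            ir_edge = stamp;                   /* first edge: the (fast) IR       */
        else if (edges_seen == 1)
            us_edge = stamp;                   /* later edge: the ultrasound      */
        edges_seen++;
    }

    /* Time-of-flight in timer ticks; distance = ticks / F_CPU * 343 m/s. */
    static uint16_t tof_ticks(void)
    {
        return us_edge - ir_edge;
    }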

In addition to the main functions for each microcontroller, there are various test programs that can be run on the boards. Our main test program for the robots was a WASD program that remote-controlled the robots using their on-board transceivers.
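
A bare-bones sketch of what such a tele-op program can look like on the PC side, in C (ours for illustration; send_packet() is a made-up stand-in for whatever routine pushes bytes out over the Wi.232 link, and plain getchar() means each key must be followed by Enter):

    /* Hypothetical WASD tele-op sketch -- illustrative only. */
    #include <stdio.h>

    void send_packet(const char *cmd);          /* assumed to exist elsewhere */

    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF) {
            switch (c) {
            case 'w': send_packet("FWD");   break;   /* drive forward  */
            case 's': send_packet("REV");   break;   /* drive backward */
            case 'a': send_packet("LEFT");  break;   /* spin left      */
            case 'd': send_packet("RIGHT"); break;   /* spin right     */
            case ' ': send_packet("STOP");  break;   /* stop           */
            }
        }
        return 0;
    }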



Step 19: Setting Up the Swarm

In order to gather map information from the swarm, a PC is connected to a Wi.232 development board. Connect the Wi.232 development board to the PC and install the drivers and associated software. To verify that the PC can communicate with the robots, the Wi.232 evaluation software can be used. Start the program (downloadable from the bottom of this step) and go to the "chat" tab. Turn on a robot, and packets of data from the robot should appear in the chat window on the PC.

To actually visualize what this data means, use the SOMA observation station program. In order to use this program, glut32.dll is required; it is easily obtainable by searching for it on Google. This program displays a map of all the data the robots have gathered in a window. Each robot displays on the screen in a different color. As the robots move around, they paint white areas, indicating areas they know are open space. As the robots explore, the map becomes more and more solid, with well-defined edges where obstacles such as walls, tables, and other furniture are discovered.

Set up each of the robots in an open space, all facing the same direction and spaced about two feet apart from each other. This will allow them to start exploring without getting in each other's way too much. Once all the robots are set up, turn them on one at a time. After turning on the first robot, it will show up on the PC but not move; it initially waits for there to be at least two robots before either starts exploring. Once you turn on a second robot, it will also show up on the PC, and the first robot will start exploring. You can keep turning on more robots; they will join the network and continue exploring.

Step 20: Conclusion

So that's it. The four robots you've seen and grown to love (or hate?) over these last 20 steps are the result of locking six engineers in a room without windows for six months. We somehow retained our sanity... or at least didn't go too much further over the edge. Thanks to everyone who helped us during the project and especially to J.J. Garcia, who funded our work, and to iRobot, who gave us three of our Creates and made our project possible.

2000 engineering hours and this is what we have to show for it...

This video is the conclusion and demonstration from one of our final presentations. The robots are seen to swarm: passing leadership to one another, sweeping their ping sensors, and updating the map on the overhead display.