The Tic Tac Toe Robot plays tic tac toe against a human, using a robotic arm to make its moves. The robot always wins (if the human makes a mistake) or ties (if the human does not). The introductory video gives a high level overview of what the robot does and how it does it.
Before game play, the robot prompts the human to clear the game board (removing "tokens" used to mark player moves) and to place the robot's tokens in the "yard" so the arm can reach them. During game play, the robot
- prompts the human to make a move (place a token on the game board) and then indicate completion
- captures an image of the game board
- processes the image to determine the move made by the human
- determines where to move its token
- uses the arm to make its move
- determines whether there is a win or tie
- prompts the human to move again, or announces a tie or a win by the robot
This set of actions continues until the game is complete or the human indicates a desire to quit.
The overall design is admittedly overly complex, somewhat intentionally. The hardware design is over-engineered, mostly because of a lack of mechanical engineering expertise and evolving requirements. The software design is over-engineered to increase performance, and uses multiple processors, each running multi-threaded software, and all connected via WiFi.
The robot is very challenging to build. That said, it was fun to build, if sometimes exasperating.
There are several components to the robot that will be explained in the steps below:
- the robotic arm used to move the robot's tokens during the game
- the game board upon which the human and the robot place tokens during the game
- a Raspberry Pi implementing visual and tactile aspects of the user interface
- a Raspberry Pi implementing an image capture capability and an audible aspect of the user interface
- an Apple MacBook Pro used for overall coordination, the game logic, and arm manipulation
The steps that follow not only describe how to build the various components and how they work together but also offer the rationale behind design decisions. Thus, makers can examine alternatives, make different decisions, yet still benefit from this instructable.
Step 1: The Robotic Arm (mechanical)
The robotic arm was the motivation for the entire project. I wanted an arm with at least 4 degrees of freedom (DOF) plus a gripper. During the early stages of planning, I hoped to acquire or build a sort of "general purpose" arm that could access objects in a somewhat arbitrary three dimensional space. An example task is picking up an object on the floor and placing it on a shelf.
Investigations identified the two major factors in choosing or designing an arm: reach and torque. The former is based on arm geometry, while the latter is based on arm geometry, arm material, and the end load (in this case the token placed on the game board by the robot).
Reach is not difficult to calculate. It requires only reasonably simple geometry. That said, I had to identify some task that would drive reach and torque requirements. The original task was to play checkers, not tic tac toe.
The driving factor for the arm reach in a board game like checkers is the size of the game board. Checkers requires a game board with 8x8 cells. I decided that I wanted to use tokens approximately 1" in diameter for two reasons: first, they should be easy for a human to manipulate; second, since I planned to use image processing to identify the state of the game board, "bigger is better". To allow room for gripper movement, the board would need roughly 1.75" from cell/token center to center. That meant the span between the minimum reach and maximum reach had to be at least 14".
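The reach arithmetic above can be sketched in a few lines of Java. The class and method names are mine; the 1.75" pitch and cell counts come from the text:

```java
public class ArmReach {
    // Depth of the board along the arm's reach direction: n cells at the
    // given center-to-center pitch (the rule of thumb used in the text).
    static double boardSpanInches(int cellsPerSide, double pitchInches) {
        return cellsPerSide * pitchInches;
    }

    public static void main(String[] args) {
        System.out.println("checkers:    " + boardSpanInches(8, 1.75) + "\""); // 14.0"
        System.out.println("tic tac toe: " + boardSpanInches(3, 1.75) + "\""); // 5.25"
    }
}
```

The same arithmetic explains the warning later in this step: shrinking the board from 8x8 to 3x3 cells drops the required span from 14" to 5.25".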
The token weight (quite low, as I chose wooden discs) and the arm reach provided initial requirements for the robotic arm. There are a lot of pre-built arms or arm kits available. Many can deal with the expected end load from a token. Most of them don't come close to a 14" span. Those that do get close are quite expensive. At that point, I decided to build my own arm, allowing me to customize both reach and end load (thinking ahead to other projects with larger end loads), and to get additional expertise with servos and the complexities of robotic arm design, construction, and use.
I decided to use Actobotics parts from Servo City to build the custom arm, due to previous, and very pleasant, experiences with the company's products and technical support team. Actobotics parts offered a lot of flexibility in terms of arm geometry and material, as well as torque available from the servos. To reduce cost and complexity, I decided to limit the arm to 4 DOF plus the gripper. I call the 4 joints in the arm (see the first picture):
- base (sometimes called shoulder azimuth): rotates 180 degrees in a horizontal plane
- shoulder (sometimes called shoulder elevation): rotates 90 degrees in a vertical plane
- elbow: rotates 180 degrees in a vertical plane
- wrist: rotates 90 degrees in a vertical plane
The actual gripper presented another design decision. It influences both the reach and torque. Assuming Servo City as the supplier, there were three choices. I decided that the parallel gripper offered the most flexibility for a general purpose robotic arm.
Once this basic design was in place, the detailed design required an iterative approach (with help from Servo City tech support) that required calculation of reach and torque for each iteration. Torque is much more difficult to calculate than reach. Without a mechanical engineering background, I searched the net for "the equations". What I found was many different discussions with many different equations, some of which were unusable or just wrong. The bottom line is that for each joint (servo) in the arm, one must consider not only static torque (when the joint is not moving) but also dynamic torque (when the joint is moving); the total torque is the sum of the two. Static torque is easy to understand and almost simple to calculate. Dynamic torque is a bit harder to understand and a bit harder to calculate. The key factor is to use mass, not weight, for both forms of torque. I also concluded it was much easier to use metric units for torque calculations because of the clear distinction between mass and weight in the metric system. The equations I derived are:
Ts (static torque) = M * Ag * L where
- M is the mass being moved
- Ag is acceleration due to gravity
- L is distance of M from joint
Td (dynamic torque) = M * Ar * L^2 where
- M is the mass being moved
- Ar is angular acceleration
- L is distance of M from joint
If these equations are correct (I remain a bit uncertain), for the custom arm the static torque dominated at each joint. The dynamic torque, with a reasonable angular acceleration of 45 degrees per second squared, was at most about 10% of the static torque. In addition, one or more sources I consulted suggested doubling the calculated torque when choosing servos.
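The two equations can be coded directly as a sanity check. This is a minimal sketch in Java using SI units; the example mass and reach are my assumptions, not measurements from the actual arm:

```java
public class TorqueEstimate {
    static final double G = 9.81; // acceleration due to gravity, m/s^2

    // Static torque (N*m): Ts = M * Ag * L, mass M (kg) at distance L (m).
    static double staticTorque(double massKg, double armM) {
        return massKg * G * armM;
    }

    // Dynamic torque (N*m): Td = M * Ar * L^2, with angular acceleration
    // Ar in rad/s^2.
    static double dynamicTorque(double massKg, double armM, double alphaRadS2) {
        return massKg * alphaRadS2 * armM * armM;
    }

    public static void main(String[] args) {
        double mass = 0.2;                 // hypothetical end load + gripper, kg
        double reach = 0.36;               // roughly 14" expressed in meters
        double alpha = Math.toRadians(45); // 45 deg/s^2, as in the text
        double ts = staticTorque(mass, reach);
        double td = dynamicTorque(mass, reach, alpha);
        System.out.printf("Ts=%.3f N*m, Td=%.3f N*m, total=%.3f N*m%n",
                ts, td, ts + td);
    }
}
```

Note that this computes the contribution of a single point mass; a real analysis sums the contributions of every segment and servo outboard of the joint.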
Given the relatively minor contribution from dynamic torque, I did some rough calculations of (static) torque to derive the ballpark torque requirements for the servos for four joints in the arm. The requirements mandated servo channel gear boxes for the shoulder and elbow joints. The need to easily mount the base made a bottom flange gear box attractive. To protect the wrist servo, I included a servo block. With these components chosen, the final torque requirements for the servos for four joints in the arm could be calculated. The gripper servo torque was not calculated, but assumed to be very low.
The next task was to choose the specific servos. Brushless servos have lower power requirements and a long lifetime, and can be powered with LiPo 2S batteries. That said, they are very expensive. As a design compromise, I used brushless servos for the shoulder, elbow, wrist, and gripper, but a digital brushed servo for the base. My justification is that the brushless servos will last longer than I will, and could thus be reused in future projects.
The final mechanical design, using an offset approach for the link segment between the elbow and wrist (see the first picture), achieved a span of approximately 15.3". The resulting torque was more than adequate (probably way overkill) for the expected end load.
One final task arises with the arm design, relating to gripping tokens. Since they are circular, only 1/4" thick, and won't be placed with precision (a human is involved), I worried about how to ensure alignment so the gripper "fingers", themselves only 1/4" wide, could reliably grip a token. I also worried about how much closure of the gripper would be necessary to ensure a firm grip. I briefly looked at force sensors, but that did not solve the alignment problem. I eventually decided to construct extensions for the gripper with a shape (a sort of crescent moon) that would accommodate token misalignment and offer some flexibility to ensure a firm grip without undue stress on the token or the gripper servo. See the eighth picture.
WARNING! It is important to understand that when tic tac toe became the task, the arm requirements changed dramatically. First, since the arm was only used to pick up tokens from a horizontal surface and place them on the same surface, a so-called "palletizing" arm design would work. Second, the required span, assuming the same cell geometry, became 5.25", not 14"; that change should also significantly lower torque requirements. As a result, to just play tic tac toe, you could likely use available arms or arm kits, and certainly create a cheaper design using Actobotics parts.
The following parts are required for the arm:
Actobotics from Servo City:
- BM-5485HB-180 Servo Gearbox (7:1)
- CM-9380TH-180 Servo Gearbox (7:1, 90 degrees)
- CM-9380TH-180 Servo Gearbox (7:1, 180 degrees)
- 637116 Standard Plain Shaft ServoBlock™ (25T Spline)
- 39486 HSB-9485SH servo [2x]
- 585480 Pattern Bracket C [2x]
- 585494 90° Pattern Mount [3x]
- 545344 Dual Pinch Bolt [4x]
- 545600 Clamping hub
- 545404 Bottom Tapped Pattern Mount B
- 637092 Parallel Gripper Kit A (I had to order a part to fit the 25T spline)
- 632700 5/8"x8" aluminum tube
- 632268 5/8"x6" aluminum tube
- assorted socket head 6-32 screws
Additional parts (available from many suppliers):
- servo extension cables; the number and length needed depends on how far from the arm the servo controller is located
- "base plate" for mounting the arm; I used a 2'x4' 1/2" plywood board, but anything that will allow the arm to reach to its full extent without tipping over will work
- thin aluminum sheet for gripper
- cable wraps
- cable ties
The last two are optional to provide a cleaner and perhaps more robust design.
The high level instructions for constructing the arm are just a recommendation. Because servo shafts must be in a known position for proper alignment of various parts, you will likely have to make some adjustments once you can drive the servos. I certainly did. This includes attaching subassemblies to a servo shaft loosely, positioning servos, measuring angles (tricky, by the way), tightening and loosening screws, etc.
I chose to hide the servo cables inside the aluminum tubes linking the joints, mostly for aesthetic reasons. Thus, for the most part, it is best to construct the arm "backwards", from the gripper to the wrist, wrist to elbow, etc. If you choose to leave cables showing, you can construct the arm from base to gripper, which can be easier.
- Mount the BM-5485HB-180 on the "base plate". For my robot, I mounted the gear box in the middle of the 4' dimension, with the "back" of the gear box right on the edge of the base. See the second picture. I used six #4 wood screws to secure the gear box to the board; you may need something different if you don't use a wooden base plate.
- Build the gripper. Use one of the HSB-9485SH servos. See the assembly video.
- Attach the 545404 to the long side of a 585494 using two 1/2" screws, but don't tighten. See the third and fourth pictures.
- Attach the gripper to the 545404 using four 7/16" screws; tighten them and then tighten the screws from the previous step. Again, see the third and fourth pictures.
- Construct the wrist joint using the servo block and one of the HSB-9485SH servos. See the servo block assembly video. IMPORTANT! Before you insert the servo into the servo block, you must attach a 545344 to the side of the block towards the elbow using four 5/8" screws, because once the servo is in place it is impossible to access the screws.
- Mount the 545600 onto the 585494 using four 3/8" screws, but do not tighten. Slip the 545600 onto the shaft of the wrist servo block. Align the gripper (remember warning above), then tighten the 3/8" screws and the screws in the 545600. See the fourth picture.
- Push servo cables thru the 545344 on the wrist servo block. See the fourth picture. Connect an appropriate servo extension cable to the gripper servo and to the wrist servo. Push the cables thru the 8" aluminum tube and insert the tube into the 545344. Tighten the clamping screws on the 545344. NOTE: It is probably a good idea to attach small labels to the servo cables as it becomes difficult to determine which is which once the arm is complete.
- Attach a 545344 to the short side of a 585494 using four 5/8" screws; do not tighten. Route the servo cables for the wrist and gripper thru the 545344. Insert the 8" aluminum tube into the 545344. Rotate the tube so that the edge of the long side of the 585494 is parallel to the back of the wrist servo. Admittedly this can be quite difficult. Now tighten the clamping screws and the 5/8" screws in the 545344.
- Attach a 545344 to a 585480 using four 5/8" screws; note that these screws are not accessible after the gearbox is inserted. Attach the CM-9380TH-180 Servo Gearbox (180 degrees) for the elbow to the 585480 using at least four (I used eight) 5/16" screws. See the fifth picture.
- Route the servo cables for the gripper, the wrist, and the elbow thru the hole in the 585480. See the fifth picture. Connect an appropriate servo extension cable to the elbow servo.
- Attach the 585494 from the previous step to the CM-9380TH-180 Servo Gearbox (180 degrees) for the elbow using four 7/16" screws. The servo must be positioned at some known angle, e.g., 0, 45, 90, or 180 degrees to facilitate proper alignment. See the fifth picture.
- Route the servo cables for the gripper, the wrist, and the elbow thru the 6" aluminum tube.
- Attach a 585480 to the base gearbox using four 1/4" screws. Attach the CM-9380TH-180 Servo Gearbox (90 degrees) for the shoulder to the 585480 using at least four (I used eight) 5/16" screws. See the sixth picture. Note that the base servo should be positioned at some well known angle, e.g., 90 degrees to ensure alignment.
- Attach a 545344 to the short side of a 585494 using four 5/8" screws. Route the servo cables for the gripper, the wrist, and the elbow thru the 545344. Insert the 6" aluminum tube into the 545344. Make sure the top edge of the 585494 is parallel to the top surface of the 585480 at the elbow joint. This too is difficult. Tighten the clamping screws in the 545344. Again see the sixth picture.
- Attach the 585494 to the shoulder gearbox using four 7/16" screws. The shoulder servo should be positioned at some known angle, e.g., 0, 45, or 90 degrees for proper alignment. See the first and sixth pictures.
- You can now use the optional cable ties and wraps to "clean up" the nest of cables. See the first picture.
Add the gripper extensions.
- Design the gripper extension that suits your needs, and that can fit on the gripper "fingers". My design used an extension 1" wide x 3/8" high.
- Cut aluminum sheets for the gripper extensions. For my design, the resulting piece size is 1"x1.75" (see the seventh picture).
- Make a small cut to free a 3/8" high piece on each side of the piece. This leaves the extension attached by 1/4" of aluminum, i.e., the thickness of the gripper fingers.
- Cut 3/8"x1.75" (reduced by the kerf size of the cut in step 3) of material on one side of the piece. Do the same to the other piece.
- Drill holes for 6-32 screws so that the piece can be attached to the gripper fingers.
- Carefully bend the remaining side of the piece to a 90 degree angle.
- Attach the extensions to the gripper fingers using 6-32 screws and nuts.
- Bend the extension to produce the crescent shape. See the eighth picture.
Step 2: The Robotic Arm (electrical)
There are obviously many ways to produce a servo PWM signal. All have pluses and minuses. Prior to planning the robot, I'd purchased some Pololu Maestro servo controllers. Given that I only needed to drive five servos, the Micro Maestro seemed a good choice.
I originally hoped to have multiple servos running simultaneously for maximum performance. Thus the driver had to deliver as much as 8 amps. Checking with Pololu tech support, I found that the header pins on the board would support only about 3 amps, though the traces on the board would support up to 6 amps. I removed the header pins and soldered some 14 gauge wire directly to the traces.
Servo City tech support recommended using a battery, especially LiPo, since it can deliver as much current as needed (unlike a power supply, which has a maximum). I chose a high capacity (7.5 Ah) LiPo 2S battery so that I could run multiple servos for about an hour.
I also decided to put in a simple SPST switch to make sure I could kill power when things went wrong. And things did go wrong (I have some bent parts as a result). It also proved a lot more convenient than unplugging the battery when power was no longer needed.
Jumping ahead, after a long period of testing while infrequently checking the voltage to measure discharge, I drained a LiPo battery to the point where it died. The servos were running fine at one point in time, and 2 minutes later, they weren't running at all. I decided to install a cheap voltmeter so that I could continuously monitor the battery voltage and thus the state of charge.
You can see the total circuit in the first picture. The second picture shows a close-up of the Maestro, with servo cables plugged in, the switch, and the meter.
I chose to place the base on Maestro channel 0, the shoulder on channel 1, the elbow on channel 2, and the wrist on channel 3. I found that the servo cable connector housings were "fat" enough that it became hard to put a cable on channel 4, so I placed the gripper on channel 5.
The design of the electrical portion of the arm is another bit of over-engineering that required extra cost and extra effort. It turns out that for reasons of simplicity and safety I never drove more than one servo at a time. Plus the brushless servos that do most of the work really are efficient. Thus, I probably did not need to modify the Maestro. More importantly, I could have saved some money by buying lower capacity batteries; I've run the robot for several hours over a period of a few days with apparently little impact on the battery charge.
The parts list reflects what I did. You may wish to use a simpler design.
- Pololu Micro Maestro, available from Servo City as well as Pololu
- 14 gauge stranded wire, from a spare electrical cord
- SPST switch, anything that will handle up to 6 amps at 12 VDC
- LiPo 2S battery, I used this one but per the above discussion, a much cheaper battery seems acceptable
- T connectors, I got mine here but per the above discussion, a different style could be needed for a different battery
- voltmeter, I got mine from Sparkfun [optional]
- connection wire
- resistors [optional]
- JST connector (male, female) [optional]
- various hobby wood supplies, e.g., 1/8" plywood, square rods of different sizes [optional]
- USB A to USB mini-B cable
- miscellaneous wood screws
- cable clamps [optional]
- heat shrink tubing [optional]
The following steps describe what I did to construct the arm electrical system in a way that allows maximum reuse of key parts, plus provides stability of the parts when moving the robot to facilitate construction, documentation, and actual play. You can certainly do something different.
- Solder the T connector to the battery leads. This was quite difficult to do. I think that a different form of connector would have made the job a lot easier. If you wish, use heat shrink tubing to protect the solder connections.
- Desolder the power header pins from the Micro Maestro.
- Solder a length of two conductor 14 gauge wire (one conductor for +V and one for ground) to the power port on the Maestro. The length depends on where you position the battery (see below).
- Create a mount for the Maestro so that the Maestro is elevated enough to allow plugging in the USB cable without difficulty; it should also keep the Maestro highly stable to allow for plugging and unplugging the servo connectors. I used a 1/4" thick board cut to roughly the same size as the Maestro. You will likely have to do some carving to allow the Maestro to rest flat on the mount.
- Glue the Maestro mount to a piece of plywood (I used 1/8") large enough to include the switch housing (see below). This becomes the "module plate" for the servo controller. See the second picture.
- Create a switch housing. The size of this housing depends on the size of the SPST switch. I built mine with two vertical side pieces of 1/4" board and a top plate holding the switch from 1/8" plywood. You must of course drill a hole for the switch in the plywood.
- Mount the switch in the top plate. Attach the plate to the two sides with two small wood screws.
- Glue the switch housing to the "module plate" at the desired distance from the Maestro mount.
- Attach the "module plate" to the base plate using wood screws of the proper length (depends on thickness of base plate).
- Determine where you wish to position the battery. I created a "fence" to prevent the battery from moving. I used 4 pieces of 1/2" square wooden rods attached to the base plate with wood screws. See the first picture.
- Attach the Maestro to the mount on the "module plate" using wood screws.
- Cut the ground conductor in the 14 gauge power wire attached to the Maestro to the proper length so that when soldered to a T connector, the T connector plugs into the battery without too much tension or slack.
- Cut the +V conductor in the 14 gauge power wire attached to the Maestro to the proper length to reach one terminal of the switch mounted in its housing. Remove the switch from the housing and then solder the wire to the terminal.
- Solder the left over piece of 14 gauge wire to the other switch terminal. Cut the wire so that the +V and ground conductors are aligned at the battery end. Solder the conductors to the T connector. You can use heat shrink tubing here as well.
- Attach the switch top plate.
- Plug the USB cable into the Maestro.
- I used cable clamps to provide strain relief on the USB cable to eliminate any possible damage to the Maestro.
At this point you can now plug the Maestro into the battery to power the servos in the arm. The switch allows you to turn power on/off without plugging and unplugging the battery.
I installed the voltmeter long after the initial construction, so it required a few extra steps that would be unneeded if done initially. I made the voltmeter pluggable, but of course it is not necessary. Since the voltmeter I used is 0-5V and the battery is at least 7.4V, I had to include a voltage divider. If you want to use the optional voltmeter:
- Measure the internal resistance of the meter. Mine was surprisingly low -- 4.86 Kohms.
- It made sense to simply divide the incoming voltage by two; your needs may differ. Create an appropriate voltage divider based on the meter's internal resistance. With a very high meter resistance, you can use two equal resistors and measure the voltage across one of them. With a low meter resistance, the meter in effect becomes one of the two resistors. I used a 4.7 Kohm resistor in series with the meter; I used a 150 Kohm resistor in parallel with the meter to bring that branch closer to 4.7 Kohms for the "second resistor".
- Attach a ground wire to the ground terminal of the meter. Attach a +V wire to the incoming lead of the 4.7 Kohm resistor (use heat shrink if you wish).
- Attach one side of a JST connector to the ground and +V wires.
- Solder a +V wire to the Maestro side of the SPST switch.
- I used a prebuilt prototyping wire with a female connector to attach a ground wire to the ground header pin on the Maestro.
- Attach the other side of a JST connector to the ground and +V wires.
- Plug the JST connector sides together.
If you wish, you can build a small "case" for the meter. I used 1/4" boards and 1/8" plywood. I simply glued the two pieces of plywood to the boards. I screwed the meter to the plywood using wood screws.
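The divider arithmetic above can be verified with a few lines of Java. The class and method names are mine; the resistor and meter values are those given in the steps:

```java
public class DividerCheck {
    // Equivalent resistance of two resistors in parallel.
    static double parallel(double r1, double r2) {
        return (r1 * r2) / (r1 + r2);
    }

    // Fraction of the battery voltage that appears across the meter branch.
    static double dividerRatio(double seriesOhms, double meterBranchOhms) {
        return meterBranchOhms / (seriesOhms + meterBranchOhms);
    }

    public static void main(String[] args) {
        double meter = 4860;    // measured internal resistance of the meter
        double shunt = 150000;  // resistor in parallel with the meter
        double series = 4700;   // resistor in series with the meter
        double branch = parallel(meter, shunt); // roughly 4707 ohms
        System.out.printf("meter branch = %.0f ohms, ratio = %.3f%n",
                branch, dividerRatio(series, branch));
        // The ratio comes out very close to 0.5, so a fully charged 2S LiPo
        // at 8.4 V reads about 4.2 V on the 0-5 V meter.
    }
}
```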
Step 3: The Robotic Arm (software)
I wanted to drive the Maestro from either a Raspberry Pi or my Mac. Pololu provides driver code only for the Arduino and Windows. I considered porting the source code to the Pi, leveraging the Pololu USB Software Development Kit. While I'd not done any C programming on the Pi, at least the system seemed like it might support the effort (there now appears to be some support for Linux). In the end, I used my Mac, in particular because I already had Parallels running Windows 8.1. At least then I could use the Maestro Control Center (Windows only) via my Mac for testing. Pololu offers an excellent user's guide for the Maestro and the Control Center.
I was able to install and use the Control Center in my MacOS/Parallels/Windows environment with no problems. The Control Center has a bit of a learning curve, but it proved to be an extremely useful tool for assisting in alignment while building the arm, determining configuration information for the servos, and testing of all sorts.
I did not, however, want to use Windows for the real robot because I did not want to write/port C code, and I already had experience in image processing (a key piece of the robot discussed later) using OpenCV in a Java environment. So, I went looking for Java support for the Maestro. In a Pololu forum, I found someone who had used the Maestro development kit to create a Java driver for MacOS. He was kind enough to give me access to his code, which leveraged the usb4java library.
I downloaded and installed the usb4java jar files into my Eclipse environment and proceeded to test it successfully. I confess that after a few hours of trying to understand the Java Maestro driver I gave up and reverse engineered that code, with much help from the USB specific code from the Pololu development kit, and wrote my own Java driver.
Tuning servo control
Once the driver worked, I started tuning the individual servo positioning. This involved leveraging the Maestro Control Center to determine the proper minimum and maximum pulse width limits. For example, the base servo in theory could rotate about 180 degrees. I concluded I really only needed 150 degrees total, but of course I wanted to make that symmetric around some theoretical 90 degree angle perpendicular to the base plate. So, I would drive the servo to achieve 90 degrees (and align the mechanical parts to get a real 90 degrees) and then move to angles + or - 30 degrees from there. Similar tuning (including mechanical) had to be done for the other servos, but using 0 degrees as a starting point and then moving to either 90 or 180 degrees as appropriate. The minimum and maximum pulse width limits got calculated by adding or subtracting (as appropriate) from the desired minimum and maximum angles to be achieved. This tuning provided a set of configuration information used to configure the Maestro each time the robot powered up.
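The tuning described above boils down to a linear mapping between angle and pulse width. A minimal sketch, with hypothetical limit values (the real limits come out of the Control Center work just described):

```java
public class ServoScaling {
    // Linearly map an angle in [minDeg, maxDeg] to a pulse width in
    // [minUs, maxUs]. All four limit values are per-servo configuration
    // determined during tuning.
    static double degreesToPulseUs(double deg, double minDeg, double maxDeg,
                                   double minUs, double maxUs) {
        double t = (deg - minDeg) / (maxDeg - minDeg);
        return minUs + t * (maxUs - minUs);
    }

    public static void main(String[] args) {
        // Hypothetical base servo: 150 degrees of travel symmetric around 90,
        // i.e. 15..165 degrees, mapped to 900..2100 microseconds.
        System.out.println(degreesToPulseUs(90, 15, 165, 900, 2100));
    }
}
```

The Maestro itself expresses targets in quarter-microsecond units, so a driver would multiply the result by 4 before sending it to the controller.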
During tuning, I discovered a potential source of trouble. Upon applying initial power to the Maestro (plugging the USB cable into the Mac), the Control Center applies a "startup" PWM signal to each enabled servo. Depending on which channels are enabled and the resulting "startup" position of the gripper tip, crashes can occur; I have a couple of bent parts to prove it. I advise caution. I determined that setting a "startup" position for the shoulder at an almost vertical angle eliminated (or at least greatly reduced) the problem.
Gripper positioning (kinematics)
Once the driver worked, I started worrying about the details of positioning the arm appropriately to pick up and drop tokens as needed. I needed a known position for the "game board" for playing tic tac toe. It had to be easily accessible to the human, and of course reachable by the arm. I also needed a known place for the robot's tokens to reside prior to play. For reasons I can't remember, I call that place the "bone yard" or just the "yard". See the first picture of where the game board and yard were placed on the base plate. I'll cover more on the actual creation and placement in another step.
The kinematics of positioning the gripper of a robotic arm can get quite complex. There are essentially two forms. With inverse kinematics (IK), one starts with the desired position of the gripper tip; with knowledge of the lengths of the segments between each joint, one can use trigonometric calculations to compute the angle required at each joint to achieve that position. With forward kinematics (FK), one starts with knowledge of the lengths of the segments between each joint and iterates the joints thru different angles to eventually converge on the desired position of the gripper tip.
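To make the IK idea concrete, here is the textbook two-link planar solution in Java. This is not the author's exact derivation (which also accounts for the vertical gripper and the wrist offset), and the segment lengths in the example are hypothetical:

```java
public class PlanarIK {
    // Two-link planar inverse kinematics: given a target (x, y) in the
    // shoulder's vertical plane and segment lengths l1 (shoulder-elbow)
    // and l2 (elbow-wrist), return {shoulderAngle, elbowAngle} in radians.
    static double[] solve(double x, double y, double l1, double l2) {
        // Law of cosines gives the elbow angle directly.
        double c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2);
        if (Math.abs(c2) > 1 + 1e-9)
            throw new IllegalArgumentException("target unreachable");
        c2 = Math.max(-1.0, Math.min(1.0, c2)); // guard against rounding
        double q2 = Math.acos(c2);              // "elbow down" solution
        double q1 = Math.atan2(y, x)
                  - Math.atan2(l2 * Math.sin(q2), l1 + l2 * Math.cos(q2));
        return new double[] { q1, q2 };
    }

    public static void main(String[] args) {
        double[] q = solve(10, 0, 8, 6); // inches; hypothetical lengths
        System.out.printf("shoulder=%.1f deg, elbow=%.1f deg%n",
                Math.toDegrees(q[0]), Math.toDegrees(q[1]));
    }
}
```

The math is exact; as the next paragraphs explain, the trouble is that the physical arm does not match the idealized segment lengths and joint angles the equations assume.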
I strongly preferred IK for getting to positions in the cells of the game board and yard. Some aspects of the design, in particular the fact that the gripper remains vertical over the target cell, simplified my IK equations. I confirmed my IK equations using an admittedly crude modeling approach. I then tested IK with the arm using the Control Center and eventually my own software. I found that position errors existed in all three dimensions, beyond the error tolerance I believed necessary (about 0.1"). I am still not certain exactly why, but I believe the positioning errors result from inaccuracies in measuring the segment lengths, rotational alignment errors during construction (it is really hard to ensure two surfaces/edges are exactly parallel when they are separated by several inches), inaccuracies in measuring the actual angle of a joint when it is supposed to be at a known angle, inaccuracies in the expected or measured position of cells relative to the arm, and so on. In any case, I had to abandon IK. FK seemed simply too slow, and in fact likely to suffer from the same physical problems as IK.
As a result of the failure of IK and the presumed failure of FK, I was forced to take an empirical approach to gripper positioning. I used IK results to get the gripper tip "close" to the center of the desired cell in either the game board or the yard. I then, in effect, used FK and manually iterated the joint angles to achieve the desired position. I recorded the resulting 3x3 array of angles for the game board cells and the 1x4 array of angles for the yard cells as constants in the code. Thus there is no calculation at runtime; those constants get used to position the gripper for any cell.
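The empirical approach amounts to a lookup table of recorded joint angles. A sketch of that structure follows; the angle values here are placeholders, since the real constants were measured on the actual arm:

```java
public class CellAngles {
    // Joint angles recorded empirically for each board cell:
    // {base, shoulder, elbow, wrist} in degrees. Placeholder values.
    static final double[][][] BOARD = {
        { {60, 40, 110, 30}, {75, 42, 112, 31}, {90, 44, 114, 32} },
        { {60, 50, 120, 35}, {75, 52, 122, 36}, {90, 54, 124, 37} },
        { {60, 60, 130, 40}, {75, 62, 132, 41}, {90, 64, 134, 42} },
    };

    // The 1x4 "yard" holding the robot's tokens gets its own table.
    static final double[][] YARD = {
        {120, 35, 100, 25}, {130, 36, 101, 26},
        {140, 37, 102, 27}, {150, 38, 103, 28},
    };

    // No runtime kinematics: just look up the recorded pose for a cell.
    static double[] anglesFor(int row, int col) {
        return BOARD[row][col];
    }
}
```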
One physical phenomenon during tuning deserves special mention. I observed some hysteresis in servo positioning (in particular with the base servo). Approaching a specific logical target position resulted in different physical positions depending on the direction of travel. As a result, the implementation assures that for any target angle for any servo, the target is always approached from the same direction. This causes some interesting looking behavior but does greatly increase positioning accuracy.
Another suspected, but not proven, source of position errors is the speed at which a servo approaches a position. To balance performance and accuracy, any movement greater than some threshold distance (target position minus current position) results in a two or three phase movement. First, if necessary, comes a fast movement to a position that assures the target is approached from the proper direction. If the approach direction is already proper, a fast movement is used to get within a few degrees of the target. Finally comes a slow movement to the target.
One final note on positioning. The critical nature of the shoulder angle was mentioned above. Recognition of this fact led to a codified order of moving the joints. The shoulder always has to be "close enough" to vertical before any other joint moves. Further, once the shoulder position is "OK" the order of servo movement to approach a target position is base, elbow, wrist, shoulder, gripper.
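The positioning rules above (always approach a target from the same direction, and split long moves into fast and slow phases) can be sketched as follows. This is a simplified illustration in Python, not the actual Java implementation; the direction, overshoot, and slow-zone constants are invented for the example.

```python
# Illustrative sketch of hysteresis-aware, phased servo movement.
# ASSUMPTIONS: APPROACH_DIR, OVERSHOOT_DEG, and SLOW_ZONE_DEG are made-up
# constants; the real values were tuned empirically per servo.

APPROACH_DIR = +1      # always arrive at the target moving in the +degrees direction
OVERSHOOT_DEG = 5.0    # how far past the target to pre-position when needed
SLOW_ZONE_DEG = 3.0    # distance of the final slow-speed approach

def plan_move(current, target):
    """Return a list of (position, speed) phases for one servo move."""
    phases = []
    # If we would arrive moving the wrong way, first overshoot past the
    # target so the final approach is always from the same direction
    # (this is the hysteresis fix).
    if (target - current) * APPROACH_DIR < 0:
        pre = target - APPROACH_DIR * OVERSHOOT_DEG
        phases.append((pre, "fast"))
        current = pre
    # Fast move to within a few degrees of the target, if far away.
    if abs(target - current) > SLOW_ZONE_DEG:
        phases.append((target - APPROACH_DIR * SLOW_ZONE_DEG, "fast"))
    # Final slow approach to the target itself.
    phases.append((target, "slow"))
    return phases
```

A move from 90 to 60 degrees would thus overshoot to 55, then approach the target from below in a fast phase and a slow phase.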
I will not claim this code is the best way to achieve the desired results, nor necessarily even a good way. But the code does work. I tried to do decent documentation, but there is never enough. Further, I have not completely removed all code related to failed designs or debugging activity, so it may be necessary to ignore some aspects.
The Java code for driving the robotic arm (really the servos in the arm) can be found in a GitHub repository. The code is organized in what amounts to an Eclipse project (PololuMaestroControl).
The model includes an interface (Servo) that represents a device independent (well mostly) servo with a few methods for setting/getting a target position, getting the actual position, and setting/getting a configuration.
The configuration class (ServoConfig) is key to the operation of the servo, and causes some device dependence to show up, as it leverages the nice features built into the Maestro that might not appear in other controllers. The configuration exposes the ability to control acceleration and speed for a channel as well as the ability to control the minimum and maximum pulse width sent to a channel in a Maestro dependent fashion. The configuration also includes device independent information in the form of a minimum and maximum angle ("degrees") that corresponds to the minimum and maximum pulse width. The latter allows the target position in the Servo interface to be in "degrees".
The servo implementation class (ServoViaMaestro) manifests the Servo interface. ServoViaMaestro uses the static class PololuMaestroMicro to access the Maestro via USB. PololuMaestroMicro exposes Maestro specific methods, but hides all of the messy details such as the nature of USB communication or Maestro constants.
At the highest level, the model includes a singleton class (ArmTTT) that represents the robotic arm. The arm has five servos, one each for the base, shoulder, elbow, wrist, and gripper. ArmTTT encodes all of the configuration constants and positioning constants mentioned above. At class initialization, the servos are configured per the constants and then optionally positioned at a "safe" location. The servos get driven in a particular order (e.g., shoulder to a near-vertical angle first) to ensure no part of the arm contacts the base plate or anything else. ArmTTT has a few public methods:
- goYard() causes the gripper to be positioned in the yard over the center (one hopes) of the indicated cell. The gripper is positioned vertically to facilitate picking a token in the yard.
- goGame() causes the gripper to be positioned in the game board over the center (one hopes) of the indicated cell. The gripper is positioned vertically to facilitate dropping a token on the game board.
- goNeutral() causes the arm to be positioned such that it does not interfere with the human player making a move. As defined in ArmTTT, this is an intermediate position between the game board and the yard; it was chosen to minimize, in theory, unnecessary movement.
- setGripper() allows opening the gripper (to the extent necessary to allow placement of the gripper jaws without interference) and closing it (to the full extent)
This simple set of methods removes the majority of arm-positioning complexity from a user of ArmTTT. The user can focus on higher level logic, such as where to move and when.
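As a rough sketch of the table-lookup idea behind those methods (a Python illustration; the names and angle values here are invented, and the real constants and goYard()/goGame() methods live in the Java ArmTTT class):

```python
# Every reachable cell maps to a pre-tuned tuple of joint angles, so no
# kinematics runs at game time -- positioning is a table lookup.
# ASSUMPTIONS: the angle values below are dummies, not the tuned constants.

# (base, shoulder, elbow, wrist) angles per cell -- invented values.
GAME_ANGLES = {(r, c): (90 + 10 * c, 45, 30 + 5 * r, 60)
               for r in range(3) for c in range(3)}
YARD_ANGLES = {i: (150, 40, 25 + 3 * i, 55) for i in range(4)}

class Arm:
    def __init__(self):
        self.joints = None            # last commanded joint angles
        self.gripper = "open"

    def go_game(self, row, col):
        self.joints = GAME_ANGLES[(row, col)]   # lookup, no calculation

    def go_yard(self, cell):
        self.joints = YARD_ANGLES[cell]

    def go_neutral(self):
        self.joints = (120, 80, 20, 45)  # between board and yard

    def set_gripper(self, open_):
        self.gripper = "open" if open_ else "closed"
```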
Step 4: The Game Board, Yard, and Tokens
Tic tac toe is played on a 3 cell x 3 cell game board. The game board is typically created simply with two vertical lines and two horizontal lines, so that it looks something like this: #. As described in a previous step, the cell size necessary for using the robotic arm is 1.75" to eliminate any possible gripper interference. Thus the simplest possible game board for the robot would appear like # with the vertical and horizontal lines 5.25" long.
Checkers is played with checkers (or tokens), so from the beginning I planned to use wooden discs as tokens. For tic tac toe the tokens have to represent an O or an X, also sometimes called a nought or a cross. I decided that since a human is involved, the tokens should actually look like an O or an X.
It is pretty easy to print the simple game board described above. It is possible to draw or paint an O and an X on tokens, or print them and paste them on tokens. That was my initial approach.
However, one of the critical aspects of the robot is the image processing (using OpenCV in Java) required to understand the move (token placement on the game board) made by the human during a game. The modest experience I've had with image processing suggests it is a lot harder than one might think, and that it is a good idea to do anything possible to make it easier.
OpenCV is pretty good at recognizing "features" such as corners, lines, and circles. I decided to add some features to the game board to help with finding the board in the captured image; I added some registration lines to create corners around the game board. I also hypothesized that humans, being typically sloppy, would, without guidance, place tokens in a way that could potentially cause failures in finding the tokens. I added some registration circles within the cells to establish boundaries within which tokens are to be placed. As a result, the game board looks like the first picture.
The so-called yard containing the robot's tokens requires a 1x4 cell "board". However, since there is no image processing involved, there is no need for the registration lines. Since a human must place the tokens in the yard prior to play, the token placement requirements are similar, and thus the yard too has registration circles. You can see the yard in pictures in previous steps.
OpenCV is also good at detecting contrasting colors. A good combination seemed to be a black O or X on a red background. Since the token is circular, the game would have red circles (containing a black symbol) on the white game board.
To better leverage feature detection, the O gets rendered as a circle. The X gets rendered as a +. See the second picture.
I created the game board, the yard, and the tokens in MacOS Pages to get the needed precision, in particular the game board at 5.25"x5.25" and tokens with a diameter of 1". Another program offering the necessary precision should work as well.
I initially printed the game board, the yard, and the tokens on my 6-year-old ink jet printer using decent paper. I concluded, after much testing, that the resulting fuzzy boundaries and poor color rendition could not be tolerated. I ended up taking the artifacts to a local print shop and got high quality laser printing on business card stock for about $2.
- wooden discs
- high quality printer paper (if you print your own artifacts)
- Using your favorite drawing tool create a game board similar to the first picture, a yard, and tokens as in the second picture. You must have reasonably precise dimensions. Note that you only need one game board and one yard. You need at least four of each token type.
- Print the artifacts. If you have a high quality printer, great. If not, printing at a professional shop seems pretty cheap.
- Cut out the game board. Place the centerline of the game board aligned with the centerline of the base servo gearbox and roughly 3" from the far edge of the base plate. I used masking tape to attach the game board to the base plate. See the third picture.
- Cut out the yard. Place the centerline of the yard on a line defined with the base servo positioned at 150 degrees. The closest cell of the yard should be 12" from the base servo axis. I used masking tape to attach the yard to the base plate. See the third picture.
- Cut out token images for at least four of each token type. I recommend getting as close to a circle as possible.
- Glue the token images onto wooden discs. I used Elmer's wood glue. Center the token images on the discs as much as possible. Let the glue dry before proceeding.
- Trim any excess paper from the tokens. My wooden discs were slightly under 1" diameter, but they were round. I used an X-acto knife and needle files to trim the excess paper to make sure the resulting tokens were round; this aids in image processing.
Step 5: The Camera Pi (electronics and Tower)
As related in the introduction, this major component performs two important functions. Primarily, it captures an image of the game board. The image is used to determine the move made by the human during a game. Secondarily, the component supplies an audible aspect of the user interface, i.e., a beeper used to warn the human that the arm is about to move or is moving; this adds a layer of safety. I felt this was necessary after seeing the damage to metal parts caused by the arm.
The reason for using a Raspberry Pi to run a camera is pretty straightforward. I already had a Pi, the camera is cheap, there is a tremendous amount of support for the combination, and I had previous experience with the combination. Other designs could work just as well.
The reason for using the same Pi to supply a beeper is less straightforward. A beeper would more naturally be included in the user interface Pi component (see below). However, by the time I decided the project really needed a beeper, the UI Pi component was already built. Mounting the additional electronics looked difficult at best. It also seemed to make sense to place the beeper where it would be the most obnoxious, i.e., on the camera tower.
I did not want to require wall power for the robot. Thus for the Pi, I decided to use a power bank. I found that 1 Amp was sufficient for the camera Pi.
To capture images of the game board, the camera has to be mounted on some sort of tower above and perpendicular to the game board (I did not want to deal with image distortion resulting from capturing images at an angle other than 90 degrees). The minimum focus distance of the Raspberry Pi camera (V1) is 1 meter. At that distance, the field of view is 1 m x 0.67 m. Previous experience with the camera suggested, however, that usable images can be captured as close as 0.5 m. Additional analysis of various arm positions and a desire to keep the tower as short as possible led to a compromise tower height of 24".
Placement of the tower had to ensure there was no interference during arm movement and that the entire game board got captured in any image (even one for checkers, with a much larger game board). This put the centerline of the tower 4" from the centerline of the base servo gearbox. The lens of the camera has to be 16" from the back of the tower. The first picture shows the overall structure of the tower. The second picture shows a closer look at mounting the Pi and the camera on the tower; the beeper is present but not visible.
- Raspberry Pi (I used a first generation B, but any model supporting the camera should work)
- USB Wifi dongle for the Pi
- Raspberry Pi case (at least the lower half)
- Raspberry Pi camera (I used a V1, but the V2 should work)
- Power bank (I used a 10 Ah unit with 1 A and 2.1 A outputs)
- USB A to USB micro cable
- buzzer (I used this, but others would certainly work)
- 2N3904 transistor
- 2.2 Kohm resistor
- small circuit board
- hookup wire
- jumper wires
- 1/8" plywood
- assorted wood screws
- assorted machine screws and nuts
- cable ties (optional)
- wooden square rods for securing the power bank [optional]
The following Actobotics parts are required:
- 585444 4.5" aluminum channel
- 585486 13.5" aluminum channel
- 585466 24" aluminum channel
- 545532 channel connector plate pair [2x]
- 585420 7.7" aluminum beam pair [2x]
- assorted 6-32 socket head screws and nuts
Build the tower (reference the first and second picture):
- Attach the 585444 to the plywood base plate so that the long axis is 4" from and parallel to the long axis of the base servo gearbox and the back edge of the 585444 is a bit over 1.5" from the edge of the base plate. Use four #4 wood screws.
- Position the 585466 vertically with the open side towards the back of the 585444. Use a pair of 545532 to connect the two pieces of channel. Use eight socket head screws.
- Take a 585420 and try to position it so that it forms about a 45 degree angle with the channel (you probably can't get too close to 45). Make sure that the ends of the beam align with a hole in each piece of channel. Attach the beam to the channel pieces using screws and nuts; don't tighten the screws. Using the first beam as a guide for placement, attach a second beam. Tighten the screws in both beams.
- Attach a pair of 545532 to the top of the 585466 using four screws. Don't tighten the screws. Position the 585486 open side down so that it aligns with the top of the 585466. Attach it to the 545532 with four screws. Now tighten all eight screws.
- Brace the joint between the 585466 and the 585486 in the same way: take a 585420 and try to position it so that it forms about a 45 degree angle with the channel (you probably can't get too close to 45). Make sure that the ends of the beam align with a hole in each piece of channel. Attach the beam to the channel pieces using screws and nuts; don't tighten the screws. Using the first beam as a guide for placement, attach a second beam. Tighten the screws in both beams.
Mount the camera and Pi:
- Cut a piece of 1/8" plywood about 3"x1". Cut/drill a hole close to one end so that the Pi camera lens fits thru the hole.
- Attach the camera to the plywood using #2 wood screws or machine screws and nuts (I used the latter). NOTE: depending on your construction, you may have to substitute a longer camera cable for the standard cable; if so do this before you attach the camera.
- Attach the plywood to the top horizontal channel (585486) so that the camera lens is 16" from the back edge of the tower. Use #6 screws and nuts.
- Find some machine screws with a low profile head and mount the bottom half of a Pi case as close to the camera end of the horizontal channel as possible, and with the USB opening towards the camera. Insert the Pi.
- Attach the camera cable to the Pi. If you wish to use the top of the case, route the camera cable thru the case top prior to attaching the cable to the Pi, then close the case.
- Plug the Wifi dongle into the Pi. Plug in the USB cable into the Pi.
- [optional] Use cable ties to secure the USB cable to the tower, down to the power bank.
Build the beeper circuit based on the circuit diagram shown in the third picture. The transistor is required to ensure the maximum GPIO pin current is not exceeded. The way I built it:
- Cut a small piece of prototype circuit board so that it fits inside the horizontal channel.
- Solder the components to the board and connect them as shown in the third picture. The power, ground, and signal wires must come up thru the channel to attach to the GPIO pins, and further must have a female connector to plug into the GPIO head.
- Attach circuit to the channel. I used a custom wood spacer and #2 machine screws and nuts.
- Route the power, ground, and signal wires up thru the channel and attach to the indicated GPIO pins.
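As a back-of-envelope check of why the transistor and 2.2 Kohm resistor keep the GPIO pin safe (assuming a 3.3 V GPIO, a ~0.7 V base-emitter drop, and a typical 2N3904 current gain of about 100 — these are assumptions, not measured values):

```python
# The GPIO pin sources only the small transistor base current; the
# transistor sinks the (much larger) buzzer current from the supply rail.
# ASSUMPTIONS: 3.3 V GPIO logic, ~0.7 V base-emitter drop, beta ~100.

V_GPIO, V_BE, R_BASE = 3.3, 0.7, 2200.0     # volts, volts, ohms (2.2 Kohm)

i_base_mA = (V_GPIO - V_BE) / R_BASE * 1000.0   # ~1.2 mA drawn from the pin
BETA = 100                                       # typical 2N3904 current gain
i_sink_mA = i_base_mA * BETA                     # current available to the buzzer

print(f"base current {i_base_mA:.2f} mA, well under the ~16 mA GPIO limit")
```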
Rather than risk the power bank slipping around, I secured it using square wooden rods. See the first picture.
- Cut a piece of square rod long enough to secure the end of the power bank without the USB ports. Notch the rod to fit around the power bank enough to hold it steady. Position the power bank "conveniently" and attach the notched rod to the base plate with wood screws.
- Cut a square rod long enough to secure the end of the power bank with the USB ports, and of the right size to allow the USB cable to plug into the power bank. The rod location should allow the power bank to be inserted and removed easily if it can't be charged in place. Attach the rod to the base plate with wood screws.
Step 6: The Camera Pi Software
I will not delve into setting up a Raspberry Pi, enabling the camera, or setting up Wifi. Those topics are covered in depth by many sources.
The picture offers a high-level description of the code running on the camera Pi. There are two independent processes (P in the picture), one for capturing images and one for controlling the beeper. Each of these processes gets started by running a shell script.
Both processes are in effect "servers" in that they listen for commands and then do something in response to the commands.
The camera process listens for commands (C1 in the picture) and acts on them. One command allows a caller to set the "white balance" for the camera. This is necessary only because of varying lighting conditions under which I did testing. This command is asynchronous. The caller does not wait for a response. Note that the shell script that starts this process accepts an argument that sets the white balance when the process starts.
Another command causes the Pi to capture an image. It does so, and then returns first the size of the image, and then the captured image itself (C2 in the picture). This is a synchronous interaction. The sender of the capture command is expected to wait for the response.
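The size-then-image exchange can be sketched with a length-prefixed socket protocol like the following (an illustrative assumption; the actual protocol is defined by image-sender-param.py and SendingServer.py in the repository):

```python
# Sketch of a synchronous capture exchange: the server sends the image size
# first, then the image bytes, so the caller knows exactly how much to read.
# ASSUMPTIONS: the b"capture" command and 4-byte size prefix are invented.

import socket
import struct
import threading

def serve_one_image(listening_sock, image_bytes):
    conn, _ = listening_sock.accept()
    with conn:
        conn.recv(16)                                      # e.g. b"capture"
        conn.sendall(struct.pack("!I", len(image_bytes)))  # 4-byte size first
        conn.sendall(image_bytes)                          # then the image

def fetch_image(host, port):
    with socket.create_connection((host, port)) as conn:
        conn.sendall(b"capture")
        size = struct.unpack("!I", _recv_exact(conn, 4))[0]
        return _recv_exact(conn, size)                     # wait for all bytes

def _recv_exact(conn, n):
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed early")
        buf += chunk
    return buf
```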
With the camera at 24" above the game board, the field of view of the camera is 24" x 18". The tic tac toe game board is only 5.25" x 5.25". That means the game board is about 6% of the full image. For this reason, I decided to capture the highest resolution possible, 2592 x 1944 pixels.
To maintain as much quality as possible in captured images, I tried both PNG (no compression) and the highest quality JPEG formats. A PNG is roughly 8 MB and a JPEG is roughly 3.5 MB. Given the speed at which a relatively low performance processor can process and transmit data, JPEG proved a better choice.
Even 3.5 MB images take a long time to transmit. I did some experiments in clipping the image on the Pi (in Python) and then transmitting only the relevant 6% containing the game board. That took even longer than simply transmitting the entire image. So the final code transmits the entire image. It is possible that with the increased computational power of the Raspberry Pi V3 for example, the situation would be different.
The beeper process has two threads (T in the picture). The command interpreter thread listens for commands to start and stop the beeper (B1 in the picture). This is an asynchronous interaction. The caller does not wait for a response.
As a result of receiving a command, the command interpreter thread sends the command to the beep thread (B2 in the picture). The beep thread is "episodic". In this case, the "episode" has a period with the physical buzzer on and a period with the physical buzzer off. This in effect produces a series of beeps rather than a continuous "buzz". Upon receiving a start command, the beep thread repeats episodes until commanded to stop. Upon receiving a stop command, the beep thread turns off the physical buzzer and stops repeating episodes.
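A minimal sketch of such an episodic beep thread, using Python threading events in place of the repository's beeper.py/episodic_thread.py classes (the timing constants and method names are invented):

```python
# Sketch of an "episodic" beeper thread: each episode is buzzer-on then
# buzzer-off, repeated while started, until commanded to stop.
# ASSUMPTIONS: method names and timings are illustrative, not the repo's.

import threading
import time

class EpisodicBeeper(threading.Thread):
    def __init__(self, buzzer_on, buzzer_off, on_s=0.02, off_s=0.02):
        super().__init__(daemon=True)
        self._beeping = threading.Event()   # set -> repeat episodes
        self._quit = threading.Event()
        self._buzzer_on, self._buzzer_off = buzzer_on, buzzer_off
        self._on_s, self._off_s = on_s, off_s

    def run(self):
        while not self._quit.is_set():
            # Wait briefly for a start command; loop again if none arrives.
            if self._beeping.wait(timeout=0.05):
                self._buzzer_on()           # one episode: buzzer on...
                time.sleep(self._on_s)
                self._buzzer_off()          # ...then buzzer off
                time.sleep(self._off_s)

    def start_beeping(self):
        self._beeping.set()

    def stop_beeping(self):
        self._beeping.clear()
        self._buzzer_off()                  # make sure the buzzer ends up off

    def shutdown(self):
        self._quit.set()
```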
The Python code for the camera and beeper processes, plus the shell scripts for starting the processes, can be found in a GitHub repository.
Note that the camera and beeper processes are written in Python 3.
The shell script armCamera.sh starts the camera process. The camera process runs the Python program image-sender-param.py. Note that the script expects the program to be in a particular folder. You can put the program somewhere else, but you must then modify the script.
image-sender-param.py accepts a parameter that sets the white balance for image capture. The program creates an instance of a helper class (SendingServer.py) to deal with all of the network interaction, i.e., waiting for a command and returning an image. image-sender-param.py only has to deal with interpreting commands and image capture.
The shell script armBeeper.sh starts the beeper process. The beeper process runs the Python program beeperServer.py. Note that the script expects the program to be in a particular folder. You can put the program somewhere else, but you must then modify the script.
beeperServer.py creates an instance of SendingServer.py to deal with all of the network interaction, i.e., waiting for commands. beeperServer.py only has to deal with turning the beeper on and off. The beeper thread is implemented in a helper class (beeper.py). That class is a subclass of another helper class, episodic_thread.py, which implements the episodic behavior described above.
Step 7: The User Interface Pi (electronics and Case)
As related in the introduction, this major component supplies the visual and tactile aspects of the user interface. A 16x2 RGB LCD is used to display text prompts, warnings, or simply information. The text can be in one of several colors, per the capabilities of the LCD. I decided that warnings needed to blink to better attract the attention of the human player.
Two push button switches allow the human to respond to prompts or indicate completion of moves, etc.
The reason for using a Raspberry Pi to run the LCD is pretty straightforward. I already had the Pi and the LCD, and the supplier of the LCD (Adafruit) offers an excellent tutorial that shows how to connect it to the Pi. Adafruit also offered a library for use with the Pi. Other designs could work just as well.
Since the power bank I used offers two output ports, I used the same power bank for the user interface Pi as well. The bank's 2.1 A port was more than adequate.
I decided to make a "case" for a couple of reasons. First, I wanted to maintain fixed positioning between the Pi and LCD so that interconnects didn't get stressed during potential movements. Second, I wanted to maximize usability in terms of viewing the display and pushing buttons; I felt that some sort of slant on the display would accomplish that. I built my "case" with wood. Other materials can of course be used. The first picture shows the resulting design.
Once I had a rough design for the case, I looked at maximizing reuse of the LCD and switches. I also needed a place to mount the LCD potentiometer. Thus I decided to use a circuit board between the Pi and the rest of the electronics.
The second picture shows an admittedly crude representation of the circuit board. At the top is the LCD. The two gray rectangles underneath the LCD represent multi-pin female connectors (created from female header strips). At the bottom is the Pi. In between are three rectangles. The bottom two represent the electrically connected pin holes on a prototype board. The top represents the +V/Gnd rails on a prototype board. At the top right are representations of the two switches and the potentiometer for contrast control of the LCD.
- Raspberry Pi (probably any model would work; I used a V1 B+)
- Raspberry Pi case
- USB Wifi dongle for the Pi
- 16x2 LCD from Adafruit [NOTE: the current product is a new version of what I used]
- Pushbutton (momentary contact, SPST) [2x]
- small circuit board (I used this one)
- header strips (male/female)
- heat shrink tubing
- spacers and nuts and screws
- 1/8" plywood
- 1/4" board
- 1/4" and 1/2" square wood rods
- assorted machine screws and nuts
- assorted wood screws
Build the case. This assumes building a wooden case like mine.
- Cut a piece of 1/8" plywood large enough for a base to hold the Pi case, the circuit board, and supports for the LCD. Note in particular you may need more space than you think due to header pins on the LCD. Mine is 5.5"x4", but it probably should have been bigger; this would have allowed the beeper to be added.
- Cut two pieces of 1/4" square wooden rod the length of the base (e.g. 5.5") and glue them to the underside of the base. This gives room for screw/nuts on the underside.
- Decide where to mount the Pi case and the circuit board so that there is enough room for wiring between the Pi and the circuit board and the circuit board and the LCD. I placed the Pi case right at the back edge of the base and the circuit board 3/8" from the edge of the Pi case. Drill holes in the base to attach the case to the base.
- Cut a piece of 1/8" plywood large enough for a face to hold the LCD and switches. Mine is 4"x3". Cut a hole so the LCD slips thru the face. Drill mounting holes appropriately. Drill holes of a size appropriate to mount the pushbuttons. I positioned the buttons so they were at the left and right edge of the display.
- [Optional] Paint the face.
- Determine an "optimal" angle for mounting the face on the base; I chose 45 degrees. This is mostly about ease of viewing the text while playing a game. Also important is the height of the face above the base. This influences the difficulty of making connections to the LCD and buttons; I chose 1".
- Cut two pieces of 1/4" board 1" wide so that one end is cut square and the other end has the angle determined in the previous step.
- Glue the angled pieces to the front of the base (away from the end where the Pi is mounted).
- Drill mounting holes in the face and the angled pieces so that wood screws can attach the face to the angled pieces glued to the base. Do not yet attach the face.
Build the circuit board.
- Make the connections between the Pi GPIO pins and the circuit board. I used some ribbon cable with female headers to simplify plugging into the Pi. The length of the wires depends on your design.
- Make the connections from the circuit board to the LCD. I used a female header into which to plug the potentiometer. I used eight pin and six pin female header connectors to plug into header pins on the LCD.
- Make the connection for the switches. I put two position female headers on the circuit board.
- Wire the switches. I used a two pin male header.
- Plug in the LCD potentiometer.
- Attach the bottom of the Pi case to the base using machine screws and nuts.
- Insert the Pi into the case.
- Plug in the Wifi dongle.
- [Optional] Attach the top of the Pi case.
- Using spacers as necessary, attach the circuit board to the base using the appropriate technique. I used metal spacers with the corresponding nuts and screws.
- Plug the connecting wires into the Pi GPIO pins. See the second picture.
- Mount the LCD in the face using machine screws and nuts.
- Mount the switches in the face.
- Plug the circuit board connectors into the LCD.
- Plug the switches into the circuit board.
- Mount the face on the angled pieces on the base using wood screws.
Step 8: The User Interface Pi (software)
I will not delve into setting up a Raspberry Pi or setting up Wifi. Those topics are covered in depth by many sources.
The picture offers a high-level description of the code running on the user interface Pi. There is a single process (P in the picture) that is in effect a "server": it listens for commands and does something in response. The process has two threads (T in the picture). The command interpreter thread listens for commands to display a UI message on the LCD (U1 in the picture); this initiates a synchronous interaction. The command interpreter displays the message. If the command requests blinking, the interpreter initiates the blink thread to blank and un-blank the display periodically (a blank/un-blank pair is one episode). If the command requests waiting for a button press, the interpreter waits before returning to the caller; otherwise it returns immediately.
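The command handling might be sketched like this (the command format and helper objects are assumptions for illustration; the real logic is in uiServer_5.py):

```python
# Sketch of the UI server's command interpreter. A command carries the
# message text, a color, a blink flag, and whether to wait for a button
# press before replying. ASSUMPTIONS: the dict format and the lcd/blinker
# method names are invented for this example.

def handle_command(cmd, lcd, blinker, wait_for_button):
    """cmd: e.g. {"text": "...", "color": "...", "blink": bool, "wait": bool}"""
    lcd.display(cmd["text"], cmd.get("color", "white"))
    if cmd.get("blink"):
        blinker.start_blinking()   # blank/un-blank episodes until told to stop
    else:
        blinker.stop_blinking()
    if cmd.get("wait"):
        return wait_for_button()   # synchronous: reply only after a press
    return None
```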
The Python code for the user interface process, plus the shell script for starting the process, can be found in a GitHub repository.
Note that the user interface process is written in Python 3. This proved problematic because the Adafruit library for the LCD was written in Python 2. I found the source code for the library (https://github.com/adafruit/Adafruit-Raspberry-Pi-Python-Code/tree/master/Adafruit_CharLCD) and cloned it. I was able to make it work immediately with a Python 2 program. I then iterated thru modifying the library to work in Python 3, making sure GPIO could be shared, and removing dependencies on other Adafruit libraries that were unneeded for my project. The result of that effort is the class in testLCD.py.
The shell script armUI.sh starts the user interface process. The process runs the Python program uiServer_5.py. Note that the script expects the program to be in a particular folder. You can put the program somewhere else, but you must then modify the script.
uiServer_5.py creates an instance of a helper class (SendingServer.py) to deal with all of the network interaction, i.e., waiting for a command and returning responses. uiServer_5.py only has to deal with interpreting commands, displaying messages, initiating message blinking, and waiting for button presses.
uiServer_5.py also sets up a blinker thread (blinker.py) using a class that subclasses the Episodic_Thread (episodic_thread.py).
Step 9: Image Analysis
As described in the introduction, a key aspect of the robot is the ability to analyze an image of the game board and determine where the human player has placed a token during a game. As mentioned above, I wanted to use OpenCV in a Java environment to perform the image processing. I also wanted to do development in Eclipse. This document shows how to get OpenCV installed in Eclipse.
Remember the game board consists of what amounts to a 3x3 array of cells. The game board (see the first picture) has "registration marks" along the outside borders. At any time, each cell can contain:
- nothing, meaning neither the human nor robot has placed a token in that cell
- a +, meaning the human has placed a token in the cell
- an O, meaning the robot has placed a token in the cell
It is pretty simple to detect the difference between an empty cell and a non-empty cell using circle detection. I tried many different approaches to detect the difference between a + cell and an O cell, including corners, lines, circles, and color. My experimentation showed that the most reliable approach was to detect first the registration circle, then the disc circle, and then the presence or absence of the O's circles inside the boundary of the disc.
Analysis proceeds as follows:
- Crop the full image from the camera (see second picture) so that the remaining image contains only the game board with some border; the resulting image is roughly 625x625 pixels (see third picture).
- Find the corners that delineate the cells in the game board (16 corners).
- Using the corner information, crop out each of the 9 cells, leaving only the registration circle and any token on a "white" background. The fourth, fifth, and sixth pictures show the appearance of a cell containing a + token, a cell containing an O token, and a cell containing no token. Note the blue tint in the "white" background in the third thru sixth pictures; this indicates a sub-optimal white balance for the camera.
- Detect the content of each cell using circle detection
- find the registration circle (always present)
- find the circular boundary of the token inside the registration circle, if such exists; this discriminates between an empty and non-empty cell
- when a token exists, look for the interior circle of an O token; this discriminates between a + and an O token
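Stripped of the OpenCV circle detection itself, the classification rule reduces to counting the nested circles found in a cell. A sketch (assuming, for illustration, that the detector returns the radii of concentric circles with the registration circle first):

```python
# Sketch of the cell-classification rule described above. In the real Java
# code the circles come from OpenCV circle detection; here we just assume a
# list of detected concentric circle radii per cell: the registration circle
# is always present, a token adds its disc boundary, and an O token adds its
# interior circle as well.

def classify_cell(radii):
    """radii: detected circle radii in one cell, registration circle first."""
    if len(radii) <= 1:
        return "empty"      # only the registration circle was found
    if len(radii) == 2:
        return "+"          # token disc found, but no interior circle
    return "O"              # token disc plus the O's interior circle
```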
The Java code for image analysis can be found in a GitHub repository. The code is organized in what amounts to a package in an Eclipse project (project ArmPrime, package org.gaf.ttt.image_analysis).
The class TicTacToeAnalyzer, given a captured image, performs the analysis as described above. It uses a helper class, CellTypeDetectorCircle, to perform the details of cell content detection.
The second helper class, CellTypeDetectorColor, shows a different approach that looks at the color at the center of the token. In theory, it should be easy to discriminate red from black, but there are many factors that make it less reliable, like lighting and the resulting color saturation (e.g., how does one know that the average level of red is "much higher" than the average level of green and blue?). You may find the magic to make this work.
Step 10: Game Playing Logic
The goal for the robot is to never lose, even when always moving second. There are many sources for strategies that achieve that goal; I found this one particularly helpful.
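As a sketch of how such a strategy is typically structured, the following Java fragment implements a simplified priority list (win, block, center, corner, side). It omits the fork and block-fork rules that a full never-lose strategy requires, and the class and board encoding are illustrative assumptions rather than the actual planRobotMove logic.

```java
// Simplified rule-based move chooser: try to win, then block the
// opponent's win, then prefer the center, a corner, and finally a
// side. NOTE: the fork/block-fork rules of a complete never-lose
// strategy are omitted here, so this sketch alone does not guarantee
// a draw; it only illustrates the structure of such a strategy.
// Board: 9 chars indexed 0-8, 'X', 'O', or ' ' for empty.
public class MoveChooser {

    private static final int[][] LINES = {
        {0,1,2},{3,4,5},{6,7,8},   // rows
        {0,3,6},{1,4,7},{2,5,8},   // columns
        {0,4,8},{2,4,6}            // diagonals
    };

    public static int chooseMove(char[] board, char me, char opponent) {
        int win = completingCell(board, me);
        if (win >= 0) return win;                  // rule 1: win now
        int block = completingCell(board, opponent);
        if (block >= 0) return block;              // rule 2: block opponent's win
        if (board[4] == ' ') return 4;             // rule 3: take the center
        for (int c : new int[]{0, 2, 6, 8})        // rule 4: take a corner
            if (board[c] == ' ') return c;
        for (int s : new int[]{1, 3, 5, 7})        // rule 5: take a side
            if (board[s] == ' ') return s;
        return -1;                                 // board is full
    }

    // Returns the cell that completes a line of three for 'player', or -1.
    private static int completingCell(char[] board, char player) {
        for (int[] line : LINES) {
            int count = 0, empty = -1;
            for (int cell : line) {
                if (board[cell] == player) count++;
                else if (board[cell] == ' ') empty = cell;
            }
            if (count == 2 && empty >= 0) return empty;
        }
        return -1;
    }
}
```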
The Java code for game playing logic can be found in a GitHub repository. The code is organized as two packages in an Eclipse project (project ArmPrime, packages org.gaf.ttt.common and org.gaf.ttt.tictactoe).
The class TicTacToeGamePlayer contains the logic to play the game. The class has public methods for recording the move by the human (makeOpponentMove), getting the state of the game (getBoardState), and checking for a winner (checkForWinner). There are a few private methods, the most important of which is planRobotMove; it contains the logic for determining the robot's move based on the current game state.
TicTacToeGamePlayer leverages a helper class, TicTacToeGameBoard, which is also used by other classes. The class holds the game board state. It has public methods to record a move in a particular cell (setCell), find an empty cell (findEmptyCell), and others.
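A minimal sketch of what such a board helper might look like, assuming a 9-cell character encoding; the method names setCell and findEmptyCell come from the description above, while everything else is an illustrative assumption rather than the actual TicTacToeGameBoard implementation.

```java
// Minimal, hypothetical sketch of a game board helper. Only the
// method names setCell and findEmptyCell are taken from the text;
// the cell encoding and return conventions are assumptions.
public class GameBoard {

    public static final char EMPTY = ' ';
    private final char[] cells = new char[9];

    public GameBoard() {
        java.util.Arrays.fill(cells, EMPTY);
    }

    // Records a move; returns false if the cell was already occupied.
    public boolean setCell(int cell, char token) {
        if (cells[cell] != EMPTY) return false;
        cells[cell] = token;
        return true;
    }

    public char getCell(int cell) {
        return cells[cell];
    }

    // Returns the index of the first empty cell, or -1 if the board is full.
    public int findEmptyCell() {
        for (int i = 0; i < cells.length; i++) {
            if (cells[i] == EMPTY) return i;
        }
        return -1;
    }
}
```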
Step 11: Putting It All Together
The picture shows the overall architecture of the game playing code. The Mac component provides overall control for playing tic tac toe. It interacts with the WiFi-connected Camera Pi and UI Pi components that were described earlier.
The Mac component runs a single process (P in the picture) with two threads (T in the picture). The main thread provides the overall game playing logic and orchestration of the robotic arm (the robotic arm code was described earlier and is not shown on the architecture diagram), the user interface, and the camera. The main thread interacts with the Camera Pi indirectly, via the image digester thread, which communicates directly with the Camera Pi. The use of two threads allows transmission of the captured image to take place in parallel with arm motion, improving overall performance and the user experience.
The main thread
- Initializes various components and classes, e.g., the robotic arm and an image analyzer. NOTE: It creates an instance of the game board helper.
- Moves the arm to a neutral position to allow the human to set up for a game. This means clearing the game board of tokens and placing the robot's tokens in the yard.
- Loops while playing a series of games:
- Initializes an instance of game playing logic (NOTE: it creates another instance of the logical game board)
- Loops while playing a single game:
- prompts the human to move
- requests an image (and analysis) via the image digester thread; analysis of the captured image detects the + and O tokens, blank cells, and even analysis errors; this produces a physical state of the game board
- picks up a robot token from the yard (in parallel with image transfer and analysis)
- compares the game boards from the game player (logical state) and the image digester (physical state) to determine the move made by the human; it can also detect other forms of errors, such as mistaking one token for another, or cheating by the human
- determines the robot move (cell into which to place the robot token); this updates the logical state of the game board
- drops the token held by the arm in the appropriate cell to make the robot's move
- checks for a win or a draw
- displays the appropriate prompt to the human (win, draw, continue to move)
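The board comparison step in the loop above can be sketched as follows. The class and its error handling are assumptions for illustration; the actual code compares the logical state held by the game player with the physical state produced by image analysis.

```java
// Hypothetical sketch of the board comparison step: given the logical
// board state (what the game player believes) and the physical state
// (what image analysis reports), find the single new human token, and
// flag anything else as an error (misread token, moved token, cheating).
public class BoardComparator {

    // Both boards are 9 chars: 'X' (human), 'O' (robot), ' ' (empty).
    // Returns the cell index of the human's new move, or throws if the
    // physical state is not "logical state plus exactly one new 'X'".
    public static int findHumanMove(char[] logical, char[] physical) {
        int move = -1;
        for (int i = 0; i < 9; i++) {
            if (logical[i] == physical[i]) continue;
            if (logical[i] == ' ' && physical[i] == 'X' && move < 0) {
                move = i;                       // candidate new human token
            } else {
                // a token vanished, changed type, or a second one appeared
                throw new IllegalStateException("inconsistent board at cell " + i);
            }
        }
        if (move < 0) throw new IllegalStateException("no new move found");
        return move;
    }
}
```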
The image digester thread, after initialization, runs in a loop:
- Waits for a command to capture an image
- Sets its state to 'running' so that the main thread knows what is happening
- Waits for the size of the image to be returned
- Sets its state to 'got size' so the main thread knows it can now move the arm
- Waits for the image data to be returned
- Sets its state to 'got image'
- Analyzes the image, which produces a physical description (state) of the game board
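The thread interaction can be sketched as below. This is an illustrative state machine only: the real ImageDigester runs in a long-lived loop and talks to the camera Pi over a socket, whereas this sketch spawns a short-lived worker per request and elides all of the communication and analysis.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of the image digester's state machine, showing
// how the main thread can start the arm moving as soon as the digester
// reports GOT_SIZE, while image transfer and analysis continue in
// parallel. All names and the per-request worker are illustrative.
public class DigesterSketch {

    public enum State { IDLE, RUNNING, GOT_SIZE, GOT_IMAGE, ANALYZED }

    private final AtomicReference<State> state = new AtomicReference<>(State.IDLE);
    private Thread worker;

    public State getState() { return state.get(); }

    // Main thread calls this to request capture and analysis.
    public void requestImage() {
        worker = new Thread(() -> {
            state.set(State.RUNNING);    // capture command sent
            // ... wait for the image size from the camera ...
            state.set(State.GOT_SIZE);   // capture done; the arm may now move
            // ... wait for the image data ...
            state.set(State.GOT_IMAGE);
            // ... analyze the image into a physical board state ...
            state.set(State.ANALYZED);
        });
        worker.start();
    }

    // Main thread blocks here until analysis is complete.
    public void awaitAnalysis() {
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```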
The Java code for the Mac component can be found in a GitHub repository. The code is organized as two packages in an Eclipse project (project ArmPrime, packages org.gaf.ttt.common and org.gaf.ttt.master).
The main thread in the Mac component runs the Java program TicTacToeRobot. It implements the overall logic described above. It uses a helper class SocketCommunicator to interact with the user interface Pi and the camera Pi (beeper).
The image digester thread runs the class ImageDigester. It implements the image processing logic described above. It uses a helper class SocketCommunicator to interact with the camera Pi (image capture).