Introduction: Pointillist Painting Robot Arm
It may not be a Da Vinci or Kahlo, but this painting robot packs quite the artistic punch for its size. In this Instructable, I'll show you how I went from concept to completion and built a painting robot arm using an Intel Galileo, a camera, and a handful of electronic parts. I'll share the techniques I used so you can build your own 4-degree-of-freedom-of-expression robot.
Step 1: Parts and Materials
Intel Galileo (I used a revision 1 model, but Gen 2 is out!)
uCam-II (raw/JPEG TTL serial camera module)
(4x) standard servo
5V 2.5A power supply (included with Galileo boards)
5V 4A power supply
(2x) SPST switch
momentary push-button (panel-mount)
10 K ohm linear potentiometer (panel-mount)
1/4" D knob
(11x) 10mm diffused white LED
(10x) M12 bolt
(12x) M12 washer
(14x) M12 nut
(8x) M5 screw (servo mounting)
(8x) M4 screw (assembly)
(8x) M3 screw (electronics mounting)
(8x) M2 screw (electronics mounting)
clear shot glass
(2x) 24" x 36" x 1/4" plywood sheet
(2x) white spray paint/primer
heat shrink tubing
stranded wire (22 gauge)
Old-fashioned Dymo label printer
Step 2: How to Make a Painting Robot
All files (Galileo code, Processing code, vector paths, and .stl files) are included on this page so you won't have to search for them later.
This step will cover the broad concepts as they apply to the system as a whole, while the following steps will go into the initial design and final construction of the various sub-assemblies. This is a really big project: physically small, but conceptually huge. On a scale from one to ten in order of complexity, I rate this a solid nine. In order to break this project down into achievable tasks, I first set clear goals and design constraints to make it attainable. The very first task I gave myself was to define the overall project goal:
make a robot that paints
This leads to two questions: What kind of robot? What kind of painting?
make a robot arm that paints acrylic on an upright canvas
What kind of robot arm? How is the canvas held? And so on.
make a 4 degree of freedom arm that takes a photo and then paints the image on the canvas
I continued down this line of questioning until I couldn't reasonably reduce the design further. After some time, I came to the following design constraints and goals (in no particular order):
- fits on a desk
- has 4 degrees of freedom
- does not require any adhesive to assemble
- accepts an ordinary canvas
- canvas is removable
- takes a photo
- paints said photo
- accepts an ordinary paint brush
- brush is removable
- paints using a single color
- retrieves paint from a palette
- cleans itself off in a water cup
- runs on a microcontroller
- uses a minimum number of power supplies
- uses a minimum number of electronic components
- uses inexpensive motors
- has a minimal user interface
- has easily accessible electronics
While there are many robots that have been used to make paintings (you can even buy a watercolor painting robot kit!), they often take more mechanical postures: polar plotters, delta arms, XY gantry plotters, multi-degree-of-freedom arms that paint on horizontal surfaces, etc. I wanted to make an industrial-styled machine that resembled the more organic form of a traditional portrait painter working at a canvas.
Step 3: Designing a Robot Arm
Before designing the other systems, I planned out the mechanical aspects of the robot, since the arm itself dictates the requirements for most of the other components. I browsed around and looked at many different robot kits and even a few professional robot arms. I then chose to design and build my own arm "from scratch" for several reasons: it was cheaper, I wanted to learn more about the design and construction of a simple arm, and it allowed me to have full design control, which would make the maths necessary for motion planning less difficult to solve.
I decided upon a 4 degree-of-freedom design as this is the minimum arrangement necessary to paint the upright canvas while still allowing the arm to dip the brush in the paint and water cup, which are both along the same plane as the base of the robot. With this in mind, I made a fairly detailed model of the machine using Autodesk's Fusion 360 (which is a great program, free for use for hobbyists while still having a ton of features!).
In order to design the components that would attach to the motors, I first took very fine measurements of one of the servos and created a 3D model. I decided on having two struts extending from each motor, as this would prevent most twisting as the arm moved. I have access to some very accurate 3D printers at work, so I was able to design the struts as direct servo horns and caps that replace the base cover of each motor. I wanted to learn more about the design of robot arms, so using the tools I had made the most sense in my case. When making your own arm, you really just need to know the dimensions and constraints of your motors. I designed the struts to be fairly short, which serves two purposes: reducing the load on the motors and reducing the time to print each individual piece. The struts have triangular cutouts throughout, saving material without sacrificing structural integrity. Having complete design control also allowed me to create the arm such that the tip of the brush (my end effector) is completely centered on my base axis. This makes the mathematics for controlling the arm less complex (more on this later).
I designed the frame of the robot to be made from laser-cut pieces of 6mm plywood; as a result, the frame consists entirely of flat pieces that have holes for structural assembly and component mounting. To get the paths from my model, I selected the individual frame objects and exported them as paths. The frame is designed to accommodate a 9" x 12" x 1/4" canvas.
Step 4: Designing the Base
The base is the simplest part of the mechanical structure, but provides many valuable functions. Four small slots toward the rear of the top plate allow the easel assembly to slide into place. The holes have no offset, so the easel fits snugly, preventing any lateral movement; the weight of the easel and the friction of its tabs in the slots keep it from popping out. The support ring towards the front of the plate is secured by four M12 bolts and elevated by M12 nuts. The ring forms a thin lip just below the base disk of the arm, which keeps the platform from wobbling without creating too much friction for the base servo to overcome as it rotates. Four holes toward the front side of the top plate allow the attachment of the miniature palette and water cup holder. These accessories bolt into place with four M4 screws, keeping them modular should I later decide to swap them out or move them.

Below the face plate are seven wooden spacer plates that slot into matching rectangular holes around the edge and interior of the base plates. These create a gap big enough for the base servo to sink down, aligning the base of the arm closer to the base of the canvas, and they hide part of the servo cables. The arm cables run down into a large hole before the canvas and exit through an equal hole inside the easel tower. Finally, the bottom plate attaches to the spacer plates. Six M12 bolts and nuts, with a washer each above the face plate and below the base plate, tighten the base assembly together. These add a decent amount of weight, which helps counteract the movement of the arm as it swings around (also, they look cool).
Step 5: Designing the Easel
I designed the easel box to serve several purposes. Aside from holding a canvas at a fixed distance and angle from the arm, it provides a mounting space that conceals the electronics. The Intel Galileo and PWM driver board are bolted to the rear side plate with M3 and M2 machine screws, respectively. The forward-facing plate provides a mounting space for the user interface electronics and the camera. The camera is attached via four M2 screws toward the top. The holes for the LEDs are slightly inset so that they pop into the panel with a friction fit. The user button and knob are panel-mount, so they pop into their respective holes and each tighten with a nut. The two power switches are recessed and mounted upside down so they are easy to turn on but hard to shut off accidentally (which would harm the robot). The rear of the easel is a door with a hinge towards the top.
Step 6: Hardware: Arm Assembly
Here you can see the up-close assembly of the arm itself, which, in addition to the servomotors, consists of ten custom 3D-printed parts. I designed the arm such that it, too, is simply bolted and screwed together. I replaced each end cap of the standard servos with a strut piece, allowing me to reuse the original screws to fasten them to the servos. The tip of the arm has a hole that accepts a brush up to 15mm in diameter. The brush is fastened by three screws that tighten into the shaft, each offset by 120 degrees. The small rectangular cross braces are fastened to the struts by M3 screws on either end.
Step 7: Design Overview: Electronics
The Galileo is the heart of this system, not only because it runs the show, but also because it is the largest individual electrical component. I designed the electrical system to use male-to-female header connections to join nearly every component or module. With this scheme I was able to minimize the electronic footprint (which usually manifests as a large protoboard PCB) and allow for quick assembly and rewiring during setup and testing of the individual components. Ultimately, I did not use all of the assembled components in the final run of the robot.
The attached layout is more block diagram than true schematic, since most of the components used are complete modules (the Galileo, PWM driver, camera, and servos). The discrete components I chose aren't really essential to the operation of the robot, although they did make debugging a bit more visually pleasing.
The system runs off of two 5-volt power supplies: one for logic and one for the motors. Since the Galileo would be doing a lot of precise processing, any electrical noise from the motors wouldn't be acceptable. The camera is fairly low power and each LED has a 1K ohm current-limiting resistor, so the logic power draw is well within the 2.5A unit supplied with the Galileo. The motors get a beefy 5V 4A supply, so there's almost no chance of them running low on current.
The main "user interface" really only consists of a potentiometer and a momentary pushbutton. I chose a rather beefy panel mount button since it made a solid click when pressed. The potentiometer is useful for calibrating the motor positions before the final program.
Step 8: Design Overview: Software
The graphic above lists the ultimate process of interaction between the user pressing the button and getting a painted photo. Of course, this is vastly oversimplified, but it covers the core of what the machine does. In order to reach this point, we'll need to run three programs: motor control capture, target point pre-processing, and the main run-time code. I heavily commented all of my code, so feel free to dig into the details of each program.
Motor Control Capture:
I used regular hobby servos, which means I couldn't receive any direct feedback about their position; additionally, it's not possible to align the output gear perfectly to the horn. We'll need to find the pulse lengths that correspond to each desired servo position, visually confirming that the servos move properly.
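To make the calibration idea concrete, here's a minimal Python sketch of the linear angle-to-pulse mapping (the real project does this in the Galileo sketch through the PWM driver; the pulse range below is a typical hobby-servo value, not my calibrated numbers, which you would find by nudging the pulse length and watching the horn):

```python
# Sketch of the angle-to-pulse mapping used during motor calibration.
# The min/max pulse widths are typical hobby-servo defaults, not the
# calibrated values for any particular motor.

def angle_to_pulse(angle_deg, pulse_min_us=500.0, pulse_max_us=2500.0,
                   angle_min=0.0, angle_max=180.0):
    """Linearly map a joint angle (degrees) to a servo pulse width (microseconds)."""
    span = (angle_deg - angle_min) / (angle_max - angle_min)
    return pulse_min_us + span * (pulse_max_us - pulse_min_us)
```

During calibration, you sweep the pulse width, note where each joint actually lands, and then adjust the endpoints per motor, since no two horns press onto the output gear at exactly the same angle.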
Target Point Pre-Processing:
The math required to compute the desired positions of the robot arm is very memory intensive and requires a full desktop computer. You can read more about this in the Simulation step.
Main Run-Time Code:
Once we've run through the two previous programs, the Galileo is capable of running through its full painting sequence. This code waits for the user to press the trigger button, whereupon the system takes a photo, processes said photo, and then uses the processed data to determine a motion path sequence to paint the pixels on the canvas.
Step 9: Two Dimensional Forward Kinematics
In order to make the robot paint on the canvas, we'll need to figure out how to make it move in an acceptable way; that is, extend the arm such that the brush tip is neither too far away nor trying to reach through the canvas. We move the arm by sending pulses (that correspond to angles) to the motors, but what angles do we choose, and how do we know where the arm and brush are? These questions lead us into the fascinating world of kinematics. According to the omniscient web entity Wikipedia:
Kinematics is the branch of classical mechanics which describes the motion of points, bodies (objects) and systems of bodies (groups of objects) without consideration of the causes of motion.
Now, there are two kinds of kinematics: inverse and forward. Inverse kinematics is the more useful of the two, as it allows us to take a given point and then determine the motion of a body (just the arm in this case) required to reach said point. Simple, right? Unfortunately not. For any given inverse kinematic (IK) equation, there may be many or no solutions to a given problem, and the complexity of the equation increases quite a bit with each additional degree of freedom given to the system. Solving IK equations requires a strong knowledge of linear algebra, and there are several different ways to implement the math, too. This is a bit overwhelming, so we'll stick to the more palatable forward kinematic approach. Forward kinematics allows us to determine the position of points of the body in space, given the position of the individual joints. Since we can define the angular position of the motors, we can determine where in 3D space the tip of the brush is.
Another benefit of FK is that we'll only need a good understanding of trigonometry to solve the equations. Before jumping into full 3D space, let's look at determining the position of a single point, given a single angular input. To begin, let's draw a point A at the XY origin (0, 0). Point A represents the axis of rotation for motor A (the shoulder). The strut that extends from motor A to the shaft of motor B is a fixed length; we'll call this line segment L1. So how do we find the location of point B, given the angle thetaA? Using L1 as the radius of a circle about point A, we can find the Cartesian coordinates of point B with X equal to L1 x cos(thetaA) and Y equal to L1 x sin(thetaA). We'll want to work in Cartesian coordinate space, since the canvas is ultimately a set number of points on a plane. Given point B and thetaB, we can now find point C. For the X value of point C, we multiply L2 by the cosine of (thetaA + thetaB) and then add this value to the X value of point B. For the Y value of point C, we multiply L2 by the sine of (thetaA + thetaB) and then add this to the Y value of point B. This pattern extends to find point D (the tip of the brush), the equation for which is shown in the third graph.
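The chain of sums above can be written out as a short Python sketch (the project itself runs this math in Processing; the link lengths here are placeholders, not my arm's measurements):

```python
import math

# 2D forward kinematics for the three-link arm, following the
# derivation above: each joint angle is accumulated, and each link
# adds (L * cos, L * sin) to the previous point. L1-L3 are
# placeholder link lengths in millimeters.

def forward_2d(theta_a, theta_b, theta_c, L1=80.0, L2=80.0, L3=60.0):
    """Return points B, C, D as (x, y) tuples, given joint angles in degrees."""
    a = math.radians(theta_a)
    ab = a + math.radians(theta_b)           # angles accumulate down the chain
    abc = ab + math.radians(theta_c)
    bx, by = L1 * math.cos(a), L1 * math.sin(a)
    cx, cy = bx + L2 * math.cos(ab), by + L2 * math.sin(ab)
    dx, dy = cx + L3 * math.cos(abc), cy + L3 * math.sin(abc)
    return (bx, by), (cx, cy), (dx, dy)
```

For example, with the shoulder straight up (thetaA = 90) and the other joints at zero, the brush tip D lands directly above the origin at a height of L1 + L2 + L3.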
The final image you see above is the bulk of my notes while figuring this out. There are many articles about kinematics online, however, these often jump immediately into physics and more complex explanations for calculating this data. We're making an artistic robot, not a precise assembly line work-bot, so let's keep this simple!
Step 10: Three Dimensional Kinematics
While the upper arm only moves along a 2D plane, the base servo rotates all of those points into 3D space. In order to calculate the new coordinates of our joint points, we'll need to flip our perspective. Looking from the top down, the arm extends positively along the X axis. We've now got a new polar space where the radius of the circle for each point is equal to the X coordinate of that point on the XY plane. We'll simply redefine the X value as the cosine of thetaBase multiplied by the length L(N)Z (N being the point we want to find), and we can find the Z value by multiplying the sine of thetaBase by the length L(N)Z. With everything put together, given angles thetaBase, thetaA, thetaB, and thetaC, we can now find the 3D coordinate of point D. This completes our simple forward kinematic math. Now to put it to use...
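The base rotation boils down to one small function; here is a Python sketch of it (again illustrative, not the project's actual Processing code):

```python
import math

# Rotating a planar arm point into 3D with the base servo, as
# described above: the point's X coordinate becomes the radius of a
# circle around the vertical axis, and Y (the height) is unchanged.

def rotate_by_base(x, y, theta_base_deg):
    """Map a planar point (x, y) to 3D (x, y, z) given the base rotation in degrees."""
    b = math.radians(theta_base_deg)
    return x * math.cos(b), y, x * math.sin(b)
```

At thetaBase = 0 the point stays where the 2D math put it; at 90 degrees all of its original reach swings into the Z axis, with the height untouched.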
Step 11: Creating a Simulation
I'm a pretty visual person, so once I needed to graph in three dimensions, I turned to Processing to help me see the arm and run the pre-processing math to figure out the shapes the arm needed to assume in order to paint the canvas. The simulator does two things: calculate the coordinates of our target canvas points and determine what angles are valid for the arm to reach those points (which are definite values we can send to the motors to assume the desired shape).
Defining the Canvas
In reality, the canvas is a fixed boundary within a plane. We need to define fixed points within that boundary so that we come to a finite total of points to paint. Since the image we request from the camera is 80 by 60 pixels, let's constrain our target area to 60 by 60 points spaced 3 millimeters apart, for a total of 3600 possible targets for the tip of the brush. Even though there are many points, we only need to calculate 60 coordinates for each axis. Why? The "rows" in our canvas area share the same X and Y values, but have different Z values. The "columns" in our canvas area all have the same Z value, but different XY values. With all of this in mind, we can determine all possible target points with only 60 values per coordinate. The exact details of this are more clearly explained in my Processing sketch.
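As a rough Python sketch of that idea, the whole grid collapses into two 60-entry lists; any target is just a pairing of one entry from each (the origin offsets below are placeholders, since the real values depend on where the easel sits relative to the arm):

```python
# Building the canvas target grid: 60 x 60 points at 3 mm spacing.
# ORIGIN_Y is a hypothetical height of the bottom row; the columns
# are centered on the arm's base axis.
SPACING = 3.0                              # mm between neighboring targets
GRID = 60                                  # points per side
ORIGIN_Y = 40.0                            # placeholder bottom-row height (mm)
ORIGIN_COL = -(GRID - 1) * SPACING / 2.0   # center the columns on the axis

# Only 60 values per axis are needed: every row shares the column
# offsets, every column shares the row heights.
col_offsets = [ORIGIN_COL + i * SPACING for i in range(GRID)]
row_heights = [ORIGIN_Y + j * SPACING for j in range(GRID)]
```

Storing 120 numbers instead of 3600 coordinate pairs is a small thing on a desktop, but the same trick keeps the lookup table small once it lands on the Galileo.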
Learning How to Move
Now that we have a defined array of canvas points, how can we move the arm in such a way that it reaches those points? This is where the compromise of choosing forward kinematics leaves us with more work than IK would have. Since we know the location of each point, we really only need to accurately move the arm to the point, not precisely. This boils down to accepting a small amount of error between where the brush tip is and where the actual target point is. Cycling through all possible configurations of the arm at a set angular resolution, we can check whether the distance between our brush point and target point is below a certain threshold; if so, then we're close enough to say that the robot is actually reaching its goal.

Although we'll ultimately move the arm in one-degree increments, this is too fine a resolution to test the arm, since the number of floating point calculations would be in the millions, i.e. too much work for your computer. In order to speed things up, I set the simulation to start at a much sloppier resolution (10-degree increments, for instance) and then ran through all possible configurations, testing the distances at each step. Cycling through all points, if a match is found at the current resolution, then the values for the motor angles are stored for that coordinate point (0 to 3599). Each motor angle is constrained based upon physical limitations and stylistic reasons (I set it such that the arm always makes a concave shape with respect to the canvas). If no matches are found at the given angular resolution, we increase our resolution (decrease the angular steps) and cycle through the points again, but only testing shapes for points that haven't already been matched at the lower resolutions.
Even at a modest resolution and an acceptable distance-to-target of 1.4 millimeters, 3600 points proved to take too long, so I settled on a new resolution of 24 by 24 points centered within our earlier canvas grid, for a total of 576 desired arm shapes. After letting the simulation run for an hour and a half, it completes and outputs a .txt file that contains floating point values for the angles, all formatted as an Arduino-style two-dimensional array. This .txt can then be reformatted as a header file to be included in the Galileo's sketch.
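The coarse-to-fine search strategy can be boiled down to a short Python sketch. This is a simplification of what my Processing simulator does: the forward-kinematics function, joint ranges, and step schedule here are stand-ins, and the real code also applies per-joint limits and the concave-shape constraint:

```python
import math

# Coarse-to-fine search over arm shapes: try a sloppy angular step
# first, and only re-search unmatched targets at finer steps.
# `fk` is any function mapping three joint angles to a 2D brush point.

def find_shapes(targets, fk, threshold=1.4, steps=(10.0, 5.0, 1.0)):
    """Return {target_index: (a, b, c)} of joint angles whose brush tip
    lands within `threshold` mm of each target point."""
    found = {}
    for step in steps:
        angles = [x * step for x in range(int(180 / step) + 1)]
        for i, (tx, ty) in enumerate(targets):
            if i in found:              # already matched at a coarser step
                continue
            for a in angles:
                for b in angles:
                    for c in angles:
                        px, py = fk(a, b, c)
                        if math.hypot(px - tx, py - ty) <= threshold:
                            found[i] = (a, b, c)
                            break
                    if i in found:
                        break
                if i in found:
                    break
    return found
```

The payoff is that the expensive fine-grained sweep only ever runs for the stubborn targets the coarse sweep couldn't reach, which is what makes a brute-force FK search tolerable at all.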
Step 12: Basic Image Processing
Image input can either be manually hard-coded by the user or taken from the camera data. Although the uCam is capable of taking RAW color photos, that amount of data is unnecessary for generating the binary array the Galileo uses to determine which "pixels" to paint. I configured the camera to take a grayscale image at an 80 by 60 pixel resolution. The data from the camera comes in single bytes, which makes the image much easier to process (4800 bytes in all).
Here's the program flow from taking a picture:
load image data into a simple array
cycle through the new array
if the value is above a certain threshold, convert it to a maximum value (contrast all the way up!), else reduce it to 0
crop the image to 60 by 60 pixels
load the cropped image into a two dimensional array
take the average value of each 2x2 pixel block of the array
if the block is at least half dark, then set the corresponding pixel of a new smaller array (30x30) to be dark as well
crop the new small array to be 24 by 24 pixels
The image has now been reduced down enough to be used as input for the motion planner.
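The whole pipeline above can be condensed into one Python sketch (my actual implementation lives in the Galileo sketch; the threshold value here is an assumption):

```python
# Condensed sketch of the image pipeline: threshold an 80x60
# grayscale frame, crop to 60x60, average 2x2 blocks down to 30x30,
# then crop to the central 24x24 grid used by the motion planner.

def process(pixels, w=80, h=60, threshold=128):
    """pixels: flat list of w*h grayscale bytes. Returns a 24x24 binary grid
    where 1 marks a pixel the robot should paint."""
    # contrast all the way up: above threshold -> light (0), else dark (1)
    binary = [0 if p > threshold else 1 for p in pixels]
    # crop horizontally to a centered 60x60 square
    left = (w - h) // 2
    square = [[binary[y * w + left + x] for x in range(h)] for y in range(h)]
    # 2x2 block average: dark if at least half the block is dark
    small = [[1 if (square[2*y][2*x] + square[2*y][2*x+1] +
                    square[2*y+1][2*x] + square[2*y+1][2*x+1]) >= 2 else 0
              for x in range(30)] for y in range(30)]
    # crop to the central 24x24
    off = (30 - 24) // 2
    return [row[off:off+24] for row in small[off:off+24]]
```

Each stage throws away information on purpose: 4800 bytes of grayscale becomes 576 yes/no painting decisions, which is all the motion planner needs.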
Step 13: Software: Motion Planning
With our table of angular values and our pixel data from the camera, it's now time to paint! But how should the arm move? Even though the servos are relatively fast, simply setting them to the next desired shape would result in too abrupt a motion. Resolving this requires interpolation between the two target points at a set speed. I organized the motion sequence into a series of shapes and actions stored in an array that I refer to as the motion queue. Here's how the queue is set up:
after the photo has been processed, load a sequence of shapes (set of 4 angles) to move the arm into an upright position
load the path to reach the paint palette
load the paths to return to the upright position
cycle through the image array
check to see if a pixel is dark or light
if a pixel is dark then load the matching shape (the output array from the Processing sketch) into the queue
load a shape equal to those angles, but tilt the elbow angle back a bit (so that the arm won't drag between pixels when interpolating)
if thirty shapes have been loaded into the queue (15 presses of the brush to canvas and 15 times leaning back)
load the path sequence for retrieving more paint and returning to a very leaned-back posture (interpolating from the upright position also drags through points)
once the cycle is complete, load the path sequence for "going to sleep" in the water cup
Once our queue is full, we'll need to execute it. This step consists of running through our list until we reach an empty value and then going back to checking for button presses. Each step through the queue interpolates between shapes, so that the arm has more fluid motion.
In order to smooth the motion between shapes, we'll interpolate between the two angles in one-degree steps, but with a variable time per joint, so that the joints all arrive at the desired shape at the same time. To do this, we first calculate the change between the two angles for each joint (the deltaTheta per joint). After finding all of the deltaTheta values, we sort them to find the largest. Then we multiply the largest deltaTheta by the minimum amount of time the motor needs to travel one degree (this is the fastest step we'll allow). This sets the total time such that the motor that has to move the most never moves faster than its minimum time per degree, while the motors with smaller deltaTheta values update by one degree much more slowly, since their total travel distance is smaller. Calculating the time based on the greatest angular change also means the arm never moves slower than it needs to, as it would with a fixed time, since the change between shapes might be quite small. The full implementation of the interpolation function is heavily commented in my Galileo sketch.
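The timing part of that scheme fits in a few lines; here's a Python sketch of it (the minimum time-per-degree below is a placeholder, since the real figure comes from the servo's spec and testing):

```python
# Sketch of the time-scaled interpolation described above: the joint
# with the largest angular change sets the total travel time, and
# every other joint stretches its one-degree steps to match.

MIN_MS_PER_DEG = 3.0  # hypothetical fastest one-degree step for the servo

def step_times(current, target):
    """For each joint, the delay (ms) between successive 1-degree steps,
    chosen so all joints arrive at the target shape simultaneously.
    Joints that don't move get a delay of 0."""
    deltas = [abs(t - c) for c, t in zip(current, target)]
    total_ms = max(deltas) * MIN_MS_PER_DEG   # pace set by the biggest move
    return [total_ms / d if d else 0.0 for d in deltas]
```

So if the elbow has to sweep 90 degrees while the wrist only moves 30, the wrist's per-degree delay comes out three times longer, and both joints finish together.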
Step 14: Using the Robot
We're finally ready to paint!
Operating the robot is quite simple. Turn on the "Logic" and "Servo" power switches, wait for the "System Ready" light to turn on, and then press the "Action" button at your leisure. An LED near the camera will blink, alerting you that the system is about to take a picture; it then remains solid while the picture is being taken. The "Processing" LED will then illuminate while the machine processes the photo and loads the motion paths into the queue. The "Painting" LED will then illuminate while the robot executes its motion plan. Once the path is complete, the arm will return to the water cup and wait for the next button press.
Step 15: Going Further
This has been an amazing project to build, and I can't wait to build more like it. I've gained a great appreciation for the immense amount of work that goes into making a machine perform what are (to us fleshy humanoids) trivial motions. I hope you've gained some insight into the process of designing a robot arm, and you're welcome to use what I've shared however you like! Share in the comments if you enjoyed this and if you'd like to see more in-depth explanations of any of the concepts I covered. Thanks for reading!