Introduction: Quipt: Taming Industrial Robots

Industrial robots are truly incredible CNC machines: not just for their speed, power, and precision, but for their adaptability. Unlike other CNC machines, when you put a tool on the end of a robot, you completely transform what it can do: put a sprayer on it, and it becomes a painting robot; put a gripper on it, and it becomes a material handling robot; put a welder on it, and it becomes a spot welding robot.

This adaptability has made the industrial robot a key piece of infrastructure for factory automation over the past 50 years. But despite their adaptability, industrial robots are fairly dumb machines: they have little-to-no awareness of the environment outside of their programmed tasks. This is one of the main reasons why industrial robots have thrived only in highly controlled environments, like factories. They need places where unpredictable objects (a.k.a. people) are strictly separated from their work zones.

But an industrial robot's adaptability is useful beyond the factory. Putting a film camera onto an industrial robot gives a director precise, complex, and repeatable camera moves. Putting a loader onto an industrial robot gives a construction worker a way to move heavier quantities of materials. Putting a light onto an industrial robot gives a photographer more precise control of a scene's ambiance. While these are somewhat mundane use cases, they tease out some of the biggest challenges for bringing industrial robots outside of the factory: because they are blind to the world, they are very dangerous to use; because they need highly technical skill to program, they are very difficult to use.

For Pier 9's Fall 2015 Artist in Residence program, I decided to tackle these two challenges and build a way for industrial robots to be safer and easier to use in uncontrolled settings. I created Quipt, a gesture-based control software that gives industrial robots spatial awareness and spatial behaviors for interacting closely with people.

[Video: P9 AiR Profile: Madeline Gannon, from Pier 9 on Vimeo]

Find out more about how Quipt was made in the next few steps...

See the full project page here.

Step 1: System Overview

Quipt is a gesture-based control software that facilitates new, more intuitive ways to communicate with industrial robots. Using wearable markers and a motion capture system, Quipt gives industrial robots basic spatial behaviors for interacting closely with people. Wearable markers on the hand, around the neck, or elsewhere on the body let a robot see and respond to you in a shared space. This lets you and the robot safely follow, mirror, and avoid one another as you collaborate.

Quipt augments an ABB IRB 6700 industrial robot by giving it eyes into its environment. Using a Vicon motion capture system, the software receives and reformats motion capture data into corresponding movement commands for the 6700. Movement commands are generated using our open-source library, Robo.Op (see it on GitHub). Quipt also visualizes debugging data in an Android app, so a human collaborator has a mobile, continuous view of what the robot is seeing.
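
To make that data flow concrete, here is a minimal sketch of the receive-and-reformat loop under a couple of assumptions: mocap frames arrive much faster than the robot can accept targets (Vicon systems typically stream at 100 Hz or more), so the loop drains the stream each cycle and forwards only the newest pose. The class, names, and rates here are illustrative stand-ins, not Robo.Op's actual API.

```java
import java.util.concurrent.*;

// Sketch of a receive-and-reformat loop: mocap frames arrive fast,
// the robot consumes targets slowly, so we throttle to the latest frame.
// Names and rates are illustrative; Robo.Op's actual API differs.
public class QuiptLoop {

    record Pose(double x, double y, double z) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Pose> mocapFrames = new LinkedBlockingQueue<>();

        // Stand-in for the Vicon stream: ~100 frames per second.
        ScheduledExecutorService mocap = Executors.newSingleThreadScheduledExecutor();
        mocap.scheduleAtFixedRate(
            () -> mocapFrames.offer(new Pose(Math.random(), Math.random(), 1.5)),
            0, 10, TimeUnit.MILLISECONDS);

        // Robot-side loop: ~10 commands per second. Drain the queue and keep
        // only the newest pose so stale frames never pile up as lag.
        while (true) {
            Thread.sleep(100);
            Pose latest = null, p;
            while ((p = mocapFrames.poll()) != null) latest = p;
            if (latest == null) continue; // no tracking this cycle
            // Here the real system would align frames and hand the target
            // to the robot library; we just print it as a placeholder.
            System.out.printf("move to (%.2f, %.2f, %.2f)%n",
                latest.x(), latest.y(), latest.z());
        }
    }
}
```

Keeping only the latest frame, rather than queueing every one, is what keeps the robot responsive: it always chases where the person is now, not where they were half a second ago.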

Step 2: Motion Capture & Spatial Behaviors

Quipt uses motion capture to track a person working with the robot. Passive markers made from retroreflective tape are worn on the body and are given a global position and orientation by the mocap system's tracking software. Quipt parses the streaming mocap data into a size and format the robot can handle. Aligning the world coordinates of the motion capture system with the world coordinates of the robot gives an accurate reference frame for letting the robot 'see' what the mocap system senses.
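
To illustrate what that alignment involves, here is a minimal sketch of one common approach (not necessarily Quipt's): measure once where the robot's base sits in the mocap world frame, build a rigid transform (a rotation plus a translation) from that calibration, and apply it to every incoming marker position. All numbers and names are illustrative.

```java
// Minimal sketch of mocap-to-robot frame alignment: a rigid transform
// (rotation about the vertical axis plus a translation) built from a
// one-time calibration. All values here are illustrative, not Quipt's.
public class FrameAlignment {

    // Calibration: where the robot's base origin sits in the mocap world
    // frame (mm), and how far the robot frame is rotated about the vertical
    // axis relative to the mocap frame (radians).
    static final double BASE_X = 1250.0, BASE_Y = -300.0, BASE_Z = 0.0;
    static final double YAW = Math.toRadians(90.0);

    /** Map a point measured by the mocap system into robot world coordinates. */
    static double[] mocapToRobot(double x, double y, double z) {
        // Translate so the robot base becomes the origin...
        double dx = x - BASE_X, dy = y - BASE_Y, dz = z - BASE_Z;
        // ...then undo the yaw offset between the two frames.
        double c = Math.cos(-YAW), s = Math.sin(-YAW);
        return new double[] { c * dx - s * dy, s * dx + c * dy, dz };
    }

    public static void main(String[] args) {
        // A wearable marker seen by the mocap system at (1250, 700, 1500) mm...
        double[] p = mocapToRobot(1250.0, 700.0, 1500.0);
        // ...lands at (1000, 0, 1500) in the robot's frame: 1 m in front of the base.
        System.out.printf("robot frame: (%.1f, %.1f, %.1f)%n", p[0], p[1], p[2]);
    }
}
```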

With motion capture data integrated and the two coordinate systems aligned, the industrial robot now has an awareness of where a person is in space. At this point, Quipt can tell the robot how the person is moving and how it should move in response. Quipt uses three primitive spatial behaviors to guide the robot's movements: follow, mirror, and avoid. These three movement modes are the basic components of how two people interact with one another while working together in a shared space. Giving these behaviors to the robot provides the human counterpart with an intuitive understanding of the robot's movements and where it is going next.
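
As an illustration of how such behaviors can reduce to simple geometry, here is a hedged sketch of follow, mirror, and avoid as target-point calculations in the robot's frame. The offsets, mirror plane, and safety radius are assumptions made up for the example, not Quipt's actual tuning.

```java
// Sketch of the three spatial behaviors as target-point geometry.
// Distances in mm; the offsets, mirror plane, and safety radius are
// illustrative assumptions, not Quipt's actual parameters.
public class SpatialBehaviors {

    static final double FOLLOW_OFFSET = 400.0;  // hover this far above the marker
    static final double MIRROR_PLANE_Y = 0.0;   // mirror across the XZ plane
    static final double SAFETY_RADIUS = 600.0;  // minimum distance in avoid mode

    /** Follow: track the person's marker from a fixed vertical offset. */
    static double[] follow(double[] person) {
        return new double[] { person[0], person[1], person[2] + FOLLOW_OFFSET };
    }

    /** Mirror: reflect the person's motion across a vertical plane. */
    static double[] mirror(double[] person) {
        return new double[] { person[0], 2 * MIRROR_PLANE_Y - person[1], person[2] };
    }

    /** Avoid: if the person is inside the safety radius, retreat along
     *  the line between them until the radius is restored. */
    static double[] avoid(double[] person, double[] tool) {
        double dx = tool[0] - person[0], dy = tool[1] - person[1], dz = tool[2] - person[2];
        double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
        if (dist >= SAFETY_RADIUS || dist == 0) return tool; // already clear
        double scale = SAFETY_RADIUS / dist;                 // push out to the boundary
        return new double[] { person[0] + dx * scale,
                              person[1] + dy * scale,
                              person[2] + dz * scale };
    }

    public static void main(String[] args) {
        double[] person = { 1000, 200, 1400 };  // marker position in robot frame
        double[] tool   = { 1200, 300, 1500 };  // current tool center point
        System.out.println(java.util.Arrays.toString(follow(person)));
        System.out.println(java.util.Arrays.toString(mirror(person)));
        System.out.println(java.util.Arrays.toString(avoid(person, tool)));
    }
}
```

One design note: in this sketch, avoid retreats along the straight line between the tool and the person, which makes the robot's escape direction predictable to the human collaborator, the same intuition the paragraph above describes.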

Step 3: Out of the Factory and Onto the Site

We are excited for a future where industrial robots continue to move out of industrial settings. These new settings bring design challenges that have yet to be explored in traditional automation. Automation that removes the human entirely from the equation is reaching a point of diminishing returns. The next step is to create ways for these machines to augment our abilities, not replace them. Reimagining the interfaces that connect us to an industrial robot not only changes how we use these machines, but also has the potential to transform what we do with robotic arms.