This Instructable will teach you how to create a system to automate material testing. All the code needed is provided, so no previous coding knowledge is required, though it will help in understanding how it works.

It was initially built for a project that looked at improving rigidity in 3D printed thermoplastics by informing the material allocation process with a structural analysis (FEA). This setup enabled me to test 80 samples, making for a total of 2,400 measurements in about 6 hours.

This tutorial can be broken down into four sections:

  • Making the rig: For this step, you'll most likely want to get yourself in a workshop as you'll need a welder, a drill and a 3D printer.
  • Setting up a camera: From here on, all you'll need is a computer and a webcam. This section will help you in connecting a camera and tracking a single colour.
  • CSV read/write: Here we'll cover the basics of reading and writing data to a CSV file.
  • Assembly: Finally, we'll put everything together in a working system.

At the end of this Instructables, you will have built a system that is able to automate deflection measurement by tracking a marker on your sample and saving it to a CSV file for analysis in Excel.

The shape of the sample used in this project was an arbitrary choice loosely resembling a standing man. It is, however, symmetrical, so if we see a large discrepancy then we know there's an anomaly in the sample. The jig and end effector were designed based on this shape, matching the analysis environment as closely as possible: the force was applied at two points on the top, and the bottom surfaces were fixed in all directions. You may wish to use the same, or you can create your own by customising the jig and end effector to match your requirements.

I tested the repeatability of the system, which is illustrated in the graph above. Three different samples were tested three times, and the results show good repeatability after the activation period (<1,000gr). Using a newer webcam would improve this further. Furthermore, if you're interested in perfecting the marker detection, may I suggest you look at blob detection: it lets you set a minimum and maximum target size, so we don't average similar pixels from across the entire screen.

Step 1: Building the Rig

Material List:

  1. 3D printed parts (9 parts)
  2. 16x16x240mm steel section, 1.2 mm wall thickness (2 parts)
  3. 50x10x240mm steel section, 1.2 mm wall thickness
  4. 40x5x140mm solid steel section
  5. M3 Bolt & Nut (6 parts)
  6. 8mm Linear Bearing
  7. 8mm Stainless steel rod
  8. Set of weights; in this case, we used 6 pieces of a solid steel section (50mm x 10mm) cut into pieces weighing 455gr each

The rig consists of a basic "U" shape frame made from three steel parts welded together (2 & 3). When welding these it is important to keep the vertical sections (2) parallel to one another so the horizontal guide can move freely.

The guide, which is made from a solid steel section (4), has to be drilled at the centre to allow the linear bearing to fit through. Depending on the type of bearing, an additional 4 holes must be made to attach it to the steel part using the M3 screws and nuts.

The guide is then attached to the main frame using the 3D printed connectors. A nut must be placed in each of these before inserting the M3 screw (5), as illustrated in the general assembly diagram. It is not necessary to glue the horizontal section to the connectors; we want to allow it to move in case the two vertical sections aren't perfectly aligned.

Once the guide is fixed toward the top, place the rod through the bearing and into the end effector. Make sure there are no sharp edges on the rod as they can easily damage the bearing. Then press the weight holder onto the top of the rod. Both parts can be held onto the rod using a few layers of masking tape as we may want to replace them later.

The final step is to insert the 3D printed caps into the ends of the tubes; these can be glued in using some 2-part epoxy glue.

Ignore the Arduino in the picture above; I used it to gain visual feedback of the pressure using a Flexiforce sensor, but it's not necessary and we'll bypass it for this tutorial. In case you wish to do so, I've left a slit in the end effector to insert a sensor at the contact point of the rod.

Step 2: Setting Up a Camera

If you haven't got Processing installed, then that's the first thing you'll need to do; it can be downloaded here.

For anyone unfamiliar with Processing, it should be said that a program consists of two main functions: setup() and draw(). setup() is executed once at the start of the program; it is in here that we can initialise all our variables. draw() is then executed repeatedly, up to 30 times per second depending on the size of the program.

Once Processing is installed, plug a camera into your computer via USB, start a new sketch and copy the code below. When you run it for the first time, you'll see a list of capture devices connected to your computer printed in the console. Providing your webcam is correctly connected, find the number to the left of it and enter it in place of the "0" in the following code.

import processing.video.*;

Capture c;

void setup() {
  size(960, 480, JAVA2D);
  String[] cams = Capture.list();
  printArray(cams); // Prints the list of connected cameras to the console
  c = new Capture(this, cams[0]); // Replace 0 with the index of your webcam printed in the console
  c.start();
  while (c.width == 0) delay(10); // Wait for the first frame to arrive
}

void draw() {
  PImage img = c.get();
  img.resize(0, height);
  image(img, 0, 0);
}

void captureEvent(Capture capture) {
  capture.read(); // Read each new frame as it arrives
}

Now that we have the live view, we can access the value of each pixel of every frame, stored in the "cam" variable. The variable type "color" stores the RGB values packed into a single integer, and we can retrieve each channel using bit-shifting, which is very fast, as shown below.

color col = cam.pixels[i]; // Retrieve the pixel colour at index i
int curR = (col >> 16) & 0xff;  // Get red value
int curG = (col >> 8) & 0xff; // Get green value
int curB = col & 0xff; // Get blue value
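If you'd like a quick sanity check of how those bit-shifts work, the same logic runs in plain Java outside Processing. The class and the pack() helper below are just names for this illustration; Processing stores a colour as a single 32-bit ARGB integer, with 8 bits per channel:

```java
public class ColorBits {
    // Pack r, g, b (each 0-255) into a single ARGB int, the way Processing stores a color.
    // The top 8 bits hold the alpha channel, set to fully opaque (0xff) here.
    static int pack(int r, int g, int b) {
        return (0xff << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int col = pack(200, 120, 30);
        int curR = (col >> 16) & 0xff; // shift red down, mask off the rest
        int curG = (col >> 8) & 0xff;  // shift green down, mask off the rest
        int curB = col & 0xff;         // blue is already in the lowest byte
        System.out.println(curR + "," + curG + "," + curB); // prints 200,120,30
    }
}
```

The `& 0xff` mask matters: without it, the sign-extending shift would leave the alpha bits in the result.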

The final step is to define a tracking colour, find the matching pixels within a given threshold and average their position. This is done in the following piece of code, which is ready to execute. Once you've clicked on the object you want to track, it will save that colour and look for it in our image. To achieve the best results, you should choose a colour which is vivid and distinct from the rest of the image; providing these two conditions are met, it works well. I suggest you use coloured dot stickers placed on your samples.

The function findMarker() does most of the work and returns a PVector, which is a variable type used to store XYZ coordinates. In this example, we only need x and y. This vector is used in draw() to display an ellipse at the centre of our tracked object.

// Charles Fried - Twitter: @pencil_stroke
// Color tracking
// Instructables

import processing.video.*; 
Capture cam; 

int threshold = 40;
color target = color(255, 255, 255);

int selected = 0;

void setup() { 
  size(720, 450);
  cam = new Capture(this);
  cam.start();
}

void draw() { 
  if (cam.available()) { 
    cam.read(); // Reads the new frame
  }
  image(cam, 0, 0, width, height); 

  if (target != color(255, 255, 255)) {
    PVector marker = findMarker();
    ellipse(marker.x, marker.y, 10, 10);
  }
}

void mousePressed() {
  target = get(mouseX, mouseY);
}

PVector findMarker() {
  // We use these variables to keep track of the matched pixels
  int countX = 0;
  int countY = 0;
  int pixCount = 1; // start at 1 to avoid dividing by zero when nothing matches
  // Split the target colour into r, g, b
  int targetR = (target >> 16) & 0xff;
  int targetG = (target >> 8) & 0xff;
  int targetB = target & 0xff;

  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {

      color current = get(x, y); // Get the colour of the pixel
      // Split the pixel colour into r, g, b
      int curR = (current >> 16) & 0xff;
      int curG = (current >> 8) & 0xff;
      int curB = current & 0xff;

      // If the pixel is more or less like the target then add its coordinates
      if (curR <= targetR+threshold && curR >= targetR-threshold 
        && curG <= targetG+threshold && curG >= targetG-threshold 
        && curB <= targetB+threshold && curB >= targetB-threshold) {
        countX += x;
        countY += y;
        pixCount++;
      }
    }
  }

  // Divide by the total number of matched pixels and return the average position
  return new PVector(countX / pixCount, countY / pixCount);
}
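The threshold-and-average idea is easy to test in isolation. Here is a plain-Java sketch of the same logic on a tiny synthetic frame; the class name, frame size and pixel values are made up purely for this illustration:

```java
public class Centroid {
    // Average the coordinates of all pixels within `threshold` of `target` on every channel
    static int[] findMarker(int[][] frame, int target, int threshold) {
        int countX = 0, countY = 0, pixCount = 0;
        int tr = (target >> 16) & 0xff, tg = (target >> 8) & 0xff, tb = target & 0xff;
        for (int x = 0; x < frame.length; x++) {
            for (int y = 0; y < frame[0].length; y++) {
                int c = frame[x][y];
                int r = (c >> 16) & 0xff, g = (c >> 8) & 0xff, b = c & 0xff;
                if (Math.abs(r - tr) <= threshold && Math.abs(g - tg) <= threshold
                        && Math.abs(b - tb) <= threshold) {
                    countX += x;
                    countY += y;
                    pixCount++;
                }
            }
        }
        if (pixCount == 0) return new int[]{-1, -1}; // no matching pixels found
        return new int[]{countX / pixCount, countY / pixCount};
    }

    public static void main(String[] args) {
        int red = 0xffff0000, white = 0xffffffff;
        int[][] frame = new int[5][5];
        for (int[] col : frame) java.util.Arrays.fill(col, white);
        frame[2][2] = red;
        frame[2][3] = red; // two red pixels in a white frame
        int[] pos = findMarker(frame, red, 40);
        System.out.println(pos[0] + "," + pos[1]); // prints 2,2
    }
}
```

Because white differs from red by 255 on the green and blue channels, it falls well outside the threshold of 40 and only the two marker pixels are averaged.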

Step 3: CSV Read/write

Now let's put our colour tracking sketch aside and look at how we can create a CSV file and write to it. We'll cover all the functions that are used in the final sketch.

The first thing we need to do is create a variable; thankfully, Processing makes it easy for us by providing a variable type called "Table", which contains a number of functions to handle CSV files. It is declared under the arbitrary name "results" as illustrated below. All the functions associated with Table can be found here.

Table results;

Then, we can initialise it in setup(); if the file name isn't found in the sketch's data folder then a new one will be created.

results = loadTable("results.csv");

To keep track of how many columns and rows the CSV contains we can use the two functions shown below which simply return an integer.

results.getRowCount() // returns total number of rows

results.getColumnCount() // returns total number of columns

Similarly, we can add a column or row using the add functions:

results.addRow() // appends an empty row

results.addColumn() // appends an empty column

Finally, to write to the file we can use the set functions. The first argument is the row number, the second is the column number and the third is the data we want to register. Bear in mind that in code all indexing starts at "0", so if you want to access the second column then you'll want to enter "1".

results.setFloat(row, column, value); // Used to register a decimal value

results.setString(row, column, value); // Used to register characters
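Processing's Table class does the bookkeeping for you, but the file it produces is just comma-separated text. If it helps to see what those calls amount to, here is a minimal sketch in plain Java; the file name, column headers and values are only examples for this illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class CsvDemo {
    public static void main(String[] args) throws IOException {
        // Build the rows, much like adding rows to a Table and setting each cell
        List<String[]> rows = new ArrayList<>();
        rows.add(new String[]{"sample", "weight_gr", "deflection"}); // header row
        rows.add(new String[]{"A1", "455", "12.5"});
        rows.add(new String[]{"A1", "910", "23.1"});

        // Write each row as one comma-separated line (the equivalent of saving the Table)
        StringBuilder sb = new StringBuilder();
        for (String[] row : rows) sb.append(String.join(",", row)).append("\n");
        Path file = Paths.get("results.csv");
        Files.write(file, sb.toString().getBytes());

        // Read it back; indexing starts at 0, so column "1" is the second column
        List<String> lines = Files.readAllLines(file);
        String[] secondRow = lines.get(1).split(",");
        System.out.println(secondRow[1]); // prints 455
    }
}
```

This split/join approach is fine for simple numeric data like ours; fields containing commas or quotes would need proper escaping, which is exactly what Table handles for you.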

Step 4: Assembly

To put everything together, I want to introduce the unfamiliar reader to the switch statement, which will also help in illustrating the structure of the program.

As demonstrated below, the code is segmented into five steps, from "A" to "E". The current step is saved in the "programState" variable, which is changed to the next letter when the condition for that step is met. When "programState" matches a case label in the switch, the code within it gets executed.

char programState = 'A';

void setup() {
  // Initialise variables, camera and CSV here
}

void draw() {
  switch (programState) {
  case 'A':
    // Allow the user to enter the name of the sample
    break;
  case 'B': 
    // Allow the user to enter the weight of the sample
    break;
  case 'C': 
    // Allow the user to select the left marker
    break;
  case 'D':     
    // Allow the user to select the right marker
    break;
  case 'E':     
    // This is where the testing happens
    break;
  }
}

In the attached pictures you can see the layout of the CSV files. You can find this template file in the data folder of the attached sketch; all you'll need to do is replace the accumulated weights of the blocks, which are used to apply force to your sample. In this case, we had six blocks, each weighing 455gr. If you decide you don't need these then clear the sheet and save it; the program will still work as intended.

To get started, place the webcam fairly close to the rig whilst making sure your sample is in focus. Preferably, everything should be fixed to the table, as any movement would void all previous samples. Run the program, enter the information required and press ENTER to go to the next step. The program will keep looping through the steps so you can keep changing the sample. In the testing window, there will be two buttons: WRITE and RESET. WRITE saves the current maximum deflection to the CSV. RESET changes the maximum deflection to the current deflection; this can be useful if you make a mistake whilst placing the weight. You can also save a picture at any point by pressing the space bar.

If you have any questions, please feel free to leave a comment, or you can find me on Twitter, where I am most active, under @pencil_stroke.

