Introduction: Simple Arduino Feedforward Backpropagation Neural Network Motion Tracker

About: I am an American teaching English at Shangluo University, Shaanxi. I like making machines that do interesting but fairly useless things - I call them Quixotic Machines.

This project is a simple application of a neural net: a motion tracker built with just two GP2Y0A21YK0F 10-80cm analog IR sensors (they must be the analog version, not digital) and an Arduino Uno. I have used Sean Hodgins's neural net code, and you can find more specifics on how it works in his instructable here: https://www.instructables.com/id/Arduino-Neural-N...

Sean, in turn, adapted the original code from here: http://robotics.hobbizine.com/arduinoann.html

If your understanding of neural nets is shaky, here is a good book that explains how the algorithm in this project works: https://www.amazon.com/Make-Your-Own-Neural-Networ... though you don't need to understand it in detail to use it.

The parts count is very low and it is a very quick build.

Arduino Uno

(2) GP2Y0A21YK0F 10-80cm analog IR sensors

(1) micro servo

Servo shield (optional) - this makes connections to Arduino super simple and pain free

This is a great little application of a neural net, and you can add more input sensors for a wider detection field. These sensors are also available with longer ranges - up to several meters. They cost more than a typical PIR sensor because you need the analog versions.

Step 1: Build Hardware

Ok, my hardware is a bit of a joke in that I just kludged together junk I had lying around.

1. Basically you need a base board of some kind.

2. A pedestal to mount the two sensors on, which in my case was a length of fiberglass fishing pole hot-glued to a couple of scraps of wood and then to the base.

3. I used a couple of odd servo mounting brackets to mount the sensors onto the pedestal.

4. Another bracket holds the servo, with a long piece of fishing pole bolted to the servo arm and a ball with a poorly drawn eye on the end.

5. Connect each sensor's power to 5V and ground to GND, then run one sensor's signal wire to A0 and the other's to A1.

6. Plug the servo's signal wire into digital pin 7.

Step 2: Load Arduino Sketch

The following code is all you need to run the neural net and operate the motion tracker. No special libraries required.

You can use the parameters as they are, or play with them to see whether they change the tracker's behavior.

The sketch has a #define DEBUG line; leave it in if you want the neural net output values printed to the serial monitor, or comment it out to silence them.

When you first set up the hardware, you may want to uncomment the testIRSensors() call in loop() to be sure both sensors are working.

The code is easily modified to add more sensors.
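As a sketch of what "adding more sensors" means in practice, here is a hypothetical three-sensor configuration block. The node counts, training patterns, and targets are the main things that change (the values below are illustrative, not tested); drive_nn() would also need a third analogRead() and a third TestInput slot.

```cpp
// Hypothetical three-sensor configuration (illustrative, untested).
// One input node per IR sensor; one training pattern per detection zone.
const int PatternCount = 3;
const int InputNodes   = 3;   // was 2 - the new sensor would go on A2
const int HiddenNodes  = 4;   // a little extra capacity, optional
const int OutputNodes  = 1;

float Input[PatternCount][InputNodes] = {
  { 0, 0, 1 },   // movement on the right
  { 0, 1, 0 },   // movement in the center
  { 1, 0, 0 }    // movement on the left
};

const float Target[PatternCount][OutputNodes] = {
  { 0.0 },       // servo at one end of its travel
  { 0.5 },       // servo centered
  { 1.0 }        // servo at the other end
};
```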

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// neural net program using sigmoid function and applied to simple movement tracker

// created by Jim Demello, Shangluo University, May 2018

// adapted Sean Hodgins neural net code: https://www.instructables.com/id/Arduino-Neural-Network-Robot

// tracker is composed of two GP2Y0A21YK0F 10-80cm IR analogue sensors

// NOTE: sensors must be analogue to work, not digital

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

#include "Servo.h"

#include "math.h"

#define DEBUG

Servo servo1;

/******************************************************************

Network Configuration - customized per network

******************************************************************/

const int PatternCount = 2;

const int InputNodes = 2;

const int HiddenNodes = 3;

const int OutputNodes = 1;

const float LearningRate = 0.3;

const float Momentum = 0.9;

const float InitialWeightMax = 0.5;

const float Success = 0.0015;

float Input[PatternCount][InputNodes] = {

{ 0,1 }, // right

{ 1,0 } // left

// { .5, .5 } // center

};

const float Target[PatternCount][OutputNodes] = {

{ 0 }, // movement on right

{ 1 } // movement on left

// { .5 } // center

};

/******************************************************************

End Network Configuration

******************************************************************/

int i, j, p, q, r;

int ReportEvery1000;

int RandomizedIndex[PatternCount];

long TrainingCycle;

float Rando;

float Error = 2;

float Accum;

float Hidden[HiddenNodes];

float Output[OutputNodes];

float HiddenWeights[InputNodes + 1][HiddenNodes];

float OutputWeights[HiddenNodes + 1][OutputNodes];

float HiddenDelta[HiddenNodes];

float OutputDelta[OutputNodes];

float ChangeHiddenWeights[InputNodes + 1][HiddenNodes];

float ChangeOutputWeights[HiddenNodes + 1][OutputNodes];

void setup() {

Serial.begin(115200);

Serial.println("Starting program");

pinMode(A0, INPUT);

pinMode(A1, INPUT);

servo1.attach(7); // attaches the servo on pin 7 to the servo object

servo1.write(90);

randomSeed(analogRead(A1)); //Collect a random ADC sample for Randomization.

ReportEvery1000 = 1;

for ( p = 0 ; p < PatternCount ; p++ ) {

RandomizedIndex[p] = p ;

}

Serial.println("do train_nn"); // do training neural net - only takes seconds

train_nn();

delay(1000);

}

void loop() {

//testIRSensors(); // only run this to test sensors without neural net

drive_nn();

}

// just test ir sensors without neural net

void testIRSensors() {

int sensor1Value = analogRead(A0);

int val1=(6762/(sensor1Value-9))-4;

int sensor2Value = analogRead(A1);

int val2=(6762/(sensor2Value-9))-4;

if (val1 > 0) {Serial.print("sensor1=");Serial.println(val1);}

if (val2 > 0) {Serial.print("sensor2=");Serial.println(val2);}

}

//USES TRAINED NEURAL NETWORK TO DRIVE ROBOT

void drive_nn()

{

Serial.println("Running NN Drive Test");

while (1) {

float TestInput[] = {0, 0};

int sensor1 = analogRead(A0); // Collect IR input.

int sensor2 = analogRead(A1); // Collect IR input.

int val1=(6762/(sensor1-9))-4;

int val2=(6762/(sensor2-9))-4;

sensor1 = constrain(val1, 0, 100); // clamp distance to the 0-100 cm range

sensor2 = constrain(val2, 0, 100);

TestInput[0] = float(sensor1) / 100; // make value between 0 and 1 for the neural net

TestInput[1] = float(sensor2) / 100;

//Serial.print("testinput0=");Serial.print(TestInput[0]);Serial.print("testinput1=");Serial.println(TestInput[1]);

InputToOutput(TestInput[0], TestInput[1]); //INPUT to ANN to obtain OUTPUT

int nnOutput1 = Output[0] * 100;

nnOutput1 = map(nnOutput1, 0, 100, 30, 140);

// Serial.print("nnOutput1=");Serial.println(nnOutput1); // uncomment to see the servo angle

servo1.write(nnOutput1);

delay(100);

}

}

//DISPLAYS INFORMATION WHILE TRAINING (the print statements below are commented out but kept from the original code)

void toTerminal()

{

for ( p = 0 ; p < PatternCount ; p++ ) {

// SerialUSB.println();

// SerialUSB.print (" Training Pattern: ");

// SerialUSB.println (p);

// SerialUSB.print (" Input ");

for ( i = 0 ; i < InputNodes ; i++ ) {

// SerialUSB.print (Input[p][i], DEC);

// SerialUSB.print (" ");

}

// SerialUSB.print (" Target ");

for ( i = 0 ; i < OutputNodes ; i++ ) {

// SerialUSB.print (Target[p][i], DEC);

// SerialUSB.print (" ");

}

/******************************************************************

Compute hidden layer activations

******************************************************************/

for ( i = 0 ; i < HiddenNodes ; i++ ) {

Accum = HiddenWeights[InputNodes][i] ;

for ( j = 0 ; j < InputNodes ; j++ ) {

Accum += Input[p][j] * HiddenWeights[j][i] ;

}

Hidden[i] = 1.0 / (1.0 + exp(-Accum)) ;

}

/******************************************************************

Compute output layer activations and calculate errors

******************************************************************/

for ( i = 0 ; i < OutputNodes ; i++ ) {

Accum = OutputWeights[HiddenNodes][i] ;

for ( j = 0 ; j < HiddenNodes ; j++ ) {

Accum += Hidden[j] * OutputWeights[j][i] ;

}

Output[i] = 1.0 / (1.0 + exp(-Accum)) ;

}

//SerialUSB.print (" Output ");

//for ( i = 0 ; i < OutputNodes ; i++ ) {

// SerialUSB.print (Output[i], 5);

// SerialUSB.print (" ");

// }

}

}

void InputToOutput(float In1, float In2)

{

float TestInput[] = {0, 0};

TestInput[0] = In1;

TestInput[1] = In2;

/******************************************************************

Compute hidden layer activations

******************************************************************/

for ( i = 0 ; i < HiddenNodes ; i++ ) {

Accum = HiddenWeights[InputNodes][i] ;

for ( j = 0 ; j < InputNodes ; j++ ) {

Accum += TestInput[j] * HiddenWeights[j][i] ;

}

Hidden[i] = 1.0 / (1.0 + exp(-Accum)) ;

}

/******************************************************************

Compute output layer activations and calculate errors

******************************************************************/

for ( i = 0 ; i < OutputNodes ; i++ ) {

Accum = OutputWeights[HiddenNodes][i] ;

for ( j = 0 ; j < HiddenNodes ; j++ ) {

Accum += Hidden[j] * OutputWeights[j][i] ;

}

Output[i] = 1.0 / (1.0 + exp(-Accum)) ;

}

#ifdef DEBUG

Serial.print (" Output ");

for ( i = 0 ; i < OutputNodes ; i++ ) {

Serial.print (Output[i], 5);

Serial.println (" ");

}

#endif

}

//TRAINS THE NEURAL NETWORK

void train_nn() {

/******************************************************************

Initialize HiddenWeights and ChangeHiddenWeights

******************************************************************/

// prog_start = 0;

Serial.println("start training...");

for ( i = 0 ; i < HiddenNodes ; i++ ) {

for ( j = 0 ; j <= InputNodes ; j++ ) {

ChangeHiddenWeights[j][i] = 0.0 ;

Rando = float(random(100)) / 100;

HiddenWeights[j][i] = 2.0 * ( Rando - 0.5 ) * InitialWeightMax ;

}

}

/******************************************************************

Initialize OutputWeights and ChangeOutputWeights

******************************************************************/

for ( i = 0 ; i < OutputNodes ; i ++ ) {

for ( j = 0 ; j <= HiddenNodes ; j++ ) {

ChangeOutputWeights[j][i] = 0.0 ;

Rando = float(random(100)) / 100;

OutputWeights[j][i] = 2.0 * ( Rando - 0.5 ) * InitialWeightMax ;

}

}

//SerialUSB.println("Initial/Untrained Outputs: ");

//toTerminal();

/******************************************************************

Begin training

******************************************************************/

for ( TrainingCycle = 1 ; TrainingCycle < 2147483647 ; TrainingCycle++) {

/******************************************************************

Randomize order of training patterns

******************************************************************/

for ( p = 0 ; p < PatternCount ; p++) {

q = random(PatternCount);

r = RandomizedIndex[p] ;

RandomizedIndex[p] = RandomizedIndex[q] ;

RandomizedIndex[q] = r ;

}

Error = 0.0 ;

/******************************************************************

Cycle through each training pattern in the randomized order

******************************************************************/

for ( q = 0 ; q < PatternCount ; q++ ) {

p = RandomizedIndex[q];

/******************************************************************

Compute hidden layer activations

******************************************************************/

for ( i = 0 ; i < HiddenNodes ; i++ ) {

Accum = HiddenWeights[InputNodes][i] ;

for ( j = 0 ; j < InputNodes ; j++ ) {

Accum += Input[p][j] * HiddenWeights[j][i] ;

}

Hidden[i] = 1.0 / (1.0 + exp(-Accum)) ;

}

/******************************************************************

Compute output layer activations and calculate errors

******************************************************************/

for ( i = 0 ; i < OutputNodes ; i++ ) {

Accum = OutputWeights[HiddenNodes][i] ;

for ( j = 0 ; j < HiddenNodes ; j++ ) {

Accum += Hidden[j] * OutputWeights[j][i] ;

}

Output[i] = 1.0 / (1.0 + exp(-Accum)) ;

OutputDelta[i] = (Target[p][i] - Output[i]) * Output[i] * (1.0 - Output[i]) ;

Error += 0.5 * (Target[p][i] - Output[i]) * (Target[p][i] - Output[i]) ;

}

/******************************************************************

Backpropagate errors to hidden layer

******************************************************************/

for ( i = 0 ; i < HiddenNodes ; i++ ) {

Accum = 0.0 ;

for ( j = 0 ; j < OutputNodes ; j++ ) {

Accum += OutputWeights[i][j] * OutputDelta[j] ;

}

HiddenDelta[i] = Accum * Hidden[i] * (1.0 - Hidden[i]) ;

}

/******************************************************************

Update Inner-->Hidden Weights

******************************************************************/

for ( i = 0 ; i < HiddenNodes ; i++ ) {

ChangeHiddenWeights[InputNodes][i] = LearningRate * HiddenDelta[i] + Momentum * ChangeHiddenWeights[InputNodes][i] ;

HiddenWeights[InputNodes][i] += ChangeHiddenWeights[InputNodes][i] ;

for ( j = 0 ; j < InputNodes ; j++ ) {

ChangeHiddenWeights[j][i] = LearningRate * Input[p][j] * HiddenDelta[i] + Momentum * ChangeHiddenWeights[j][i];

HiddenWeights[j][i] += ChangeHiddenWeights[j][i] ;

}

}

/******************************************************************

Update Hidden-->Output Weights

******************************************************************/

for ( i = 0 ; i < OutputNodes ; i ++ ) {

ChangeOutputWeights[HiddenNodes][i] = LearningRate * OutputDelta[i] + Momentum * ChangeOutputWeights[HiddenNodes][i] ;

OutputWeights[HiddenNodes][i] += ChangeOutputWeights[HiddenNodes][i] ;

for ( j = 0 ; j < HiddenNodes ; j++ ) {

ChangeOutputWeights[j][i] = LearningRate * Hidden[j] * OutputDelta[i] + Momentum * ChangeOutputWeights[j][i] ;

OutputWeights[j][i] += ChangeOutputWeights[j][i] ;

}

}

}

/******************************************************************

Every 100 cycles, send training progress to the serial monitor

******************************************************************/

ReportEvery1000 = ReportEvery1000 - 1;

if (ReportEvery1000 == 0)

{

Serial.print ("TrainingCycle: ");

Serial.print (TrainingCycle);

Serial.print (" Error = ");

Serial.println (Error, 5);

toTerminal();

if (TrainingCycle == 1)

{

ReportEvery1000 = 99;

}

else

{

ReportEvery1000 = 100;

}

}

/******************************************************************

If error rate is less than pre-determined threshold then end

******************************************************************/

if ( Error < Success ) break ;

}

}