Introduction: Model (X55A) Humanoid Robot
// A little about myself: I am a doctoral researcher (scholarship) and assistant lecturer in design theory at the University of Huddersfield in England, studying the field of bio-mimetic robotics. I have a Masters (M.Res, scholarship) with distinction in Advanced Animatronic Systems and a first-class degree in Creative Programming (Multimedia Design, B.A. Hons). I have been studying animatronics, AI theory, animation theory and robotics at the university for the past several years, and I have been very fortunate to work alongside some of the industry's top animatronic and robotic engineers during my academic research.
(https://www.hud.ac.uk/schools/artdesignandarchitecture/research/researchstudents/)
I want to start by introducing you to the concept of animatronics / bio-mimetic robotic design. The field of robotic design encompasses many disciplines, from engineering, programming and design to chemistry, mathematics and physics. Of course you are not expected to know all of these areas in excruciating detail, but it is always good to have a simple grasp of them when designing, prototyping and finally constructing a robotic system. Animatronics / hyper-realistic bio-mimetic robotics differ from traditional robotics in their exterior emulation of naturalistic function and form. For example, a robot may be presented as a metal cube with no reference to animalistic design, whereas animatronics are, by definition, founded on the simulation of living organic forms.
// Excerpt from my Masters thesis on Animatronic Systems (full version available from the University of Huddersfield Repository)
Digication (2015) states that the term 'animatronic' refers to a robot or mechanical construct that artificially simulates the physicality and functions of a living creature. The word derives from the appellation 'Audio-Animatronic', a term coined by the Walt Disney Corporation in the 1960s to describe their innovative, autonomous mechanical theme park attractions. However, it is pertinent to clarify that animatronic character systems existed prior to the 1960s. Marchand (2007) writes that animatronic models (in the form of simple automaton devices) can be traced back to ancient times[1]. The first automaton character implemented in a cinematic feature appeared in the silent film Eagles Nest (1907) in the form of a rudimentary, manually operated mechanical taxidermy eagle (Flueckiger 2011). These simple puppets evolved into larger, more complicated mechanical constructs over time. Finch (1984 p.45) explains that the first large-scale cable-operated automaton was exhibited in the film Die Nibelungen (1924); the model simulated the presence, veracity and realism of a life-sized mythical creature.
Yet sophisticated animatronic characters as we recognise them today (electro-mechanical devices) predominantly emerged in the late 1930s. These autonomous animatronic characters were initially deployed as marketing tools by businesses and showcased in worldwide exhibits and fairs with the objective of attracting consumers into purchasing goods and services[2].
The application of autonomous animatronic systems shifted towards the entertainment industry in the 1950s. Efteling (1952), in the Netherlands, became the first theme park in history to exhibit animatronic characters as part of its fairy-tale attraction environments (Efteling 2015). In 1963 the audio-animatronic Tiki Birds became one of the primary visitor attractions at Disneyland (Hobbs 2015). A year later the Walt Disney Corporation implemented the first electro-mechanical animatronic character in a motion picture: Cashwell (2014 p.4), Trahan et al (2004 p.55) and Ichbiach (2005 pp.104-120) explain that Walt Disney's Mary Poppins (1964) is widely acknowledged within the film industry as the first cinematic feature in history to utilise a sophisticated audio-animatronic character (a robin). The film was a box office hit, ranking number one in the US box office charts of 1964.
However, animatronic characters did not fully re-establish themselves at the forefront of the cinematic special effects industry until eleven years later, in the Hollywood blockbuster Jaws (1975). Choi (2010), Benchley (2012 pp.11-24) and Gottlieb (2005 pp.5-12) elucidate that the practical animatronic special effects presented in Jaws gave the cinematic audience a new and unprecedented psychological and physiological visceral experience beyond any previous cinematic encounter. The prodigious success of the film's practical special effects drove the development and application of animatronic character creation into exciting new territories, placing animatronic special effects among the foremost creative and innovative exponents in the history of cinema.
In contrast, Finch (2013 pp.6-34), Beck (2004 pp.344-373) and Grant et al (2014 p.138) stipulate that Computer Generated Imagery (CGI) animation is the process of virtually simulating objects, characters and environments using computerised optical technology. The first film to feature a three-dimensional (3D) CGI artefact was Futureworld (1976); however, the CGI effects presented in the film lacked definition and substance.
A year later, the Hollywood cinematic feature Star Wars Episode IV (1977) became the second film to successfully implement 3D CGI animation. The authenticity of the highly defined graphics exhibited in the feature established the limitless possibilities of CGI as an immersive visual sensory tool and contributed profoundly to the overnight success of the film. The use of CGI has since grown exponentially, and it is still considered the primary medium for creating characters and environments in modern cinematic features (Haywood 2013 p.70; Osmond 2010 p.84; Reidy 2008).
Despite the apparent dominance of CGI character systems, Rickett (2006), McDougal (1995 p.636), Robin (1995) and Skarro (2015) argue that practical animatronic special effects have historically played an extensive role in the development and creation of imaginative and innovative cinematic characters. These electro-mechanical models present highly detailed aesthetic verisimilitude and simulative operation at close view, in comparison with other widely utilised and advanced modern virtual mediums such as CGI and hybrid formats (motion capture systems and live action integration software).
Frans et al (2005 p.251), StarBurst (2014 p.20), Camillo (2014), Amidi (2014) and Cornea (2007 p.273) further suggest that animatronic systems produce a definition, materiality and functionality that present the cinematic audience with a heightened visceral sensory experience, one with the potential to cause actual psychological and physiological bodily effects. When similar simulative operations are rendered through virtual media, however, there is an apparent tendency to lose the essence, presence and quality that is encapsulated and grounded in the organic actuality of living creatures, physical objects and natural environments. Newton (2014), Dixon & Foster (2008 pp.380-383), Hollinger & Gordon (2002 p.81) and Hayward (2013 pp.20-21) concur that animatronic characters continue to produce new and innovative technologies and experiences that express visual sensory superiority in aesthetics, operation and functionality over the most advanced modern CGI and hybrid formats.
[1] Han Fei Zi (1066) Mechanical animal water clock; Leonardo Da Vinci (1515) Automaton Lion
[2] Sparko the Dog (1939); Pastilles Valda (1930s) 'circus' advertising panel; Vaudeville Circus Display (1930s)
So let's start making!
Step 1: Design
// Examples of some of my 3D design work for this project (2016)
Let's start! First of all we need to think about our character, its environment and its purpose. From these attributes we can work out the system's capacity, operation and requirements. I recommend starting your design process with good old-fashioned pen and paper. A good exercise is to begin by drawing characters you are familiar with, e.g. childhood cartoon favourites or past designs, and to visualise how each character functions kinetically. This may sound like a simple process, but it is crucial: it is the first step in visualising how the system will perform. Imagine coming face to face with your character, and think about scale, materiality and function. I was once told I had the best workshop, with all the cutting-edge technology at my 24-hour disposal: all I had to do was imagine it in my head and run a simulation of the procedure.
Equipment:
Thermoplastic resin
A mould - polystyrene head
Silicone Glue - caulk
Pigment powder
False Teeth
To make the outer skull for this project the process is fairly simple. I made the skeletal form out of thermoplastic, taking the shape from the internal dimensions of a resin head cast I had made for a different project (you could even use your own head as a base form), and it was a great start for this system. Of course you can make your own using a cast of whatever you have designed (3D print, clay sculpt or life-cast) and then check the size by comparing the exoskeleton against the final form. You can use whatever material you think is most suitable; standard casting resin is the norm, but for the sake of this project I thought thermoplastic would be a safe alternative for beginners.
The jaw is made using the same process, building up the levels in the plastic as you go; if you run boiling water over the areas you are working on, it fuses those areas together and the material starts to feel like working with clay. The gums are made from silicone glue pigmented with red dye; you can get this on eBay and it is all very cheap. The teeth themselves are a set I bought from an online auction, as for the purpose of this build it was easier to buy in the basic elements and spend the time on the mechanism instead. However, I do have a previous Instructable on a robot I made called Aldous, where I made his teeth out of thermoplastic resin. That is a good way of going about the process compared with using casting resin, as it is less messy and you are guaranteed the results you want because you can make changes at every step. If you want to find out how to make a good set of realistic monster or human teeth, the Stan Winston Studio tutorials have some nice videos on the subject.
Step 2: System Design
Now we are not just thinking about our character's form, we are imagining it in its natural environment. If, for instance, our character is a humongous hulking monster, it will have constraints in terms of movement and momentum, and these physicalities will apply to our system design. We can achieve a more natural operation if we prototype the system before implementing it in the final project. You can do this on paper, as a practical prototype or as a virtual simulation, and each of these approaches has advantages and disadvantages. Practical prototyping takes time and money but gives you a greater feel for the mechanism and its capacity and limitations; virtual simulation has a tendency to misrepresent dynamics and material or mechanical constraints, such as weight and other external factors that may cause system failure.
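If you go the practical prototyping route with servos, it helps to sweep each mechanism slowly through its travel before committing to the final linkage, watching for binding or over-extension. Below is a minimal Arduino test sketch of the kind I mean; the pin number and angle limits are placeholder assumptions you would replace with values from your own rig.

#include <Servo.h>

Servo testServo;           // the servo driving the prototype linkage
const int SERVO_PIN = 9;   // placeholder pin, match your wiring
const int MIN_ANGLE = 40;  // placeholder travel limits, find by experiment
const int MAX_ANGLE = 140;

void setup() {
  testServo.attach(SERVO_PIN);
}

void loop() {
  // sweep slowly up and back down so you can watch the mechanism for binding
  for (int a = MIN_ANGLE; a <= MAX_ANGLE; a++) { testServo.write(a); delay(15); }
  for (int a = MAX_ANGLE; a >= MIN_ANGLE; a--) { testServo.write(a); delay(15); }
}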
Suggested Design Equipment for making your own model
Pen and Paper
Autodesk Inventor Pro
Fritzing
Siemens Solid Edge
Project Equipment List for X55A:
Spektrum DS821 Digital Servo x 6
Micro Servo x 4
Arduino Uno
Servo Shield
Splitter Cables (futaba)
Extension Cables (futaba)
Bass Guitar A string
5 cm springs
8 hole meccano strip x 2
5 hole meccano strip x 2
10 hole meccano strip x 2
Milliput
M3 Bolts and Nuts x 50 pack
Tri-bind fishing wire - 100lb
Aluminium Servo Horns x 6
4 x multifunction aluminium brackets
2 x long aluminium bracket
Push Pull rods x 4
Loop screws x 4
Thermoplastic resin
Meccano small L brackets
Meccano small U bracket
Black m3 screws x 20
Rubber supports x 2
Small Aluminium U bracket x 2
m4 washers x 8
m4 nuts and bolts x 8
Aluminium L bracket x 2
Step 3: Jaw and Tongue Mechanism
Equipment
Spektrum Servo x 2
Meccano strips 2 x 8
Meccano strips 2 x 5
U bracket x 2
Thermoplastic
Multi-functional bracket x 2
m3 nuts and bolts
Cable tie
Servo horn x 2
Silicone Glue
Cellophane
To start with, connect two servos into two multi-functional brackets and connect a U bracket to the end of one of them in the standard formation. The next stage is to attach the second servo to the multi-functional bracket of the first servo using a circular aluminium servo horn (as pictured). Then attach another U bracket to the multi-functional bracket of the second servo (as pictured); this gives you a set of jaws that can be extended to fit the teeth form. Using the meccano strips, attach each strip to the side of both U brackets and secure with m3 nuts and bolts. The false teeth then attach to the ends of these strips using thermoplastic or milliput to form a secure bond. This arrangement gives you both the up/down jaw movement and the left/right operation.
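Before fitting the teeth and gums it is worth bench-testing the two jaw servos together. The sketch below is only a rough test harness I would suggest, not the final control code (that comes in Step 7); the pins are assumptions to be matched to your own wiring, and the open/close range echoes the 140 + random(60) values used in the Step 7 script.

#include <Servo.h>

Servo jawUpDown;    // servo driving the open/close axis
Servo jawLeftRight; // servo driving the sideways axis

void setup() {
  jawUpDown.attach(12);   // placeholder pins
  jawLeftRight.attach(8);
}

void loop() {
  jawUpDown.write(140 + random(60));   // open a random amount, as in Step 7
  jawLeftRight.write(80 + random(20)); // slight sideways motion (assumed range)
  delay(150);
}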
//Tongue
The two videos provided in this step are examples of the tongue mechanism used in this project. To start with, take the cable tie and attach the small meccano L brackets, equally spaced from the start, securing them in place with m3 nuts and bolts. The arrangement should look something like this: (< _L__L__L__). The 100lb fishing wire is then threaded through the loops and tied with a knot at the end of the tongue. As you pull the string while holding the end of the cable tie in your hand, the point at the other end (the tongue tip) flexes upwards.
This is the foundation of the tongue mechanism. To seal this mechanism safely from the silicone covering, wrap it in a thin layer of cellophane. Once you have secured the system, mix red and white powder paint together, add the silicone glue and mix into a paste. Apply the compound to the cellophane using your fingers and smooth it down using a small drop of oil. This gives you a great smooth finish, and you can also add detail using a wooden skewer.
Attach the loose end of the fishing wire to a servo horn, secure a servo to the underside of the jaw mechanism using the bracket holes on the sides of the servo (m4 bolts), and screw in the servo arm. Now, when the servo arm moves back and forth it pulls on the fishing wire, raising and lowering the silicone tongue (see video).
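Because the servo arm pulls a wire rather than driving a rigid link, it is sensible to clamp the commanded angle in software so the wire is never over-tensioned. A minimal sketch of the idea, assuming a placeholder pin and angle limits you would find by testing:

#include <Servo.h>

Servo tongueServo;
const int REST = 90;      // arm position with the wire just slack (assumed)
const int MAX_PULL = 130; // furthest safe pull before the wire strains (find by testing)

void setup() {
  tongueServo.attach(6);  // placeholder pin
}

// flex: 0 = tongue relaxed, 100 = full upward curl of the tip
void setTongue(int flex) {
  int angle = map(flex, 0, 100, REST, MAX_PULL);
  tongueServo.write(constrain(angle, REST, MAX_PULL));
}

void loop() {
  setTongue(100); delay(600); // curl the tip up
  setTongue(0);   delay(600); // relax
}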
Step 4: Lips and Cheeks
Equipment
Fishing wire
Springs
Thermoplastic
Bass guitar string
10 hole meccano strips x 4
M4 nuts and bolts
m4 washers
Aluminium servo arms
4 x Spektrum servos
Flat head m3 nuts x 2
//Lips
The lips are a really simple but effective mechanism. Cut the bass guitar string into two 15 cm lengths and secure one to each side of the meccano jaw brackets using thermoplastic. You need to leave a 5 mm gap all the way around to allow the guitar string to pass the silicone gums on the jaw mechanism. Attach 4 x small meccano L brackets to the top and bottom plastic areas of the false teeth using small m2 screws. On the top two L brackets thread 2 rubber stoppers (as pictured); these stop the top springs riding up over the top of the small upper L brackets. Attach the fishing wire to the bass guitar string in 4 places (2 top, 2 bottom) and secure in place with thermoplastic. Thread the springs through the fishing wire on the upper jaw. Drill two small holes in the resin of the bottom jaw and thread the string for the bottom lips through the holes. Attach a servo horn to the end of each fishing wire; when you pull the wire the lips should raise and lower. Take the bottom lip servos and secure them together in parallel using the 10 hole meccano strip. To ensure they stay in place use m4 plastic washers, and seal the strips to the bottom of the jaw using thermoplastic resin. Secure the top two servos to the upper part of the jaw (see picture) using the remaining meccano L brackets threaded through the servo holes and secured in place with m4 nuts and bolts.
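If your paired lip servos end up mounted facing in opposite directions, one channel has to run mirrored or the two sides will fight each other. A short sketch of one way to handle that, with assumed pins and a hypothetical setLip() helper:

#include <Servo.h>

Servo lipLeft;
Servo lipRight; // mounted facing the other way, so it runs mirrored

void setup() {
  lipLeft.attach(10); // placeholder pins
  lipRight.attach(11);
}

// raise or lower both sides of a lip by the same amount
void setLip(int angle) {
  lipLeft.write(angle);
  lipRight.write(180 - angle); // mirror for the opposed servo
}

void loop() {
  setLip(60); delay(500); // lip raised
  setLip(90); delay(500); // lip at rest
}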
//Cheeks
The cheeks are simply two servos attached to each side of the jaw frame with 2 x 10 hole meccano strips that are bent into a semicircle. At the end of each strip attach a cheek structure (formed from milliput) and secure it in place using the flat head m3 bolts.
Step 5: Eyes
Equipment
4 x micro servos
2 x splitter Y cables
4 x plastic servo arms
2 x ball and socket joints
4 x push pull rods
Artificial Eyes
Glue
2 x L brackets
2 x 10 cm Meccano Rods
To start with, take the two black aluminium robotic brackets and secure the 10 cm rods in place using washers and glue. On the end of each rod secure a ball and socket joint using the screws provided with the part. The servos need to work in formation, so under each L bracket secure a micro servo in place using thermoplastic (as pictured), then at the right-hand side of each L bracket secure the other two micro servos together. Attach servo arms to each micro servo and screw in the end of each push pull rod using m2 screws. Take the artificial eyes (eBay taxidermy eyes) and glue one onto each end of the ball and socket joints using gorilla glue. Screw 4 small screw loops (as pictured) into the resin form at the back of each eye (if you are using glass eyes, just add a layer of thermoplastic to the back of them and use that to screw the loops in place). The loops should be attached at 12 o'clock and 3 o'clock on each eye so the pair function together like human eyes. Once connected to the splitter cables, the system should function as in the video provided.
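Because the Y splitter cables feed each pair of micro servos the same signal, both eyes track together from a single output per axis. To check the action you can fire random glances with dwell times in between, much as the Step 7 script does with its 25 + random(60) horizontal range; the pins here are assumptions.

#include <Servo.h>

Servo eyesLeftRight; // both horizontal servos share this signal via a Y cable
Servo eyesUpDown;    // both vertical servos share this one

void setup() {
  eyesLeftRight.attach(3); // placeholder pins
  eyesUpDown.attach(4);
}

void loop() {
  eyesLeftRight.write(25 + random(60)); // same horizontal range as Step 7
  eyesUpDown.write(140 + random(60));   // vertical range echoes the Step 7 pairing
  delay(400 + random(1600));            // dwell between glances, like saccades
}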
Step 6: Eyebrows and Eyelids
Equipment
Thermoplastic
m3 black nuts and bolts
bass guitar string
3 x servos
milliput
The eyebrows are simply made by carving two rims out of the thermoplastic skull above each eye, then using white milliput to make two eyebrow forms that fill in the gaps. The outer ends of the milliput need to be anchored to the skeleton using a meccano L bracket and m3 nuts and bolts; the inner ends of each eyebrow are the parts that will be attached to a servo. A single servo controls both eyebrow functions together: simply take the bass guitar string and make a coil at each end by wrapping it around a small screwdriver, which forms a push-pull rod mechanism that can also flex slightly. Secure one coiled end of each string to a milliput eyebrow using nuts and bolts (as pictured); the other two ends attach to a servo horn, one on top of the other, held in place with a washer and m3 bolt.
The eyelids are formed by taking a positive from the artificial eye form using thermoplastic; cut this in half to give you the upper and lower eyelids. Run a small brass rod through each side of the eyes so the lids function as shutters, connect them together using the standard push pull rod formation, and use two servos to power the upper and lower sections of the eyes.
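With the eyelid servos in place you can try a simple blink cycle: a real blink closes fast, holds for a fraction of a second and reopens, with an irregular gap before the next one. A sketch under assumed pins and shutter angles:

#include <Servo.h>

Servo upperLids;
Servo lowerLids;
const int UP_OPEN = 60,  UP_SHUT = 120; // assumed shutter angles, set by eye
const int LO_OPEN = 120, LO_SHUT = 60;

void setup() {
  upperLids.attach(7); // placeholder pins
  lowerLids.attach(2);
}

void blinkOnce() {
  upperLids.write(UP_SHUT);
  lowerLids.write(LO_SHUT); // snap shut
  delay(120);               // brief hold
  upperLids.write(UP_OPEN);
  lowerLids.write(LO_OPEN); // reopen
}

void loop() {
  blinkOnce();
  delay(2000 + random(3000)); // irregular interval between blinks
}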
Step 7: Control R.C / Arduino
In the spirit of open-source sharing, and to get as many people as possible involved in animatronic character systems, I have decided to include my base script for all my projects for you to adapt as you need. Please credit me if you decide to use this in your own project.
..............................................................................................................................................................
Arduino Script
#include <Servo.h>

int c=0;
int pos = 0;
int talkmode=0;
int oldtalkmode=0;
long rfac;
long mpos;
int eyedel=0;
int pose =140;
int poslip =90;
int eyeposh=57;
int eyeposhinc=1;
int posbot=90;
//int stopy=90;

Servo myservo;  // create servo object to control a servo
                // a maximum of eight servo objects can be created
Servo myservo2;
Servo myservo3;
Servo myservo4;
Servo myservo5;
Servo myservo6;
Servo myservo7;
Servo myservo8;

int talkcount=255;
//eventually use audio stop trigger
int doclosemouth=0;
int turnmode=0;
int turnmode2=0;
int globmode=1; //1 is move about, 2 is eye twitch
int wcount;
int pcount;
int mystart=1;
int notalkcount=0;

void setup(){
  Serial.begin(9600);
  wcount=0;
  pcount=0;
  pinMode(1,OUTPUT);
  pinMode(8,OUTPUT);
  pinMode(5,OUTPUT);
  pinMode(4,OUTPUT);
  pinMode(13,OUTPUT);
  pinMode(11,OUTPUT);
  pinMode(10,OUTPUT);
  pinMode(12,OUTPUT);
  pinMode(3,OUTPUT);
  // pinMode(A3,OUTPUT);
  // pinMode(A4,OUTPUT);
  myservo.attach(1);   // attaches the servo
  myservo2.attach(8);  //left right
  myservo3.attach(5);  //up down
  myservo4.attach(3);  //eyes left and right
  myservo5.attach(4);  // paired with myservo4 in the random eye moves
  myservo6.attach(12); // drives 'pose'
  myservo7.attach(11); // drives 'poslip'
  myservo8.attach(10); // drives 'posbot'
  int oldtalkmode=0;
  // myservo3.attach(A3);
  // myservo4.attach(A4);
}

void loop(){
  // if(talkmode==1){
  //   pose=140;
  //   poslip=90;
  //   posbot=100;
  // }
  // if(mpos>131){
  //   notalkcount++;
  // }else{
  //   notalkcount==0;
  // }
  // Serial.print(notalkcount);
  // if(notalkcount>2000){
  //   talkmode=0;
  //   oldtalkmode=0;
  //   notalkcount=0;
  // }

  // occasionally dart the eyes to a random position
  int t=random(2000);
  int pos=random(400);
  if(t>1998){
    if(pos>195){
      int v=25+random(60);
      int pos2=140+random(60);
      myservo4.write(v);
      myservo5.write(pos2);
    }
  }

  // read single-byte commands from the host:
  // 0-7 set talkmode, 9-69 set the neck pan, 71-200 set the neck tilt
  while(Serial.available()>0){
    int din=Serial.read();
    if(talkmode<9) oldtalkmode=talkmode;
    if(din<8) talkmode=din;
    if(din>8 && din<70) turnmode=din;
    if(din>70 && din<201) turnmode2=din;
    // if(din==201 && talkmode==0) {
    //   globmode=2;
    //   mpos=134;
    // }
    // if (globmode=1);
    // Serial.print("TM="+talkmode);
    // if(globmode==1){
    //   eyeposh=57;
    //   myservo4.write(eyeposh);
    // }
  }

  globmode=1; //force it into this mode

  if(globmode==1){
    //movement
    if(talkmode==1){
      //wait for start of talking
      if(mystart==1){
        int dropout=0;
        while(analogRead(3)==0){
          updatestuff();
        }
        mystart=0;
        // Serial.println("hello");
      };
      //count pauses
      if(mystart==0){
        int v=analogRead(3);
        // Serial.print("v:");
        // Serial.print(v);
        // Serial.print(" ");
        if(v==0){
          pcount++;
          if(pcount>10){
            mystart=1;
          }
        }else{
          doclosemouth=0;
          pose=140;
          poslip=90;
          posbot=100;
          if(pcount>5){
            pcount=0;
            wcount++;
            doclosemouth=1;
            // Serial.println(wcount);
            pcount=0;
            // pose=140;
            // poslip=90;
            // posbot=100;
            // mystart=1;
          }
        } //?
      } //?
      //talking
      // delay(10+random(2));
      pose=140+random(60);
      poslip=2+random(32);
      posbot=50+random(30);
      //delay (100);
      myservo6.write(pose);
      myservo7.write(poslip);
      myservo8.write(posbot);
      rfac=random(100);
      if(rfac<45){
        // mpos=random(130);
        mpos=99+random(50);
        delay(60+random(40));
        // delay(random(11));
      }
    }else{
      //core bit
      if(doclosemouth==1){
        mpos=134;
        pose=140;
        poslip=90;
        posbot=100;
        // myservo8.write(100);
        // myservo6.write(140);
        // myservo7.write(90);
        // myservo8.write(90);
      }
    }

    int r=analogRead(5);
    if(r<1000){
      mpos=133;
      pose=140;
      poslip=90;
      posbot=90;
      // myservo8.write(100);
      talkmode=0;
    }
    if(talkmode==0){
      // myservo6.write(140);
      // myservo7.write(90);
      // myservo8.write(100);
      pose=140;
      poslip=90;
      posbot=90;
      mpos=132; // close mouth
    }
    if(turnmode>9 && turnmode<70){ //neck left/right
      myservo2.write(turnmode);
      // Serial.print("TM="+turnmode);
      // talkmode=oldtalkmode;
    }
    if(turnmode2>70){ //neck up/down (tilt value arrives offset by 70)
      int sv=turnmode2-70;
      myservo3.write(sv);
      // Serial.print("TM="+turnmode);
      // talkmode=oldtalkmode;
    }
    if(mpos>130 && talkmode>0) myservo4.write(57); //up/down here
    myservo.write(mpos);
  } //end of globmode 1

  if(globmode==10){ //never = 10, so disabled
    // int v=analogRead(3);
    // if(v>20){
    //   globmode=1;
    //   talkmode=1;
    // }
    updatestuff();
    //start of eye loop
    eyedel++;
    if(eyedel==1000){
      eyedel=0;
      myservo4.write(eyeposh);
      eyeposh=eyeposh+eyeposhinc;
      if(eyeposh==90 || eyeposh==25){
        eyeposhinc=eyeposhinc*-1;
        int d=250;
        d=d+random(1750);
        delay(d);
      }
    }
  }
}

void updatestuff(){
  int t=random(2000);
  if(t>1998){
    int v=25+random(60);
    myservo4.write(v);
    int pos=random(400);
    if(pos>195){
      int pos2=140+random(60);
      myservo5.write(pos2);
    }
  }
  // if(mpos>131){
  //   notalkcount++;
  // }else{
  //   notalkcount==0;
  // }
  // if(notalkcount>2000){
  //   talkmode=0;
  //   oldtalkmode=0;
  //   notalkcount=0;
  // }
  while(Serial.available()>0){
    int din=Serial.read();
    // if(talkmode<9) oldtalkmode=talkmode;
    // if(din<8) talkmode=din;
    // if(din==1){
    //   globmode=1;
    //   talkmode=1;
    //   eyeposh=57;
    //   myservo4.write(eyeposh);
    // }
    if(din>8 && din<70) turnmode=din;
    if(din>70 && din<201) turnmode2=din;
    // Serial.print("TM="+turnmode);
    // if(din==201 && talkmode==0) globmode=2;
    // if(din==202) globmode=1;
    // if(globmode==1){
    //   eyeposh=57;
    //   myservo4.write(eyeposh);
    // }
  }
  if(turnmode>9 && globmode==1){ //neck left/right
    myservo.write(135);
    // myservo8.write(stopy);
    myservo2.write(turnmode);
    // Serial.print("TM="+turnmode);
    // talkmode=oldtalkmode;
  }
  if(turnmode2>70 && globmode==1){ //neck up/down
    int sv=turnmode2-70;
    myservo3.write(sv);
    myservo6.write(140);
    myservo7.write(90);
    myservo8.write(90);
    // Serial.print("TM="+turnmode);
    // talkmode=oldtalkmode;
  }
}
..............................................................................................................................................................
Processing Script
import processing.serial.*;
/* A little example using the classic "Eliza" program.
   Eliza was compiled as a Processing library, based on the java source code
   by Charles Hayden: http://www.chayden.net/eliza/Eliza.html
   The default script that determines Eliza's behaviour can be changed with
   the readScript() function. Instructions to modify the script file are
   available here: http://www.chayden.net/eliza/instructions.txt */
// max is 67 on sweep

import codeanticode.eliza.*;

Serial myport;
int dummy=8;
int sendx=0;
Serial myport2; // neck motor
int drawskeleton=0; //1 / 0
int lastsentx=-1;
int lastsenty=-1;
int archsenty=-1;
int archsentx=-1;
int eyecount=0; //used for sampling movement

Eliza eliza;
PFont font;
String elizaResponse, humanResponse;
boolean showCursor;
int lastTime;
PImage bg1a;
int closestValue;
int closestX;
int closestY;
int lastcx;
int lastcy;
float targx;
float targy;

//simple openni
import SimpleOpenNI.*;

float globx, globy;
float oldglobx, oldgloby;
SimpleOpenNI context;
color[] userClr = new color[]{
  color(255,0,0),
  color(0,255,0),
  color(0,0,255),
  color(255,255,0),
  color(255,0,255),
  color(0,255,255)
};
PVector com = new PVector();
PVector com2d = new PVector();
//end simpleopenni
void setup() {
  size(1200, 786);
  println(sketchPath);

  //si
  context = new SimpleOpenNI(this);
  if(context.isInit() == false) {
    //println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }
  // enable depthMap generation
  context.enableDepth();
  // enable skeleton generation for all joints
  // context.enableUser();
  background(200,0,0);
  //end si

  bg1a=loadImage("bg1.jpg");
  //println(Serial.list());
  myport=new Serial(this, Serial.list()[5],9600);
  //myport2=new Serial(this, Serial.list()[??????],9600);

  // When Eliza is initialized, a default script built into the
  // library is loaded.
  eliza = new Eliza(this);
  // A new script can be loaded through the readScript function.
  // It can take local as well as remote files.
  eliza.readScript("scriptnew.txt");
  //eliza.readScript("http://chayden.net/eliza/script");
  // To go back to the default script, use this:
  //eliza.readDefaultScript();

  font = loadFont("Rockwell-24.vlw");
  textFont(font);
  printElizaIntro();
  humanResponse = "";
  showCursor = true;
  lastTime = 0;
}
void draw() {
  // drain any bytes echoed back from the Arduino
  while(myport.available()>0){
    int dat=myport.read();
    /// println(""+dat);
  }

  eyecount++;
  //println("EYECOUNT:"+eyecount);
  if(eyecount>=30){
    println("diffx="+abs(closestX-lastcx)+" diffy="+abs(closestX-lastcy));
    // println(archsenty+" "+closestY+" "+archsentx+" "+lastsentx);
    //if(archsenty==-1) archsenty=lastsenty;
    //if(archsentx==-1) archsentx=lastsentx;
    if(abs(closestY-lastcy)<30 && abs(closestX-lastcx)<30){
      // archsenty=lastsenty;
      // archsentx=lastsentx;
      // for(int lop=0;lop<100;lop++){
      println("WOULD GO INTO EYE TWITCHING");
      // myport.write(201);
      lastcx=closestX;
      lastcy=closestY;
    }else{
      //if(abs(lastsenty-archsenty)>45 && abs(lastsentx-archsentx)<45){
      println("WOULD GO BACK TO MOVEMENT");
      lastcx=closestX;
      lastcy=closestY;
      // myport.write(202);
      // }
    }
    eyecount=0;
  }

  image(bg1a,0,0,width,height);
  //background(102);

  if(globx!=oldglobx){
    sendx=int(abs(globx));
    // sendx=8+(sendx/8);
    oldglobx=globx;
    // myport.write(sendx);
  }
  if( sendx>9 && lastsentx!=sendx){
    //println("sending neck positions"+sendx);
    if(abs(lastsentx-sendx)>35) eyecount=145;
    myport.write(sendx);
    // UNCOMMENT FOR PEOPLE TRACKING lastsentx=sendx;
  }
  //println("neck y:"+int(globy));
  if(random(10)>4){
    int outy=70+int(globy);
    if(outy>200) outy=200;
    //println("outy="+outy);
    //HERE IS THE LINE SENDING THE NECK Y COORDINATES
    if(lastsenty!=outy){
      if(abs(lastsenty-outy)>35) eyecount=145;
      myport.write(outy);
      //println("OUTY:"+outy);
      lastsenty=outy;
    }
  }

  //DUMMY SWEEP STARTS HERE
  if(random(10)>2){
    // myport.write(dummy);
    ////println("DUMMY:"+dummy);
    //dummy++;
    //if(dummy>170) dummy=9;
    //myport.write((70+dummy));
    ////println("neckyyyyyyyy"+(70+dummy));
  }
  //DUMMY SWEEP ENDS HERE

  fill(255);
  stroke(111);
  text(elizaResponse, 30, 450, width - 40, height);
  fill(0);
  int t = millis();
  if (t - lastTime > 500) {
    showCursor = !showCursor;
    lastTime = t;
  }
  if (showCursor) text(humanResponse + "_", 30, 600, width - 40, height);
  else text(humanResponse, 30, 600, width - 40, height);

  // simpleopennidrawmethod();
  closestpixdrawmethod();
}
void closestpixdrawmethod(){
  closestValue = 8000;
  context.update();
  // get the depth array from the kinect
  int[] depthValues = context.depthMap();
  // for each row in the depth image
  for(int y = 0; y < 480; y++){
    // look at each pixel in the row
    for(int x = 0; x < 640; x++){
      // pull out the corresponding value from the depth array
      int i = x + y * 640;
      int currentDepthValue = depthValues[i];
      // if that pixel is the closest one we've seen so far
      if(currentDepthValue > 0 && currentDepthValue < closestValue){
        // save its value
        closestValue = currentDepthValue;
        // and save its position (both X and Y coordinates)
        closestX = x;
        closestY = y;
      }
    }
  }
  float scfac=67.0/640;
  globx=(closestX*scfac)*.7;
  targy=(closestY*scfac)*3.2;
  globy=globy+((targy-globy)/8);
  // globy=targy;
  // //println(globx);
  //draw the depth image on the screen
  // image(kinect.depthImage(),0,0);
  // draw a red circle over it,
  // positioned at the X and Y coordinates
  // we saved of the closest pixel.
  // fill(255,0,0);
  // ellipse(closestX, closestY, 25, 25);
}
void keyPressed() {
  if ((key == ENTER) || (key == RETURN)) {
    //println(humanResponse);
    //first scan for keywords
    elizaResponse = eliza.processInput(humanResponse);
    //println(">> " + elizaResponse);
    String[] out={elizaResponse};
    saveStrings("/Users/carlstrathearn/Desktop/test.txt",out);
    delay(10);
    //println(sketchPath+"/data/applescriptbridge.app");
    open(sketchPath+"/data/applescriptbridge.app");
    myport.write(1);
    humanResponse = "";
  } else if ((key > 31) && (key != CODED)) {
    // If the key is alphanumeric, add it to the String
    humanResponse = humanResponse + key;
  } else if ((key == BACKSPACE) && (0 < humanResponse.length())) {
    char c = humanResponse.charAt(humanResponse.length() - 1);
    humanResponse = humanResponse.substring(0, humanResponse.length() - 1);
  }
}

void printElizaIntro() {
  String hello = "Hello.";
  elizaResponse = hello + " " + eliza.processInput(hello);
  //println(">> " + elizaResponse);
}
void simpleopennidrawmethod(){
  context.update();
  // //println("gx="+globx+" GY="+globy);
  // draw depthImageMap
  //image(context.depthImage(),0,0);
  if(drawskeleton==1) image(context.userImage(),0,0);
  // draw the skeleton if it's available
  int[] userList = context.getUsers();
  for(int i=0;i<userList.length;i++){
    if(context.isTrackingSkeleton(userList[i])){
      stroke(userClr[(userList[i] - 1) % userClr.length]);
      drawSkeleton(userList[i]);
      // draw the centre of mass
      if(context.getCoM(userList[i],com)){
        context.convertRealWorldToProjective(com,com2d);
        stroke(100,255,0);
        strokeWeight(1);
        beginShape(LINES);
        vertex(com2d.x,com2d.y - 5);
        vertex(com2d.x,com2d.y + 5);
        vertex(com2d.x - 5,com2d.y);
        vertex(com2d.x + 5,com2d.y);
        endShape();
        fill(0,255,100);
        text(Integer.toString(userList[i]),com2d.x,com2d.y);
      }
    }
  }
}
void drawSkeleton(int userId) {
  // to get the 3d joint data
  /*
  PVector jointPos = new PVector();
  context.getJointPositionSkeleton(userId,SimpleOpenNI.SKEL_NECK,jointPos);
  //println(jointPos);
  */
  //println(SimpleOpenNI.SKEL_HEAD);
  if(random(100)>97){
    PVector jointPos = new PVector();
    context.getJointPositionSkeleton(userId,SimpleOpenNI.SKEL_HEAD,jointPos);
    //println(jointPos.x);
    //println(jointPos.y);
    //println(jointPos.z);
    globx=jointPos.x;
    globy=jointPos.y;
  }

  if(drawskeleton==1){
    context.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
    context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
    context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
    context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
    context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
    context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
    context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
    context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
    context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
    context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
    context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
    context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
    context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
    context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
    context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
  }
}
// -----------------------------------------------------------------
// SimpleOpenNI events

void onNewUser(SimpleOpenNI curContext, int userId) {
  //println("onNewUser - userId: " + userId);
  //println("\tstart tracking skeleton");
  curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId) {
  //println("onLostUser - userId: " + userId);
}

void onVisibleUser(SimpleOpenNI curContext, int userId) {
  ////println("onVisibleUser - userId: " + userId);
}
..............................................................................................................................................................
AppleScript
set theVoices to {"Alex", "Bruce", "Fred", "Kathy", "Vicki", "Victoria"}
set thePath to (path to desktop as Unicode text) & "test.txt"
set the_file to thePath
set the_text to (do shell script "cat " & quoted form of (POSIX path of the_file))
set the clipboard to the_text
set theSentence to the clipboard
log (theSentence)
say theSentence using ("Bruce") speaking rate 140 modulation 5 pitch 15
on readFile(unixPath)
return (do shell script "cat /" & unixPath)
end readFile
To use voice recognition, simply activate the dictation function on your Apple Mac from the system menu and output it to the Processing / Eliza chat interface instead of using a keyboard. (You will need to set up the microphones in the Kinect sensor for this to work.)