Introduction: Julia's Eyes – a Sound-Reactive Cinemagram

Whoop Whoop goes the Eye.

On my endless morning train rides, I once had a vision while snoozing to some awesome dubstep tunes: what if a photo of a pair of eyes reacted to sound in different ways, with the pupil and the iris changing shape and color according to different frequencies?
What a great coincidence that I had learned a super easy programming language two weeks earlier, and what a great coincidence that we assembled at the university on Saturday and Sunday to produce something awesome for Interface and Communication Design. (In fact, this was the literal task: 'Make something awesome.')

So I crafted this software: free to download, modify, and project onto walls.

Step 1: Downloading the Primary Software

Case 1: You want to use the software with mouse and keyboard.

Download the ZIP file here: http://www.kamibox.de/files/Julias_Eyes.zip

The ZIP file contains 5 versions of the software:
Windows 32
Windows 64
Mac OSX
Linux 32
Linux 64


Choose your Operating System and start the app. The controls are:
+   Increase Sensitivity
-   Decrease Sensitivity
SPACE   Change Mode
MOUSE   Move Eyes

What does it do?
Julia listens to the sound coming into Line In (if your PC has an internal mic, that is Line In)
Her eyes react to sound in 2 different modes:
Size Mode: The left iris changes size according to the amount of bass, the right iris according to mid-low frequencies, the left pupil to mid-high frequencies, and the right pupil to high pitch (all from your point of view).
Color Mode: The same reaction, but the size changes are much subtler; instead, the color of the iris changes.
If you turn the sensitivity down to zero, the eyes stop morphing entirely; only the movement you make with the mouse remains.
Quick loud noises frighten her. She blinks when you clap your hands or the music starts to go wild.

Now, how should this be of great importance for humankind?
It isn't. I thought of it mainly as a beamer projection for DJs and VJs, visualizing their sound in a creepy yet fascinating way. (Thanks, YouTube comment man, for this substantial summary.)
Or, without using the sound feature, you can just project it on a wall, where the eyes follow people. Scary enough.

If you are happy with that, that's it.
The next steps cover using an iPhone to control it, changing the code, and making the cursor disappear.

Step 2: Downloading the Secondary Software

Case 2: You want to control the eyes with your iPhone.

In this case, you will have to pay 3,99€ or 4,99$ (?) for TouchOSC for iPhone or iPad. Unfortunately, this is the only option. For Android, it is free, but I am not sure how well it works, as I have no opportunity to test it.

On your PC or Mac, you will have to install the software from http://www.hexler.net, because you can't design the layout of the remote on your iPhone; you have to do it on your computer and transfer it to the iPhone.

What is OSC?
OSC is short for Open Sound Control, but it doesn't just control sound; the term is a bit misleading, I guess. It is often used for sound control because its layouts offer so many sliders and pads, but all it really does is send values between 0 and 1 around your network.

In the ZIP file, you will find a TouchOSC file. Open the document in the TouchOSC editor on your computer and transfer it to your phone, following the TouchOSC instructions.
In TouchOSC, your phone has to be in the same local wireless network; write your computer's IP address or name into the Host box, and write 12000 into the Port (outgoing) box.
If everything went well, you now have the layout on your iPhone. Move a slider; if a green light flashes at the top while you move something, it is sending messages into your network.


When you start Julia's Eyes now, it should work instantly.
The gray pad steers her eyes, the mode button switches the mode (Color / Size), and the sensitivity slider controls... well, the sensitivity. All the way to the left means no reaction to sound at all.


Whether you use your phone or PC to control Julia, I recommend downloading Processing and running the original code, Julias_Eyes.pde, in presentation mode instead: then you don't see the cursor, and the mode button on the iPhone works reliably instead of only about 80% of the time. For that, you need to install the oscP5 library.
Processing is the program and the language I wrote the code in. If you want to modify the code, move on to step 3.

Step 3: Understanding the Code – Preparation

Processing is a very simple programming language for non-programmers (like me). I learned it two weeks ago, so understanding and modifying the code should be no problem for anyone.

But first I would like to tell you what I did before writing the code.
The screens you see below are all the images used. The 16 pictures on the left are the lid movement, and the other 5 wide pictures are the blinking routine. The big picture is the background, and behind it sit the pupil, the iris, and the eye background.
Knocking the background out of each frame and making the frames flow smoothly in motion turned out to be nowhere near the fun I had imagined, so the plan to make 2 faces, a boy and a girl, was rejected out of hand.
The movement pictures were extracted from a video I shot of the eyes, and the big picture is a photo. This is what makes it a cinemagram: everything stays perfectly in place except the objects that move. Taking out the secondary actions, like tiny movements of the eyebrows, confuses and fascinates, making your brain ask whether you would call it a photo or a video.
Summed up, there are 5 layers: 3 for the eyes and 2 for the face.

Step 4: Understanding the Code – OSC

I will not explain how to write a working Processing sketch from scratch; there are better tutorials for that. What I will do is explain the code to people who are already familiar with Processing, but if you only want to change variables, you don't have to understand it completely.
In the code, I commented where the different areas are.
Everything before SETUP just imports libraries and sets up variables.


Important: If you want to run the original file from Processing with OSC, you have to install the oscP5 library.

The OSC part is at the end of the code:

void oscEvent(OscMessage touchField) {

  String addr = touchField.addrPattern();
  float val = touchField.get(0).floatValue();
  if (addr.equals("/1/fader2")){ sensitivity = val; }
  if (addr.equals("/1/toggle2")){ mode2f = val; }

  // Only the slidepad sends two floats; without this guard, a fader
  // message (one float) would fail on get(1) and overwrite the eye position.
  if (touchField.checkTypetag("ff")) {
    xWert = touchField.get(0).floatValue();
    yWert = touchField.get(1).floatValue();
  }
}

All I do is watch for everything that changes in TouchOSC on the iPhone. If the sensitivity fader is moved, its value goes into the variable sensitivity; if the button is pressed, its value goes into mode2f, a variable that can only be 0 or 1.
The x and y values of the slidepad go into the variables xWert and yWert, which are added directly to the position of the eyes. The slidepad on the iPhone sends values from -80 to +80, as defined in the OSC file.
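Since Processing is Java underneath, the routing logic of oscEvent can be restated as a tiny plain Java class and tested outside the sketch. The field names mirror the sketch's variables; the class name OscDispatch is mine:

```java
// A minimal sketch of the oscEvent routing in plain Java.
// The address strings come from the TouchOSC layout in the ZIP file.
public class OscDispatch {
    float sensitivity = 1.0f;  // set by the fader "/1/fader2"
    float mode2f = 0.0f;       // set by the toggle "/1/toggle2", only 0 or 1

    // Route a one-float OSC message by its address pattern,
    // just like the if-chain in oscEvent.
    void handle(String addr, float val) {
        if (addr.equals("/1/fader2")) { sensitivity = val; }
        if (addr.equals("/1/toggle2")) { mode2f = val; }
    }

    public static void main(String[] args) {
        OscDispatch d = new OscDispatch();
        d.handle("/1/fader2", 0.5f);
        d.handle("/1/toggle2", 1.0f);
        System.out.println(d.sensitivity + " " + d.mode2f); // prints "0.5 1.0"
    }
}
```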

Step 5: Understanding the Code – Eye Movement

When the eyes change position, they have to look perspectively correct, which is why I fake a three-dimensional mapping of the pupil and the iris. All I do is change their width depending on the x position, making them narrower when they look to the side, and do the same with the height on the y position. When looking fully to the side, the shape is 50% as wide as when looking straight ahead. In the code, it looks like this:

// EYE BACKGROUND
  image (auge3, width/2-240+xWert, height/2+yWert*0.8, 450, 450);
  image (auge3, width/2+250+xWert, height/2+yWert*0.8, 450, 450);


// IRIS
    breite = -0.00005*sq(xWert)+1;
    hoehe = -0.000008*sq(yWert)+1;

  if (mode2 == true) { 
    tint (255-ton3*30*sensitivity,255-ton2*50*sensitivity,255-ton1*50*sensitivity);
    ton1s = map (ton1s,1,1.2, 1,1.04);
    ton3s = map (ton3s,1,2, 1,1.1);
  }

  image (auge2, width/2-240+xWert, height/2+yWert*0.8, breite*auge2r_sizeX*ton1s, hoehe*auge2r_sizeY*ton1s);

if (mode2 == true) { 
    tint (255-ton4*0*sensitivity,255-ton1*50*sensitivity,255-ton2*50*sensitivity);
    ton2s = map (ton2s,1,1.2, 1,1.04);
    ton4s = map (ton4s,1,2, 1,1.1);
  }

  image (auge2, width/2+250+xWert, height/2+yWert*0.8, breite*auge2r_sizeX*ton2s, hoehe*auge2r_sizeY*ton2s);

  if (mode2 == true) { 
    tint (255,255,255);
  }

  // PUPIL
  image (auge1, width/2-240+xWert, height/2+yWert*0.8, breite*auge1l_sizeX*ton3s, hoehe*auge1l_sizeY*ton3s);
  image (auge1, width/2+250+xWert, height/2+yWert*0.8, breite*auge1r_sizeX*ton4s, hoehe*auge1r_sizeY*ton4s);

The variables breite and hoehe define the distortion that provides the 3D effect. The mathematical function is explained in pic 2. (I initially had the curve mirrored vertically by mistake, so don't be confused.)

mode2 can be true or false; this is Color or Size Mode.
In Color Mode, it changes the red, green, and blue values of the iris tint and additionally damps ton1s, ton2s, ton3s, and ton4s, the variables that change the size of the eyes. So, in Color Mode, the size changes are quite subtle.
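You can verify the 50% claim by plugging the extremes into the two parabolas. Here they are as a standalone Java check; the coefficients are the ones from the sketch, the class name is mine:

```java
// The perspective-distortion parabolas from the IRIS section.
public class Distortion {
    // Width factor: 1.0 looking straight, 0.5 at xWert = +/-100.
    static float breite(float xWert) { return -0.00005f * xWert * xWert + 1; }
    // Height factor: much milder, about 0.92 at yWert = +/-100.
    static float hoehe(float yWert)  { return -0.000008f * yWert * yWert + 1; }

    public static void main(String[] args) {
        System.out.println(breite(0));    // 1.0: full width when looking straight
        System.out.println(breite(100));  // ~0.5: half width when looking fully sideways
        System.out.println(hoehe(100));   // ~0.92: the vertical squish is subtler
    }
}
```

Note how small the vertical coefficient is: the height barely changes, so most of the 3D feel comes from the width distortion.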

Step 6: Understanding the Code – the Lid

When you move your eyes up and down, it would look very strange if the lid didn't move with them. So I made 17 frames of the lid for different vertical positions of the eye, creating a more or less fluid motion. In the picture, you can see where each frame shows up (-100 to +100 are the pixel values of the height).

// LID
   if (yWert <100) {lidZahl = a17;}
   if (yWert <90) {lidZahl = a16;}
   if (yWert <78) {lidZahl = a15;}
   if (yWert <66) {lidZahl = a14;}
   if (yWert <54) {lidZahl = a13;}
   if (yWert <42) {lidZahl = a12;}
   if (yWert <30) {lidZahl = a11;}
   if (yWert <18) {lidZahl = a10;}
   if (yWert <6) {lidZahl = a9;}
   if (yWert <-6) {lidZahl = a8;}
   if (yWert <-18) {lidZahl = a7;}
   if (yWert <-30) {lidZahl = a6;}
   if (yWert <-42) {lidZahl = a5;}
   if (yWert <-54) {lidZahl = a4;}
   if (yWert <-66) {lidZahl = a3;}
   if (yWert <-78) {lidZahl = a2;}
   if (yWert <-90) {lidZahl = a1;}

  if (blinzelt == false) { 
  image (lidZahl, width/2, height/2+12);}

...And in the end, the lid frame is only drawn when she isn't in the middle of a blink.
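The long if-cascade divides the vertical range into 12-pixel bands. If you prefer arithmetic over seventeen ifs, the same mapping fits in one line; a plain Java restatement, where frame numbers 1 to 17 stand in for the images a1 to a17:

```java
// Arithmetic equivalent of the lid-frame cascade.
public class LidFrame {
    // Maps yWert in [-100, 100] to a frame index 1..17
    // using the same 12-pixel bands as the if-chain.
    static int frame(double yWert) {
        int i = 9 + (int) Math.floor((yWert + 6) / 12.0);
        return Math.max(1, Math.min(17, i));  // clamp to valid frames
    }

    public static void main(String[] args) {
        System.out.println(frame(0));    // prints 9: the middle frame
        System.out.println(frame(95));   // prints 17: one vertical extreme
        System.out.println(frame(-95));  // prints 1: the other extreme
    }
}
```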

Step 7: Understanding the Code – Blinking

There are 2 reasons to blink:
when she simply has to blink (once every 4 seconds, on average),
or when there is a loud noise.

The first event is triggered by a random generator. It creates a random number between 0 and 100, and if the number is greater than 99, she blinks. With a framerate of 25, this happens every 4 seconds on average.
Then the code checks whether the vertical eye position is below a specific point, because then the blinking animation needs one frame less. To avoid two blinks at the same time, blinzelt goes true, which stops new random numbers from being generated.
The long row of ifs is the frame animation: it checks which frame has just been shown and sets blinzelt back to false after the animation has finished.

if (blinzelt == false) {
  if (blinzelGenerator < 99) {
    blinzelGenerator = random (100);
  }
  else {
    blinzelt = true;
  }
}

  if(blinzelt == true) {

  if (blinzeln == b6){blinzeln = b2;} 
  if (blinzeln == b5a){blinzeln = b2;}
  if (blinzeln == b5){blinzeln = b6;blinzelt = false; blinzelGenerator = 1;}
  if (blinzeln == b4){
    if (yWert < 40) {
    blinzeln = b5;}
    else {
    blinzeln = b5a; blinzelt = false; blinzelGenerator = 1;}
  }
  if (blinzeln == b3){blinzeln = b4;}
  if (blinzeln == b2){blinzeln = b3;}
  if (blinzeln == b1){blinzeln = b2;}

  image (blinzeln, width/2, height/2+12);
  }
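The once-every-4-seconds figure follows from a 1-in-100 chance per frame at 25 fps, and a quick simulation (plain Java, my own sanity check, not part of the sketch) confirms it:

```java
import java.util.Random;

// Simulates the blink generator: each frame draws a uniform value
// in [0, 100) and blinks when the draw lands at 99 or above.
public class BlinkRate {
    static double secondsPerBlink(int frames, long seed) {
        Random rng = new Random(seed);
        int blinks = 0;
        for (int i = 0; i < frames; i++) {
            if (rng.nextDouble() * 100 >= 99) blinks++;
        }
        return (frames / 25.0) / blinks;  // the sketch runs at 25 fps
    }

    public static void main(String[] args) {
        System.out.println(secondsPerBlink(1_000_000, 42L));  // close to 4.0
    }
}
```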


In the // EQUALIZER section, you also find:

if (nichtBlinzeln > 120) {
    nichtBlinzeln = 1;
  }

  println(nichtBlinzeln);

  if (nichtBlinzeln < 1.5) {
  if (ton1s > 2) {
    blinzelGenerator = 99.5;
    nichtBlinzeln = 100;
  }
  }

  if (nichtBlinzeln > 99) {
  nichtBlinzeln = nichtBlinzeln + 1;
  }

That is the second reason to blink: ton1s (a low frequency) goes over 2 (a high value). Then the blink generator is set to 99.5, which means she should blink, and a counter called nichtBlinzeln counts from 100 to 120 (0.8 seconds) before resetting. This prevents her from blinking constantly when it is very loud.
If you read the code closely, you may have noticed that I didn't use all the blinking frames from the setup; I used some of the up-and-down frames instead. This is just because it looks more fluid.
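The cooldown boils down to a small state machine: a loud frame forces a blink and starts a roughly 20-frame counter, during which further loud frames are ignored. A plain Java restatement (class and method names are mine):

```java
// One-frame step of the loud-noise blink trigger with cooldown.
public class LoudBlink {
    float blinzelGenerator = 1;
    float nichtBlinzeln = 1;

    // Returns true when this frame triggers a blink.
    boolean step(float ton1s) {
        boolean triggered = false;
        if (nichtBlinzeln > 120) { nichtBlinzeln = 1; }      // cooldown finished
        if (nichtBlinzeln < 1.5f && ton1s > 2) {             // loud bass, not cooling down
            blinzelGenerator = 99.5f;                        // force the blink generator over 99
            nichtBlinzeln = 100;                             // start the cooldown counter
            triggered = true;
        }
        if (nichtBlinzeln > 99) { nichtBlinzeln++; }         // count 100..120 (~0.8 s at 25 fps)
        return triggered;
    }

    public static void main(String[] args) {
        LoudBlink lb = new LoudBlink();
        System.out.println(lb.step(3));  // prints true: loud frame triggers a blink
        System.out.println(lb.step(3));  // prints false: still inside the cooldown
    }
}
```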

Step 8: Understanding the Code – Sound

I took this part from an example in the Minim library, which analyzes the sound coming out of or going into the PC. The library comes with Processing; there is no need to download it.

fft.forward(out.mix); 
  ton1 = fft.getBand(5)*sensitivity;
  ton2 = fft.getBand(15)*sensitivity;
  ton3 = fft.getBand(30)*sensitivity;
  ton4 = fft.getBand(40)*sensitivity;

  //println (ton1 + " " + ton2 + " " + ton3 + " " + ton4);

  ton1s = map (ton1, 0,2, 1,1.2);
  ton2s = map (ton2, 0,3, 1,1.2);
  ton3s = map (ton3, 0,3, 1,2);
  ton4s = map (ton4, 0,5, 1,2);

I don’t do very much, I just read different frequencies (5 / 15 / 30 / 40, but don’t ask what frequencies these are representing) and convert them with the map-function into values that make the eyes grow. These values go from 1 to 1.2 with normal sound volume and are multiplied with the normal sizes of the parts.

Step 9: Understanding the Code – Keyboard and Mouse Control

This is a feature I added after finishing the project, because a lot of people probably won't use TouchOSC.

The first step, reading which key was pressed, comes at the end of the code. The darstellungA, -B, and -C variables trigger a text that is shown on the screen.

// K E Y B O A R D
void keyPressed () {
   if (key == ' '){
     if (mode2f > 0){
       mode2f= 0;
       darstellungA = 10;
       darstellungB = 0;
       darstellungC = 0;
   }
     else {
       mode2f = 1;
       darstellungB = 10;
       darstellungA = 0;
       darstellungC = 0;
     }}


   if (key == '+') {
     sensitivity = sensitivity + 0.1;
     darstellungC = 10;
     darstellungB = 0;
     darstellungA = 0;
   }
   if (key == '-') {
     if (sensitivity > 0) {
       sensitivity = sensitivity -0.1;
       darstellungC = 10;
       darstellungB = 0;
       darstellungA = 0;}
   }
}


...And the mouse control and text display sit in the middle of the draw routine.
The if block before the assignment that maps the mouse to xWert (which is normally set by TouchOSC) checks whether the mouse has just moved, which allows using mouse and TouchOSC control in the same app.
The text part displays the mode or the sensitivity after overwriting the old text, and counts to 30 while the text is shown, keeping it on screen for about one second. If you don't want the text shown, you can just delete this part.


// MOUSE CONTROL, TEXT 
  if (mouseXOld != mouseX) {
  xWert = map (mouseX, 0, width, -100, 100);}

  if (mouseYOld != mouseY) {
  yWert = map (mouseY, 0, height, -100, 100);}

  mouseXOld = mouseX;
  mouseYOld = mouseY;

  textSize(40);
  textAlign(CENTER);

   if (darstellungA > 5) {
   if (darstellungB < 1) {
   if (darstellungC < 1) { 
   darstellungA = darstellungA +1;
   text("Color Mode", width/2, height-50);
   }}}
   if (darstellungA > 30) {
   darstellungA = 0;
   }

   if (darstellungB > 5) {
   if (darstellungA < 1) {
   if (darstellungC < 1) {
   darstellungB = darstellungB +1;
   text("Size Mode", width/2, height-50);
   }}}
   if (darstellungB > 30) {
   darstellungB = 0;
   } 

   if (darstellungC > 5) {
   if (darstellungA < 1) {
   if (darstellungB < 1) {
   darstellungC = darstellungC +1;
   rsensitivity = round(sensitivity*10);
   text("Sensitivity: " + rsensitivity, width/2, height-50);
   }}}
   if (darstellungC > 30) {
   darstellungC = 0;
   }


Participated in the DIY Audio contest, the Instructables Design Competition, and The Photography Contest.