Projection Mapped Audio Visualization




Introduction: Projection Mapped Audio Visualization

About: Cape Town-based maker and Geoinformatics student

I'm throwing a party, and since the party is meant to be a bit of an AV experience, I'm trying to design and add as many AV elements as possible. This is one of them - an audio visualization using 3D projection mapping. The idea is pretty simple: have a set of boxes in a fixed layout, where each box represents a band of the audio spectrum. So when a bass note plays, the bass box lights up, and the same goes for the rest of the frequencies.

So the route I've chosen is as follows:

Sound source (my laptop) > sound analysis (Processing) > broadcast sound analysis data (Processing & OSC) > receive data and projection map accordingly (VPT)

I hope that doesn't sound too confusing. There are easier ways, but this allows for quite a large degree of flexibility and loads of room to add on other cool bits.

Shall we get stuck in, then?


Step 1: Bill of Materials

Not much is really needed for this instructable, but here's a list:

- A laptop (a desktop is possible, but portability helps...)
- A projector (size and power are up to you)
- Boxes or something you want to project onto*
- White (spray) paint
- Microphone (optional)

- Processing (as well as controlP5, netP5 and oscP5 libraries)
- VPT (a great free projection mapping tool; I'm using version 7)
- Something to play your tunes (iTunes?)

*Note on the boxes: I went down to my local supermarket loading depot and found a huge selection of boxes for free. Give it a go and be a bit more green :)

Step 2: Audio Meets Processing (then OSC)

Firstly, you'll need an audio source to process. This can be a microphone, a song, etc. Since the music would be playing from my laptop, I wanted to analyse that source directly, which means activating and selecting "Stereo Mix" as the default recording device in the sound panel. A quick google for how to enable Stereo Mix on your system will walk you through it - it's pretty easy to do.

Now that you have an audio source, we need to dive into the code:

/*
 3D projection mapping with VPT via OSC
 Nic Shackle, April 2014
 Released under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
*/

import oscP5.*;
import netP5.*;
import controlP5.*;

import ddf.minim.analysis.*;
import ddf.minim.*;

ControlP5 cp5;
Knob gain;
int gainVal;

OscP5 oscP5;
NetAddress myRemoteLocation;

Minim      minim;
AudioInput jingle;
FFT        fft;

int scale = 2; // change for overall sensitivity
boolean FFTon = false;
String viewOSC = ""; // for showing the OSC stream in the GUI

void setup() {
  size(450, 300);

  cp5 = new ControlP5(this);
  placeButtons();

  // VPT listens on port 6666 by default. Put the IP of the machine
  // running VPT here (127.0.0.1 if it's this same laptop).
  oscP5 = new OscP5(this, 6666);
  myRemoteLocation = new NetAddress("127.0.0.1", 6666);

  minim = new Minim(this);
  jingle = minim.getLineIn(); // grabs the default recording device (Stereo Mix)
  fft = new FFT(jingle.bufferSize(), jingle.sampleRate());
  fft.logAverages(86, 1); // average the spectrum into ~9 logarithmic bands
}

void draw() {
  background(0);
  fill(255);
  text("Nic Shackle", 170, 35);
  text(viewOSC, 50, 60);

  // perform a forward FFT on the samples in the buffer
  fft.forward(jingle.mix);
  if (FFTon) analyseAndSend(); // if toggled, broadcast FFT values via OSC
}

void placeButtons() {
  // controlP5 writes the knob's value straight into the gainVal field.
  // Positions and sizes here are just a sensible layout - move them as you like.
  gain = cp5.addKnob("gainVal")
            .setRange(0, 10)
            .setValue(2)
            .setPosition(350, 100)
            .setRadius(30);

  cp5.addToggle("Toggle_FFT_broadcast")
     .setPosition(350, 200)
     .setSize(60, 20);
}

public void Toggle_FFT_broadcast(int theValue) {
  FFTon = !FFTon;
}

void send(String path, float val) {
  OscMessage myMessage = new OscMessage(path);
  myMessage.add(val); // attach the fade value to the message

  /* send the message */
  oscP5.send(myMessage, myRemoteLocation);
  // println(myMessage + " Sent");
  viewOSC = "OSC stream: " + myMessage;
}

void analyseAndSend() {
  // The following "sends" are if you're using multi-sided objects that
  // require three faces to show the same thing (three VPT layers per band):
  //  // Three faces of the "band 1" box
  //  send("/" + str(1) + "layer/fade", fft.getAvg(1)/100*gainVal);
  //  send("/" + str(2) + "layer/fade", fft.getAvg(1)/100*gainVal);
  //  send("/" + str(3) + "layer/fade", fft.getAvg(1)/100*gainVal);
  //  // Three faces of the "band 2" box
  //  send("/" + str(4) + "layer/fade", fft.getAvg(2)*2/100*gainVal);
  //  send("/" + str(5) + "layer/fade", fft.getAvg(2)*2/100*gainVal);
  //  send("/" + str(6) + "layer/fade", fft.getAvg(2)*2/100*gainVal);
  //  // Three faces of the "band 3" box
  //  send("/" + str(7) + "layer/fade", fft.getAvg(3)*3/100*gainVal);
  //  send("/" + str(8) + "layer/fade", fft.getAvg(3)*3/100*gainVal);
  //  send("/" + str(9) + "layer/fade", fft.getAvg(3)*3/100*gainVal);
  //  // Three faces of the "band 4" box
  //  send("/" + str(10) + "layer/fade", fft.getAvg(4)*4/100*gainVal);
  //  send("/" + str(11) + "layer/fade", fft.getAvg(4)*4/100*gainVal);
  //  send("/" + str(12) + "layer/fade", fft.getAvg(4)*4/100*gainVal);
  //  // Three faces of the "band 5" box
  //  send("/" + str(13) + "layer/fade", fft.getAvg(5)*5/100*gainVal);
  //  send("/" + str(14) + "layer/fade", fft.getAvg(5)*5/100*gainVal);
  //  send("/" + str(15) + "layer/fade", fft.getAvg(5)*5/100*gainVal);
  //  // Three faces of the "band 6" box
  //  send("/" + str(16) + "layer/fade", fft.getAvg(6)*6/100*gainVal);
  //  send("/" + str(17) + "layer/fade", fft.getAvg(6)*6/100*gainVal);
  //  send("/" + str(18) + "layer/fade", fft.getAvg(6)*6/100*gainVal);
  //  // Three faces of the "band 7" box
  //  send("/" + str(19) + "layer/fade", fft.getAvg(7)*8/100*gainVal);
  //  send("/" + str(20) + "layer/fade", fft.getAvg(7)*8/100*gainVal);
  //  send("/" + str(21) + "layer/fade", fft.getAvg(7)*8/100*gainVal);

  // The following "sends" are if you're using single-sided objects that
  // require only one value per band:
  for (int i = 0; i < 9; i++) { // iterate through the bands
    if (i == 8) { // trebles need a bit of an oomph to show up nicely
      send("/" + str(i) + "layer/fade", (fft.getAvg(i) * i / 100) * gainVal * 2);
    } else {
      send("/" + str(i) + "layer/fade", (fft.getAvg(i) * i / 100) * gainVal);
    }
    text("Band/Layer " + str(i), 50, 104 + i * 10);
  }
}

Did you get all that? 

A quick run-through:
Audio is read into a buffer. That buffer is put through an FFT and averaged into 9 logarithmic bands. The bands are iterated through to get each one's level, roughly in the range 0 to 1 (hence a floating-point value). Each level is attached to a string corresponding to an OSC command destined for VPT, and the message is broadcast over OSC to the machine running VPT on port 6666. There's also a GUI that shows the level of each band, and a Gain knob, which can boost the signal if your source is a bit soft.

Note: I've never worked with audio analysis before this, so I admit my algorithm for getting an accurate spectrum is probably a little off. If anyone is more clued up on this, I'd love to hear a better way to go about it (I have a feeling it's with some tasty maths!)
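One standard trick that might help (not from this project, just a common approach): map each band's linear magnitude onto a decibel scale before fading, since our ears perceive loudness roughly logarithmically. The `dbFade` helper below is hypothetical, a sketch assuming a -60 dB noise floor:

```java
public class DbScaleDemo {
    // Maps a linear magnitude to a 0..1 fade on a dB scale:
    // full scale (0 dB) -> 1.0; `floorDb` (e.g. -60 dB) and below -> 0.0.
    static float dbFade(float magnitude, float floorDb) {
        if (magnitude <= 0) return 0f;
        float db = 20f * (float) Math.log10(magnitude);
        float fade = (db - floorDb) / -floorDb; // normalise floor..0 dB to 0..1
        return Math.max(0f, Math.min(1f, fade));
    }

    public static void main(String[] args) {
        System.out.println(dbFade(1.0f, -60f));   // full scale -> 1.0
        System.out.println(dbFade(0.001f, -60f)); // at the floor -> ~0
    }
}
```

The upside is that quieter passages still produce visible movement instead of everything living in the bottom tenth of the fade range.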

Step 3: OSC Meets VPT

Now that we have our data up in the OSC air, we're going to catch it in VPT.

Install VPT and learn the ropes from the very helpful guide included. Also make sure you're comfortable with how saving your work functions - I learnt this the hard way a couple of times...

Start off by adding layers and assigning sources to them. If you are using boxes to project onto, you'll need three layers per box (and also to change the Processing code to send each band three times - this is currently in the code but commented out). If you are using single-sided objects, you can leave the code as is and create 6 layers in VPT (I know there are 9 bands, so there should be 9 layers, but as you may notice, band 0 (almost subsonic bass) is multiplied by 0, and the very upper bands aren't really worth looking at).
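For the three-layers-per-box case, the bookkeeping in the commented-out sends (band 1 drives layers 1-3, band 2 drives layers 4-6, and so on) boils down to a one-line formula. Here it is in plain Java, with a hypothetical `layersForBand` helper:

```java
public class LayerMapDemo {
    // For boxes with three visible faces, band b (1-based) drives three
    // consecutive VPT layers: 3(b-1)+1 up to 3b.
    static int[] layersForBand(int band) {
        int first = 3 * (band - 1) + 1;
        return new int[] { first, first + 1, first + 2 };
    }

    public static void main(String[] args) {
        int[] l = layersForBand(2);
        System.out.println(l[0] + "," + l[1] + "," + l[2]); // 4,5,6
    }
}
```

Handy as a sanity check when you're numbering layers in VPT: seven boxes with three faces each means layers 1 through 21.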

As you may have guessed, the effect is created by fading each layer according to the level of the band linked to that layer. This means you can assign any source to the layer and still get the same effect. I even ran it using the webcam as a source out of curiosity (an easy way to get your processor in quite a fluster).

Step 4: VPT Meets Terra Firma

Now that you have your layers fading nicely, power up your projector and start moving the layer handles to fit the objects you're projecting onto. Hint: enter fullscreen mode, drag the corners, then press tab to move to the next layer.

I found setting my projector to "extend display", dragging VPT windows onto the extended display and entering fullscreen mode meant I could keep my laptop screen free for the processing app and media player.

Here's an extremely short clip with an extremely bad camera showing the result of a test with one box:

Step 5: To Do...

I wouldn't quite call this project finished, since I still have some refinements and qualms:

- Projecting onto boxes that are stacked on/next to each other means you end up projecting one layer onto two boxes on the faces where they meet. You can get around this by creating a mask for each layer, but this is almost impossibly fiddly. It's a lot easier to project onto isolated boxes or 2D objects such as boards etc.

- I'm not entirely happy with the abruptness of the fading. I will need to add some sort of "easing" algorithm into the processing code to make a smoother show.

- I need to paint the boxes white.

- Improved spectrum analysis?
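On the easing point above: one simple candidate is a one-pole low-pass (exponential smoothing), where each frame the fade moves a fraction of the way toward the current band level. This is just a sketch of the idea, not code from the project; `ease` and `alpha` are my names:

```java
public class EaseDemo {
    // Each frame, move `alpha` of the way from the current fade toward
    // the target band level. Smaller alpha = smoother (slower) fades.
    static float ease(float current, float target, float alpha) {
        return current + alpha * (target - current);
    }

    public static void main(String[] args) {
        float fade = 0f;
        for (int frame = 0; frame < 5; frame++) {
            fade = ease(fade, 1.0f, 0.5f);
            System.out.println(fade); // 0.5, 0.75, 0.875, 0.9375, 0.96875
        }
    }
}
```

In the Processing sketch this would mean keeping an array of per-band fade values and easing each one before calling send(), instead of sending the raw FFT average every frame.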

Participated in the Full Spectrum Laser Contest
