Introduction: The Animation Station
Live visuals are the icing on the cake for a lot of live performances. The issue I usually have with them is that they tend to feel sterile and CG. With the animation station, I'm able to create unique, complex visuals with direct control, while maintaining a human touch.
This project is a rig for capturing, manipulating, and projecting live video clips for a stage performance. Using a miniature greenscreen syke, a few webcams, some windup toys, and a pile of 2D shapes, I'm able to capture video clips, play them back on a loop instantly, and key out the green background to composite them for projection during a live performance.
I used a Dell Precision M3800 Workstation running Max7 for this project. It's a seriously powerful laptop, with a 4th gen Intel® Core™ i7 processor that's crucial for processing and manipulating HD video instantly. If you're not familiar with Max/MSP/Jitter, it's a visual programming language that processes numbers, signals (like audio), and matrices (like video). The interface is super intuitive, consisting of an array of boxes and cables, which makes it easy to keep track of what's going on in the program.
This instructable includes detailed descriptions of the Max patches, as well as the hardware side of the Animation Station.
Step 1: Tools & Materials
COMPUTING
- Dell Precision M3800 Workstation:
- Intel® Core™ i7 processor
- NVIDIA® Quadro K1100M graphics with 2GB RAM
- 4K Ultra HD Touchscreen
- Max7
- Logitech Webcam C930e (3)
POWER & LIGHTING
- USB LED snake lights (2)
CASE HARDWARE
- Handlebar Camera Mount (2)
- Plastic Coated Steel Tubes
- Clamp-on Framing Fitting (2)
- Draw Latch (6)
- Folding Pull Handle
- Various machine screws and locknuts
- Lasercut plywood parts: 1/4" thick and 1/2" thick
- Matte Green Poster Board (for syke)
OBJECTS FOR VIDEO CLIPS
- Kikkerland Windup Toys
- Lasercut 2D cardboard pieces, custom
Step 2: Designing the Station
I designed the station to be what's typically called a "coffin case", meaning all the equipment is attached to the bottom half of the case, and the lid can be removed altogether. I used Fusion 360 to design the whole project because it's a powerful modeling program and it's easy to output 2D geometry from a 3D object for laser cutting.
- The .f3d file is the Fusion 360 file with all the parts modeled.
- The .dwg files are the 2D cut files that are laid out and ready for laser cutting.
Most of the hardware came from McMaster Carr, which provides highly accurate, to-scale 3D STEP files that you can use natively in Fusion. This makes it super easy to design around their hardware.
Step 3: Assembly
The lasercut parts go together pretty simply: nothing but wood glue and some finishing nails from a nail gun. The tolerances are dialed in, so there's no need to clamp the parts together; just dab some glue into the connections and shoot in a few nails. All the measurements are in the Fusion 360 file, so it's easy to follow along and assemble all the parts. The only other fabrication that needs to happen is cutting the pipes to size.
The first camera is bolted into the floor of the case, pointing towards the syke's vertical surface: this one captures head-on video of 3D objects.
The second camera is mounted to the top rail of the swiveling arm: this one points down at the floor of the green screen syke. It attaches to the rail using the handlebar mount mentioned in the Tools & Materials step.
The third camera is also on the top rail of the swiveling arm: this one points toward the performer. It also attaches to the rail using the handlebar mount mentioned in the Tools & Materials step.
The green screen syke fits into the slots on either side of the first camera, then butts up against the hooks at the top of the folding backdrop frame; this keeps it wrinkle-free and even.
The LED snake lights plug into the holes on the top surface of the case, and all the USB cables needed for the cameras, lights, and MIDI controller are routed through the cable chases under the surface; cleanliness is next to godliness.
Step 4: 2D and 3D Objects
For this project, I decided to use a handful of Kikkerland windup toys for the 3D objects, and some custom laser cut cardboard pieces for the 2D objects.
The cardboard pieces are a bunch of squares, right triangles, pedestals, and a perspective grid. I thought I could get some pretty cool juxtapositions with these, creating the effect that the windup toys are skittering around in an abstract, physical landscape. It worked really well for the performance!
Step 5: Max7: Intro
If you're not familiar with Max7, there are a few things you should know off the bat.
- Max files and captured video files should live somewhere within the "Global File Library" as specified by the program. To find out where this is, just go to "File Preferences" in the menu. All the Max files available for download in this instructable should go in this folder and be accessed from there.
- The values you adjust while you're working, such as the speed of a clip, or the clip that's loaded, or the dimensions of a matrix, will not be there when you reload the file, even if you save the file in that state. The only way to recall adjusted parameters like this is to use the loadbang object.
- Any object you see in any of the patches has a "help" file associated with it. To access this file, just right-click on an object and select "Open Help".
Step 6: Loader
A loader patcher is a great way to keep track of the set of patchers that make up your project. I'm sure Max7 has some clever ways of keeping your projects organized, but this is the way I've been doing it for over 10 years, and it works for me!
- To create a loader window, I start with a button, which allows me to open all the windows with one bang. The button's outlet connects to each message's inlet.
- Next, I create a series of messages. Each message contains the string "load" followed by the name of the file to open. There are many file types Max can open, but ".maxpat" is the most common. Each message's outlet connects to the pcontrol inlet.
- Lastly, the pcontrol object is used to open the file. pcontrol is an object that opens and closes sub-windows, and also serves as a remote control for certain functions (such as enabling or disabling MIDI).
Step 7: MidiKontrol
I used a Korg NanoKontrol as a tactile interface for this project. It's a simple USB I/O control surface with pots, sliders, latching buttons, and a transport (record/play/rewind/fast-forward) section.
The midiKontrol patcher is very simple. Each MIDI output signal from the controller is routed to Max using the ctlin object. Each button, slider, and pot on the surface is mapped to a unique MIDI channel, so it's easy to route the output.
Each MIDI channel is mapped to a number object and its Max interface equivalent (button for button, dial for pot, slider for slider, etc.) so that it's easy to keep track of the data coming in from the controller. I set up this patch so that it graphically matches the NanoKontrol's layout; this helps to ensure that the buttons, sliders, and pots are mapped properly.
Each MIDI output is mapped to a unique send object that can be received in any other patcher in Max.
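For readers who want to experiment with the same routing outside of Max, here's a rough Python sketch using the mido library (pip install mido python-rtmidi). The CC numbers in the map are placeholders; check your NanoKontrol's actual assignments.

```python
import mido

# Hypothetical CC-number-to-parameter map; the NanoKontrol's real
# assignments depend on its scene settings.
CC_MAP = {0: "slider1", 16: "pot1", 32: "button1"}

with mido.open_input() as port:  # first available MIDI input
    for msg in port:
        # Like ctlin feeding a named send object: route each control
        # change to a uniquely named destination.
        if msg.type == "control_change" and msg.control in CC_MAP:
            print(CC_MAP[msg.control], msg.value)  # values run 0-127
```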
Step 8: GrabWebcam: Webcam Input and Output
grabWebcam is a patcher that (you guessed it) grabs the video from a webcam. It selects a webcam from a list, opens the webcam, sets the dimensions of the input video, and sends usable video data to other patchers within Max.
The notated image here explains all the parts, but the whole thing is centered on jit.qt.grab. This object digitizes video from any external source that's hooked up to Max. In the case of the animation station, I'm using one of three webcams at a time:
- Logitech Webcam c930e (1): Webcam attached to the floor of the case, pointed towards the vertical surface of the syke. This one is for capturing 3D objects. For Scarth's performance, I used windup toys.
- Logitech Webcam c930e (2): Webcam attached to the upper bar, pointed down at the floor of the syke. This camera is mainly for 2D objects (like the cardboard cutouts I used for Scarth's performance), but you can capture 3D objects in a top-down view too.
- Logitech Webcam c930e (3): Webcam attached to the upper bar, pointed towards the performer. Sometimes it makes sense to capture the person(s) on stage and project that footage. It can become an interesting part of the composition and also serve to show the performer(s) to the audience in the back of the room.
To send video to the clip grabber, click the getinputlist and getdevlist messages, turn on the qmetro, select a camera from the umenu, and click the open message. This series of steps will capture video from the selected camera (default size 720 x 480) and send it to the jit.window and the cameraOne output for use in other patchers.
I designed this project so that this patcher switches between video inputs instead of having all three open simultaneously. That said, the Dell Precision M3800 Workstation is powerful enough to have all three open at once without creating any lag, which might be useful if the clips being created require instant switching from live input to recorded, for example.
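As a point of reference, here's roughly what this patcher does, sketched in Python with OpenCV; the camera index is whatever your OS assigns to each webcam.

```python
import cv2

CAMERA_INDEX = 0  # 0, 1, or 2, depending on which of the three webcams you want

cap = cv2.VideoCapture(CAMERA_INDEX)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 720)   # match the patch's 720 x 480 default
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()        # like qmetro driving jit.qt.grab
    if not ok:
        break
    cv2.imshow("preview", frame)  # like the jit.window preview
    if cv2.waitKey(1) == 27:      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```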
Step 9: ClipControl: Write, Read, and Control Captured Clips
Most of the technical explanation for this patcher is on the description image. I've attached a PDF of the description image in case it's hard to read. Here's a list of steps to make this patch work:
- Click the "select folder" button. This will prompt you to select a folder to write and read the clip to. The "write to disk" sub patcher I suggest using a separate folder for each clip because it keeps things tidy and will allow you to use clips you liked at a later time for mixing.
- Switch on the "write to disk" toggle. This will capture the clip as a series of Max7 native .jxf frames. When this toggle is on, the video is simultaneously being written and read, meaning whatever you're capturing can be sent to the final mix. This makes it easy to have a seamless loop-station effect. When this sub-patch is writing to disk, it will automatically over-wright any frames stored in the folder, which is another reason to make a series of separate folders.
- Switch off the "write to disk" toggle. This will stop the recording and send the duration of the clip to the playback controller so that you can adjust the length of the clip quickly and easily.
- Switch on the "PLAY (read)" toggle. This will playback the clip that was just captured. You can also select another folder to read and it will playback from it.
- Use the TRIMMER and SPEED CONTROL: These sub-patchers allow for direct control over speed, duration, begin and end points, and playback type as described in the image from this step. These sub-patchers are connected to the MIDI controller patcher, allowing for more precise tactile control of the playback.
NOTE: It might be advisable to make a switch within this patcher that automatically switches on the "PLAY (read)" toggle when the "write to disk" toggle is switched off. This way, the clip that was just written would instantly start playing back.
Because of the Dell Precision M3800 Workstation's powerful specs, this patch can capture and manipulate the video with instant, direct control. I didn't experience any lag with trimming or speed changes on the fly, which is a huge bonus when you're doing live visuals.
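To make the record-then-loop flow concrete, here's a stripped-down Python/OpenCV sketch of the same idea: write numbered frames to a per-clip folder, then loop them back at an adjustable speed. The folder name and frame rate are assumptions, and I'm using .png frames where the patch uses Jitter's native .jxf.

```python
import glob
import os
import cv2

CLIP_DIR = "clips/clip01"  # one folder per clip, as suggested above
FPS = 30.0

os.makedirs(CLIP_DIR, exist_ok=True)

def record(cap, n_frames):
    # "write to disk" toggle on: store each captured frame as a numbered file.
    for i in range(n_frames):
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"{CLIP_DIR}/frame_{i:05d}.png", frame)

def play_loop(speed=1.0):
    # "PLAY (read)" toggle on: loop the stored frames; speed > 1.0 is faster.
    frames = [cv2.imread(p) for p in sorted(glob.glob(f"{CLIP_DIR}/*.png"))]
    while frames:
        for frame in frames:
            cv2.imshow("clip", frame)
            if cv2.waitKey(max(1, int(1000 / (FPS * speed)))) == 27:
                return

record(cv2.VideoCapture(0), 90)  # grab a three-second clip at 30 fps
play_loop(speed=0.5)             # loop it back at half speed
```

In this sketch, trimming would just be slicing the frames list before the loop starts.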
Step 10: ClipPlayer: Play Back a Second Clip for Mixing
This patcher is basically the same as the clipControl patcher, except it doesn't have the write function. It allows you to play back and control a second clip for compositing in the chromaKey patcher. The functions are described in detail in the description image for this step, so here are the steps for using it:
- Click the "select folder" button. This will prompt you to select a folder to write and read the clip to. The "write to disk" sub patcher I suggest using a separate folder for each clip because it keeps things tidy and will allow you to use clips you liked at a later time for mixing.
- Switch on the "PLAY (read)" toggle. This will playback the clip that was just captured. You can also select another folder to read and it will playback from it.
- Use the TRIMMER and SPEED CONTROL: These sub-patchers allow for direct control over speed, duration, begin and end points, and playback type as described in the image from this step. These sub-patchers are connected to the MIDI controller patcher, allowing for more precise tactile control of the playback.
Step 11: DigiNoise: Background and Texture
This patcher is based on the jit.noise object. This object creates a matrix with random values, resulting in a sort of white noise effect, like static on a CRT television. It's great for live visuals because it adds a layer of texture to the background and brings some depth to the composition.
The object does most of the work for you, so I just added controls for the number of columns and rows in the matrix and mapped them to the last two sliders on my MIDI controller. If you turn the "rows" value down to one, you get one row, meaning the matrix is a series of vertical stripes. Setting "columns" to one will give you horizontal stripes.
This patcher has an output called "noiseOut" so it can be sent to the final mix in the chromaKey patcher.
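Here's the same idea in a few lines of Python/NumPy (a sketch of the concept, not the patch itself): a small random matrix scaled up with no smoothing, so each cell becomes a hard-edged block. With rows set to one you get the vertical stripes described above.

```python
import numpy as np
import cv2

def digi_noise(rows, cols, out_w=1280, out_h=720):
    # Random values in a tiny matrix, like jit.noise with adjustable dims.
    noise = np.random.randint(0, 256, (rows, cols, 3), dtype=np.uint8)
    # Nearest-neighbor scaling keeps the blocky, CRT-static look.
    return cv2.resize(noise, (out_w, out_h), interpolation=cv2.INTER_NEAREST)

cv2.imshow("noise", digi_noise(rows=1, cols=64))  # one row = vertical stripes
cv2.waitKey(0)
```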
Step 12: ChromaKey: Mix the Captured Clips
The green miniature syke provides a (more or less) consistent background color that's selectable with the chromaKey patcher. Lighting is crucial here, and I think I might need to study this a little further. I ended up with a lot more shadow than I had hoped with the two USB LED lamps, and I got some reflection off the shiny metal of the windup toys. This means that when the tolerance is turned up, you start to lose parts of the image you want to keep. With more control over the lighting and careful choice of object color and reflectivity, I'm sure I could get much better results.
The patcher is explained pretty extensively in the description image. Basically, there are two levels of chromakey. Level 1 composites two captured video clips which are sent from the clipControl and clipPlayer patchers. This level outputs a composite of two clips to Level 2, which in turn allows you to key out the green background from the composite so that the noise matrix becomes the overall background.
By selecting colors in the swatch objects and adjusting the tolerances, you're able to dial in the chromakey effect with a reasonable level of precision.
The edges of the keyed-out pixels tend to be a little rough; I'd love to hear if anyone has suggestions on how to make these smoother.
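For anyone curious about what's happening under the hood, here's the core keying idea in Python/NumPy (a sketch of the concept, not the Max patch): pixels within a tolerance of the key color get replaced by the background.

```python
import numpy as np

def chroma_key(fg, bg, key_color, tolerance):
    # fg, bg: HxWx3 uint8 frames; key_color: the color picked from the syke.
    dist = np.linalg.norm(fg.astype(np.float32) - np.float32(key_color), axis=2)
    mask = dist < tolerance  # True wherever the green should disappear
    out = fg.copy()
    out[mask] = bg[mask]     # swap keyed pixels for the background layer
    return out
```

On the rough-edges question: one common trick is to ramp an alpha value between two tolerance thresholds instead of using a hard mask, then blend the layers, which feathers the boundary.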
Step 13: BlurVideo: Adjust the Quality, Color, and Blur the Final Video
This patcher takes the output of chromaKey and runs it through a final series of effects before it's projected. With this patcher, you can swap the output from camera input (from whichever camera is selected in grabWebcam) to movie input (from chromaKey).
This patcher came mostly from a great tutorial on the Cycling '74 site about video processing; I highly recommend it as a useful resource for patchers and as a way to better understand how Max processes video. https://cycling74.com/2008/12/22/the-video-process...
Here are some of the effects I've included in this patcher:
- easy-xfade lets you fade the video to black or cross-fade it with the camera input using an extensive list of blend operations (just like blend modes in Photoshop).
- gaussian-blur creates a smooth blurring effect on the final video output. This can soften the jagged edges of the chromakeyed video on the low end of the slider, or create a foggy, color-shifting atmosphere on the high end.
- The color-scale object allows you to adjust the color balance of the final video using R G B channels.
- The brightness-graph allows you to adjust the brightness of the video along the color spectrum.
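As a rough illustration of two of these stages (not the tutorial's actual modules), here's a Python/OpenCV sketch: a gaussian blur whose strength maps to a 0-127 MIDI slider, followed by a per-channel color scale.

```python
import cv2
import numpy as np

def blur_and_scale(frame, blur_amount, rgb_scale=(1.0, 1.0, 1.0)):
    # blur_amount: 0-127 (MIDI range); the blur kernel size must be odd.
    k = 2 * int(blur_amount / 8) + 1
    blurred = cv2.GaussianBlur(frame, (k, k), 0)
    # OpenCV frames are BGR, so reverse the (R, G, B) scale factors.
    scaled = blurred.astype(np.float32) * np.float32(rgb_scale[::-1])
    return np.clip(scaled, 0, 255).astype(np.uint8)
```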
The master-context object controls the videoplane object, which is the final output for the projection. This window keeps track of the frame rate and allows you to choose the resolution of the final projection.
To project the final video, simply move the videoplane window (called "mod") to the secondary desktop (the projector) and hit the "escape" key. This is the maximize/minimize shortcut. This shortcut can be mapped to any key or a button on the MIDI controller, but "escape" is the most commonly used key.
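If you're following along with the OpenCV sketches rather than Max, the equivalent move is flipping the output window's fullscreen property once it's been dragged to the projector's desktop:

```python
import cv2

# Create the output window, then force it fullscreen on its current display.
cv2.namedWindow("mod", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("mod", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
```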
NOTE: This step is particularly taxing and requires a powerful laptop like the Intel Core i7 based Dell Precision M3800 Workstation I used. You may want to consider cutting out this step, or at least bypassing the blur effect, if you don't have a higher-end system available.
Step 14: Results
The final product fit the performance perfectly. Scarth's music has a playful, human touch while still being technically sophisticated. The windup toys and 2D pieces fit the music perfectly, and the digiNoise background really gave it that 80's children's television aesthetic.
For other bands in the past, such as Macrosick in the '00s, I've used either found footage, footage I filmed myself, or footage I produced in CG. Using live clips in real time is something I've always wanted to do, but every time I tried it, the laptop I was using just couldn't handle creating and manipulating clips instantly and seamlessly: never enough RAM, or enough processing power, or a good enough graphics card, or some combination of the three.
I was blown away by the smooth, seamless, high def video output that the Dell Precision M3800 Workstation could handle. I ran multiple cameras, cached many folders of video, mixed, affected, and manipulated the clips for hours, all at 1280p, and the Dell didn't miss a beat.
The touch screen is also an awesome feature for this project. When you're mixing live video to sync with the music, every second counts. Being able to just touch a toggle or turn up a value by tapping and dragging can save you in a pinch; using a mouse to navigate can be excruciatingly slow when you're trying to get the video to work within a two-minute window.
FAIR WARNING: This project will probably freeze, crash, or at least bog down a lower-end laptop. I recommend sticking to a lower resolution (around 640 x 480) if you're not using something as powerful as the Intel Core i7 based Dell system I used.