Why would we do this? Well, the UP festival focuses on quick ideas to make cities better through constant engagement, and we (that is, Emily Shisko and Shane Myrbeck) thought we could create an engaging piece that explored information about San Francisco through a different sensory modality than the one we're used to. Certain aspects of auditory perception make sound an interesting medium for conveying information: patterns can be experienced over time rather than just seen, our temporal resolution for sound is finely detailed, and a well-presented sound field is immersive. Hopefully, the result was a unique and engaging experiment!
Step 1: What Do You Want to Hear?
-A database of trees planted in San Francisco since 1981
-Tide time/location and water temperature for the month of June 2012
-Language demographics from the 2010 census
-Wind data at 4 different locations over several months of 2012
We chose these four datasets because two of them represent human-centered subjects - trees planted and the languages we speak, standing for what we want our city to be and who we are, respectively. Tides and wind are natural datasets, although, as with everything in the urban environment, we affect those too...
We decided that although the different pieces of information represented very different timescales (from a static snapshot to 30 years), we would present each for 3 minutes at a time. This decision was made to exploit an inherently interesting aspect of presenting data through sound: our sensitivity to time as listeners is naturally acute, and time can be expanded and contracted with sound in a way that is difficult to do visually.
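To make that time compression concrete, here is a minimal sketch in Python (not the Max/MSP we actually used): real-world timestamps get linearly rescaled onto a 3-minute playback window. The tree-planting dates are just an illustration of the mapping.

# Minimal sketch of timescale compression: map real-world event times
# onto a fixed 3-minute playback window. Illustrative only -- the piece
# itself did this inside Max/MSP.
from datetime import datetime

PLAYBACK_SECONDS = 3 * 60  # each dataset gets 3 minutes

def to_playback_time(event_time, t_start, t_end, duration=PLAYBACK_SECONDS):
    # Linear rescale: fraction of the real timespan times playback duration
    span = (t_end - t_start).total_seconds()
    return duration * (event_time - t_start).total_seconds() / span

# Example: ~30 years of tree plantings squeezed into 180 seconds
t0, t1 = datetime(1981, 1, 1), datetime(2012, 6, 1)
print(to_playback_time(datetime(1996, 7, 15), t0, t1))  # ~89 seconds in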
Have fun choosing your datasets -- this is what will make your piece cool!
Step 2: What Will Make the Sounds?
This portion will be entirely determined by budget. We had originally designed a system that was cheap but would suit the basic needs of the project. We ended up lucking out: some friends at Harman Int'l (owner of JBL loudspeakers) lent us 20 Control 25 AVs with 24 channels of Crown amplification. We used audio interfaces we had from previous projects.
The final signal chain went:
Laptop w/ Max/MSP ->
aggregate-device audio interface (PreSonus Fireface + MOTU UltraLite = 20 channels out) ->
3 Crown CT8150 amplifiers ->
19 JBL Control 25 AVs
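If you end up with a multichannel rig like this, it's worth verifying every output before composing anything. Below is a rough per-channel test sketch in Python using the sounddevice library (the piece itself ran in Max/MSP; the channel count and level here are placeholders for your own setup).

# Rough speaker check: cycle a short test tone through each output channel.
# Assumes sounddevice's default device is set to your aggregate interface.
import numpy as np
import sounddevice as sd

FS = 48000
N_CHANNELS = 20  # adjust to your interface's output count

t = np.arange(FS) / FS                    # 1 second of samples
tone = 0.2 * np.sin(2 * np.pi * 440 * t)  # 440 Hz at a modest level

for ch in range(1, N_CHANNELS + 1):
    print("channel", ch)
    sd.play(tone, FS, mapping=[ch])  # route the mono tone to one output
    sd.wait()                        # block until playback finishes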
Step 3: From Data to Sound
It is difficult to explain this step in much greater detail: the challenges and stumbling blocks I ran into were in large part unique to the specific datasets I chose and the sounds I was trying to make with them. Because these elements will vary greatly depending on what you choose to sonify, specific instructions are hard to give. That said, I've added some screenshots of my code (using Max/MSP) below to give an idea.
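For a flavor of what "data to sound" can look like in code, here is a small Python sketch of one common approach: map each data value to a pitch and render a short blip at its (time-compressed) playback time. This is not my Max/MSP patch, just the general shape of such a mapping; every range here is a placeholder you would tune by ear.

# Sketch of a simple sonification mapping: each (time, value) event becomes
# a short sine blip whose pitch encodes the value. Ranges are placeholders.
import numpy as np

FS = 44100

def value_to_freq(value, v_min, v_max, f_min=110.0, f_max=1760.0):
    # Exponential mapping, so equal data steps sound like equal musical steps
    norm = (value - v_min) / (v_max - v_min)
    return f_min * (f_max / f_min) ** norm

def render(events, total_seconds=180.0, fs=FS):
    # events: list of (playback_time_in_seconds, data_value) pairs
    out = np.zeros(int(total_seconds * fs))
    values = [v for _, v in events]
    v_min, v_max = min(values), max(values)
    for when, value in events:
        n = int(0.15 * fs)  # 150 ms blip
        t = np.arange(n) / fs
        freq = value_to_freq(value, v_min, v_max)
        blip = 0.3 * np.sin(2 * np.pi * freq * t) * np.hanning(n)
        start = int(when * fs)
        end = min(start + n, len(out))
        out[start:end] += blip[:end - start]
    return out

# e.g., buf = render([(10.0, 3.2), (45.5, 7.8), (90.1, 5.0)])

Pitch is only one option; the same events could just as easily drive loudness, spatial position across the 19 speakers, or timbre.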
Step 4: Composition
Audio examples can be found here: