Sonifying Your City: Making Good Fences Make Good Neighborhoods



Introduction

The following describes how we created the piece Good Fences Make Good Neighborhoods for the San Francisco Urban Prototyping (UP) festival.  The piece was a data sonification of various kinds of information about the city.  What is a data sonification?  It's an interpretation of data through sound rather than visuals; an EKG monitor's beep or a fire alarm is a data sonification of sorts.  We wanted to make something that sounded cool, though, so we took San Francisco data that was both compelling and available and ran it through our compositional filters.  

Why would we do this?  Well, the UP festival focuses on quick ideas for making cities better through constant engagement, and we (that is, Emily Shisko and Shane Myrbeck) thought we could create an engaging piece that explored information about San Francisco through a different sensory modality than we're used to.  Certain aspects of auditory perception, like the ability to hear patterns rather than just see them, its fine temporal resolution, and its immersive quality when presented properly, make it an interesting method of conveying information.  Hopefully, the result was a unique and engaging experiment! 

Step 1: What Do You Want to Hear?

The first step is to choose what type of data you want to interpret through sound.  This will be a combination of what you find interesting and what is available.  Your city will have databases of information that are accessible to you, but the question is whether you can get the data into a format that's malleable given your chosen sonification tool.  For our piece, we chose:

-A database of trees planted in San Francisco since 1981 
-Tide time/location and water temperature for the month of June 2012
-Language demographics from the 2010 census 
-Wind data at 4 different locations over some months of 2012

We chose these four datasets because two of them represented human-centered subjects, trees planted and the languages we speak, standing for what we want our city to be and who we are, respectively.  Tides and wind are natural datasets, although, as with everything in the urban environment, we affect those too...

We decided that although the different pieces of information represented very different timescales (from static snapshots to 30 years), we would present each for 3 minutes at a time.  This decision exploits an inherently interesting aspect of presenting data through sound: our sensitivity to time as listeners is naturally acute, and time can be expanded and contracted with sound in a way that is difficult to do visually.  
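As a rough illustration of that time compression (the function name and numbers below are our own, not taken from the actual patch), squeezing a dataset's full timespan into a fixed 3-minute playback window is just a linear rescaling of each timestamp:

```python
def to_playback_seconds(timestamp, data_start, data_end, playback_len=180.0):
    """Map a timestamp within [data_start, data_end] to a moment
    in a fixed-length playback window (180 s = 3 minutes)."""
    span = data_end - data_start
    return (timestamp - data_start) / span * playback_len

# 30 years of tree plantings squeezed into 3 minutes: a tree planted
# exactly halfway through the record sounds at the 90-second mark.
print(to_playback_seconds(1996.0, 1981.0, 2011.0))  # -> 90.0
```

The same function works whether the source span is 30 years or one month of tides; only the input range changes.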

Have fun choosing your datasets -- this is what will make your piece cool!

Step 2: What Will Make the Sounds?

There are many, many ways to make sounds, of course, but if you're talking about interpreting spreadsheets of data as sound, it's probably going to take some form of computer, audio interface, and array of loudspeakers.  I happen to be an enthusiast for spatial sound and multichannel audio, so it was important for me to include that in this piece.  As a result, we decided on a 20-channel loudspeaker array, arranged to approximately represent the shape of San Francisco.  We mounted the array on a chain-link fence, because those fences are everywhere in the neighborhood where UP:SF was hosted.  
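Our actual spatial routing was done in Max/MSP, but the underlying idea of a speaker array shaped like the city can be sketched simply: give each channel a position on a normalized plan of the city, and send each data event to the speaker nearest its location.  The coordinates and the nearest_channel helper below are entirely hypothetical, for illustration only:

```python
import math

# Hypothetical speaker layout: (x, y) positions on a normalized
# 1 x 1 plan of the city, one entry per loudspeaker channel.
SPEAKERS = [(0.1, 0.9), (0.5, 0.8), (0.9, 0.85), (0.2, 0.5),
            (0.5, 0.5), (0.8, 0.45), (0.15, 0.1), (0.5, 0.15),
            (0.85, 0.1)]

def nearest_channel(x, y):
    """Return the index of the speaker closest to a point, so an
    event at a city location plays from the matching channel."""
    return min(range(len(SPEAKERS)),
               key=lambda i: math.dist((x, y), SPEAKERS[i]))
```

With a real array you'd use as many entries as you have channels (20 in our case) and positions traced from a map.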

This portion will be entirely determined by budget.  We originally designed a system that was cheap but would suit the basic needs of the project.  We ended up lucking out: some friends at Harman Int'l (owner of JBL loudspeakers) lent us 20 Control 25 AVs with 24 channels of Crown amplification.  We used audio interfaces we had from previous projects.  

The final signal chain went: 
Laptop w/ Max/MSP ->
aggregate device audio interface (Presonus Fireface + MOTU ultralite = 20 channels out) ->
3 Crown CT8150 ->
19 JBL Control 25 AVs

Step 3: From Data to Sound

The process of manipulating the data into useful sounds is the main creative challenge of this piece.  Often, you will download the CSVs (or what have you) in a fairly unwieldy format, and they'll almost always contain some data you won't need.  Because of this, you'll need to clean up the datasets before you begin the sonification process, or use a program capable of both parsing your spreadsheet and creating audio.  You'll want to use a program like SuperCollider or Pure Data (both open source) or Max/MSP to be able to properly parse the information and get it where you need to go. 
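The cleanup step can also happen outside your audio environment.  As a minimal sketch, with entirely made-up column names standing in for a real city dataset, pruning a downloaded CSV down to just the complete rows and fields you need might look like this:

```python
import csv
import io

# A toy stand-in for a downloaded city dataset -- real files are far
# larger and messier, and these column names are invented.
RAW = """TreeID,Species,PlantDate,Notes
1,Ginkgo,1981-04-02,
2,,1985-07-19,missing species
3,Coast Live Oak,1996-06-30,
"""

def load_rows(text):
    """Parse the CSV and keep only rows with the fields we need,
    discarding incomplete records and unused columns."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        if row["Species"] and row["PlantDate"]:
            rows.append({"species": row["Species"],
                         "date": row["PlantDate"]})
    return rows
```

For a real file you'd pass an open file handle instead of a string, but the filtering logic is the same.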

It is difficult to explain this step in greater detail.  I ran into challenges and stumbling blocks that were in large part unique to the specific datasets I chose and the sounds I was trying to make with them.  Because these elements will vary greatly depending on what you choose to sonify, specific instructions become difficult.  That said, I've added some screenshots of my code (using Max/MSP) below to give an idea.

Step 4: Composition

How you interpret your given dataset is a very personal decision.  The data sonification process is challenging in that the result should be representative of the information you're sharing, but also compelling enough as a sound piece that people want to stay and listen.  I found it effective to have the data vary several components of the sounds at once -- for instance, the wind section had sounds that varied in direction and amplitude based on wind velocity, but also varied in pitch based on the level of solar irradiance at the time of the measurement.  Hopefully people will be compelled to stay long enough to experience the various layers of the data, and enjoy themselves. 
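The actual mappings lived in the Max/MSP patch, but the idea of driving several sound parameters from one measurement can be sketched as a pair of linear range mappings.  All the ranges below are illustrative, not the values used in the piece:

```python
def lin_map(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from one range to another, clamping
    to the output range so outliers don't blow up the synth."""
    t = max(0.0, min(1.0, (value - in_lo) / (in_hi - in_lo)))
    return out_lo + t * (out_hi - out_lo)

def wind_voice(wind_speed_ms, irradiance_wm2):
    """Map one wind measurement to synthesis parameters:
    wind speed drives amplitude, solar irradiance drives pitch.
    These ranges are made up for the sake of the example."""
    amp = lin_map(wind_speed_ms, 0.0, 20.0, 0.0, 1.0)
    freq = lin_map(irradiance_wm2, 0.0, 1000.0, 110.0, 880.0)
    return amp, freq
```

Layering several such mappings per dataset is what gives listeners more to discover the longer they stay.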

Audio examples can be found here:

Step 5: Be Prepared to Explain Yourself

Sound is a remarkably rich medium to work with, in the context of this project specifically because the experience of data sonification is simultaneously intuitive and highly abstract.  Very few people understand what is happening with this work when they first experience it, but once they have the "legend" (the explanation of which sounds represent what) they are free to experience the piece in an enriching way.  At UP:SF, this mostly meant me standing there and explaining to people what was going on.  There are, of course, many other ways to do this (signage, apps, etc.).  But definitely be prepared to explain yourself!

Step 6: Bring Friends!

Emily and I would like to thank Toby Lewis, Megan Gee, Morgan Kanninen and Brian Huey for their help with this project.  If you ever need to hang 20 loudspeakers on a chain-link fence, make sure you bring friends!
