A façade as a “thinking skin” wrapped around a building does not have to be a touchable, physical membrane. It might also be a virtual skin: an invisible network covering an architectural entity.
We started with the idea of an “augmented reality window” that would collect data gathered from other modules and display it in one convenient location. But after some tests and prototypes, we had to redefine our strategy: not just to show things, but to enable active collaboration between the different elements developed for the “multimodal media madness” lab at the Media Computing Group (RWTH Aachen University).
So we started to look into collecting data with an intelligent sensor and actuator network consisting of multiple loosely coupled radio transmitters. Our first goal was to create services that let other façade elements react to different broadcast events without depending on a specific sensor module. The next step was a use case that goes beyond treating sensor elements as simple data sources for the façade and enables direct interaction with the architectural environment. For example, you could take the light switch from a nearby wall and bring it with you to your desk, so you could control the lighting from the place where you need it, and where its effect is much easier to assess. Since there is no fixed 1:1 relationship between sensor and actuator, other services and sensor readings can remain relevant at the same time, e.g. a PIR motion sensor or the overall brightness value.
We chose the Arduino environment to create our evaluation nodes, using ubiquitously available AVR ATmega chips and HopeRF radio modules. This cost-efficient hardware makes it easier to achieve a close-meshed sensor network—our “thinking skin”.
If you are interested in finding out more about our project, see the attached documentation file.
You may also visit our Bitbucket repository, where our m3RFM library, the source code for the sensor systems, and the Eagle files for the m3RFM shields are available.