We live in a world of Big Data. Every minute of every day we are inundated with data or content. We are also changing the way we consume data and content. There’s a chance you’re reading this article on your phone, in between checking your email, sending a Tweet, or a text.
Data about how we perceive our world is ever-present. As Kelso said in Heat, “This stuff is just beamed out all over the place. I just know how to grab it.” The demands of our daily lives sometimes run headlong into our need to process all the data that is dumped on us during the day.
To that end, a relatively new field called data sonification has cropped up. The concept is that, as humans evolve in how we process data, this methodology lets people take in datasets through sound. If a dataset compiles values over a long period of time, rather than plotting them on an x-y chart, data sonification renders the data as audio. The reasoning behind data sonification, according to some neuroscience studies, is that humans take in data more quickly aurally than visually: it takes about 70 milliseconds to see and process a stimulus, while our ears can do the same in just 20 milliseconds.
As an example, imagine a simple dataset of dates and temperatures. With data sonification, January 1st’s temperature would start off as a low tone, the tones would build higher and higher until July and August, and then fall until December 31st. Done over a multi-year period, this produces a soothing rise and fall for you to listen to. Although I personally am not musically inclined, all humans have an innate feel for rhythm. It’s a primal part of our nature.
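The mapping described above can be sketched in a few lines of Python. This is a minimal illustration, not a real audio pipeline: it linearly maps each data point to a pitch in Hz, and the monthly temperature values are invented for the example (a frequency range of 220–880 Hz is my assumption, not a sonification standard).

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Map each data point to a pitch: low values -> low tones, high -> high.

    Returns one frequency in Hz per data point, scaled linearly so that
    the minimum value lands on f_min and the maximum on f_max.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid dividing by zero on a flat series
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# Hypothetical monthly average temperatures (deg F) for one year:
# they rise toward July/August, then fall back toward December.
temps = [28, 31, 40, 51, 61, 70, 75, 73, 66, 54, 44, 33]
freqs = sonify(temps)
```

Playing `freqs` back as tones, one per month, yields exactly the rise-and-fall contour described above: the coldest month sits at the lowest pitch and the July peak at the highest.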
Data scientists are applying this field to everything from charting the solar wind, to mapping median rents along a New York City subway line, to tracking floods driven by climate change. Perhaps Pittsburgh and its Department of Innovation & Performance could utilize data sonification for their projects as well. To me, one such project could involve a different way for police cars to patrol.
Imagine a police officer with the standard scanner running in the patrol car. Simultaneously, a dataset of historic crime records plays sonically, keyed to the car’s geographic position by GPS. As the officer navigates their beat route, the rise and fall of the sounds corresponds to the number of historic incidents that police have responded to nearby. This could help the officer stay more vigilant when in a known crime hotspot.
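One way this could work is sketched below, under some loud assumptions: the incident counts, the coordinates (roughly Pittsburgh-shaped), the grid resolution, and the pitch range are all invented for illustration, and a real system would pull from an actual records database rather than a hard-coded dictionary.

```python
import math

CELL = 0.01  # grid resolution in degrees (an assumption, ~1 km at this latitude)

# Hypothetical historic incident counts, keyed by grid cell (see cell() below).
incidents = {
    (4044, -7999): 3,   # quiet block
    (4045, -7999): 42,  # known hotspot
}

def cell(lat, lon):
    """Snap a GPS fix to its grid cell."""
    return (math.floor(lat / CELL), math.floor(lon / CELL))

def tone_for(lat, lon, max_count=50, f_min=200.0, f_max=1000.0):
    """Pitch rises with the historic incident count at the current position."""
    n = incidents.get(cell(lat, lon), 0)
    level = min(n, max_count) / max_count
    return f_min + level * (f_max - f_min)

quiet = tone_for(40.443, -79.987)  # falls in the quiet cell -> low tone
hot = tone_for(40.453, -79.987)    # falls in the hotspot cell -> high tone
```

As the GPS fix moves from the quiet block into the hotspot, `tone_for` returns a noticeably higher pitch, which is the audible cue the officer would hear.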
Perhaps the data could be contoured to specific types of crime, like drug activity for narcotics units or prostitution arrests for vice. Although the officers who work these beats day to day are well aware of hotspots already, it could be a valuable tool for the higher-ups in the Police Department who have to take a broader view of crime. For this management subset of police, driving their whole territory with crime stats sonified may help them redirect their units more effectively, or reveal the need for more presence in a certain area.
At first glance (or listen, I suppose), data sonification doesn’t seem worth the hassle of converting data to another format. But if you believe that we are evolving as our world changes around us, then data sonification can be seen as the future of how we process and interpret the world around us.