The audio data files are large and long: a 10-second sample at 192kHz is about 3.5MB, and each ACD generates 1.26GB every 6 hours. It is difficult for humans to listen to these files. All of the sound below 7kHz is scrambled to obscure human voices, which makes the audio inherently noisy and disorienting. Spotting trends and shifts over long periods requires an attentive form of listening that is very difficult and time-consuming.
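A quick back-of-the-envelope check shows how these figures fit together (the 16-bit mono format and one-clip-per-minute schedule are our assumptions, not confirmed details of the ACD firmware):

```python
# Sanity-check of the storage figures quoted above. The 16-bit mono
# format and one-clip-per-minute schedule are assumptions.

SAMPLE_RATE = 192_000     # samples per second
BYTES_PER_SAMPLE = 2      # 16-bit mono (assumed)
CLIP_SECONDS = 10

raw_clip_mb = SAMPLE_RATE * BYTES_PER_SAMPLE * CLIP_SECONDS / 1e6
print(f"raw 10-second clip: {raw_clip_mb:.2f} MB")   # 3.84 MB raw; the ~3.5 MB
# on disk suggests the container or bit depth differs slightly.

clips_per_6h = 6 * 60     # one 10-second clip per minute
six_hour_gb = clips_per_6h * 3.5 / 1000
print(f"6 hours: {six_hour_gb:.2f} GB")              # 1.26 GB, matching the figure above
```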
We have developed an experimental application that extracts the spectral energy across each 10-second segment and saves these aggregated snapshots of spectra alongside one another. With this application, it is possible to rapidly ‘scrub’ through the spectral content while still using our ears to pick out areas of interest very quickly. When a spectral snapshot seems interesting or surprising, we can listen to the original sound file that produced it.
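The application itself is not public, but the core idea can be sketched: reduce each 10-second clip to a single vector of band energies that can be lined up with its neighbours and scrubbed through. A minimal NumPy sketch, where the band count and aggregation method are our assumptions:

```python
import numpy as np

def spectral_snapshot(samples, n_bands=512):
    """Aggregate one clip's spectrum into n_bands energy bands,
    giving a single 'snapshot' vector per 10-second file."""
    spectrum = np.abs(np.fft.rfft(samples))
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    return np.array([np.sum(spectrum[a:b] ** 2)
                     for a, b in zip(edges[:-1], edges[1:])])

# Demo with a synthetic 1 kHz tone at the project's 192 kHz rate:
fs = 192_000
t = np.arange(fs * 10) / fs
snap = spectral_snapshot(np.sin(2 * np.pi * 1000 * t))
# Each band spans 187.5 Hz (96 kHz Nyquist / 512 bands), so the
# tone's energy lands in band 5.
```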
Another approach we have explored involves concatenating the files into 6-hour chunks and generating spectrograms of that entire period of time, as shown in the following figure:
Long-duration spectrograms like this give an instant overview of a period that a researcher may find interesting. Whole days and different ACDs can quickly be compared, and trends, anomalies and other features (such as sounds in the ultrasonic band) can be noticed at a glance. Researchers who want to explore more closely can then home in on specific files and generate further close-up spectrograms of particular areas of interest. With several of these images lined up, researchers can look longitudinally across the season or seasons, laterally across the day and spatially across however many boxes are making recordings.
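As a rough NumPy-only sketch of the processing behind such an image (in practice a tool like SoX or matplotlib would render the actual picture, and the frame size here is an arbitrary choice):

```python
import numpy as np

def long_spectrogram(clips, nperseg=8192):
    """Concatenate a list of 1-D clips and return a power spectrogram:
    one FFT column per nperseg samples, shape (time, frequency)."""
    audio = np.concatenate(clips)
    n_frames = len(audio) // nperseg
    frames = audio[:n_frames * nperseg].reshape(n_frames, nperseg)
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

# Synthetic demo: a 1 kHz tone followed by a 40 kHz (ultrasonic) tone,
# the kind of band shift these overview images make obvious at a glance.
fs = 192_000
t = np.arange(fs * 10) / fs
clips = [np.sin(2 * np.pi * 1000 * t), np.sin(2 * np.pi * 40_000 * t)]
S = long_spectrogram(clips)   # shape (468, 4097)
```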
Upstairs in St Cecilia’s concert hall are six boxes on wheels. Each box contains a small computer, amplifier, speakers, battery and a compass sensor. Playing through these boxes are sounds recorded in and around the Meadows in Edinburgh. As you move the boxes around, the sound changes. When you’ve discovered positions that you think you like, sit back, lie down, relax and wait for others to shift things. If you want to make a change, intervene…
On 5th April, field recordist and sound artist Chris Watson helped set the CitySounds project into a wider context by presenting recordings from his sound installation Inside the circle of fire: A Sheffield Sound Map. Before a large audience in the Reid Concert Hall, Chris guided us through a project that describes the sound world of the city of his birth in dynamic multichannel sound.
As Chris said: “We tend to hear everything, but we rarely listen. We live in such a noise-polluted environment.” The event was a great opportunity to focus attention on the richness of urban sounds rather than ignoring them.
Over the last couple of weeks, the Edinburgh Forge has provided space and facilities for a couple of sessions of DIY and DIWO (Do-It-With-Others) ACD box building. We owe a big thanks to Evan Morgan and colleagues at the Forge for all their help and support.
We are holding a presentation featuring guest speaker Chris Watson. Chris is a field recordist and sound artist, and will reveal sounds from his sound installation Inside the Circle of Fire: A Sheffield Sound Map, a project that describes the sound world of the city of his birth in dynamic multichannel sound.
Registration (waitlist only) and more information here:
5:00 pm – 7:00 pm, Thursday 5th April and 2:00 pm – 5:00 pm Friday 6th April, St Cecilia’s Hall, 50 Niddry Street
Sound artist and composer Martin Parker has been listening to material recorded by Edinburgh’s CitySounds project and has been recording audio from the Meadows himself. He has placed some of these sounds inside custom-built portable loudspeakers. At this installation, visitors are invited to move the loudspeakers around the space in order to design and reorganise the soundscape as they hear fit. The sounds playing are not fixed but morph based on the direction that the speakers face: every change in a speaker's position changes the sounds you hear. Think of it as a kind of audible flower arranging.
2:00 pm – 5:00 pm Friday 6th April, St Cecilia’s Hall, 50 Niddry Street
Sonikebana installation open to the public.
Friday 6th April, Zoë Irvine Workshop, Sensing Information from Sound
4:00 pm – 7:00 pm Friday 6th April, starting at St Cecilia’s Hall, 50 Niddry Street
Zoë Irvine is an artist working with sound, exploring voice, field recording and the relationship between sound and image. Join Zoë for a listening and recording sound walk around Edinburgh’s heartland. Rather than listening out for the usual ghouls, you’ll be listening for the noises made by people, their machines and the environmental sounds of nature too. You’ll then explore ways of revealing information about the soundscape and what everyone and everything is doing within it.
Registration (waitlist only) and more information here:
The CitySounds project held two workshops on 19 February 2018, with special guest Kate Jones from University College London. The ideas for the workshops were conceived at our co-design workshop earlier this year.
Two aims that we identified for the community workshops were a) to find out what people might want to learn about nature and biodiversity in the city through sound (as well as potentially other forms of environmental monitoring and data collection) and b) to demonstrate how and what we can learn through the initial sound recordings coming from the project’s Audio Capture Devices and perhaps teach some basic skills in audio data analysis.
Our first workshop took place in the afternoon at the University of Edinburgh Informatics Forum, where Kate presented an excellent example of learning about nature in the city through sounds: the Nature-Smart Cities project. The project brings together environmental researchers and technologists to develop the world’s first end-to-end open-source system for monitoring bats, to be deployed and tested in the Queen Elizabeth Olympic Park, east London.
Kate gave a fantastic presentation about the project, starting with the foundation of monitoring biodiversity. How might we track biodiversity in urban areas and understand its role in helping us to live safely, productively and healthily? She encouraged us to imagine the Biodiversity version of ‘Industry 4.0’ — how could cyber-physical systems, Internet of Things, networks, data-driven and adaptive decision-making machines be employed to support biodiversity conservation and help stop the rapid loss of biodiversity across the planet?
Kate and her team developed the Echo Box, which is essentially a Shazam for bats. It picks up the frequencies that bats communicate with and uses an algorithm to identify the call and provide an indication of which species has been heard. It then sends the information back to a central server and displays the information online at http://www.batslondon.com/. Fifteen Echo Boxes are installed on lamp posts around Queen Elizabeth Olympic Park and have been continuously monitoring bats for three months.
While the original idea for the project came from Kate’s passion for biodiversity conservation, as other people found out about the publicly available data, they generated their own ideas from it. A group of students built an arcade machine based on the data that has become a highlight at the visitor centre, while researchers added bat data to a 3D augmented-reality visualisation of the park. Another group devised small 3D-printed gnomes, placed around the park, that people could interact with via a chatbot to find out more about bats in the park.
We were all thoroughly inspired by the incredible amount of work that went into the project and the possibilities for learning about nature through sound while also engaging a wider population with biodiversity in the city.
Simon Chapple then shared the vision for the CitySounds project and encouraged us to begin imagining all the things that we could learn through audio data. Smart sensors can recognise what is taking place in the environment, and an array of multiple sensors can work out spatially where a sound comes from. In a particular area, audio data can allow us to identify species of birds present, bat activity, volume of traffic, car accidents and more – and a wide-spectrum microphone can even allow us to record mice screaming at each other!
Following Simon, Jonathan Silvertown opened our imaginations to all the different creatures that are roaming around our cities and that we could potentially learn about through IoT and other technologically advanced forms of biodiversity monitoring. He showed us the National Biodiversity Network’s Atlas of Scotland, which keeps a record of all the creatures that have been recorded in a particular area. So, from where we were in the Informatics Forum in the centre of the city, this is what we might find:
We hope that the CitySounds project will provide not only a replicable method for learning about nature through sound but also a specific insight into the Edinburgh soundscape: from nature (weather, animals, birds, insects, bats), activities (walking, cycling, playing sport, festivities), transport (traffic, car horns, trains, planes) and machines (electrical and electronic devices, breaking glass, noise pollution) through to the one o’clock gun, the many incidents of fireworks and the festivals large and small that take place around the city throughout the year.
Come and help us build some wooden tree boxes, which will be installed around the Meadows with microphones inside them for the CitySounds project. This will be a great chance to learn some basic woodwork skills, whilst also contributing to an exciting community project. No previous woodwork experience required!
We now have two Audio Capture Devices (ACDs) delivering encrypted audio data successfully to our CitySounds server. Interestingly, every now and again one of the ACDs loses some WiFi signal and goes dark for a minute or so — perhaps a delivery truck or other vehicle in the adjacent street is blocking the signal.
A separate server script picks up the audio data files as soon as they arrive and moves them to a separate, inaccessible file partition and re-encrypts them with a wholly separate encryption key.
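A minimal sketch of such a pick-up script. The directory layout, key ID and file-naming convention are all assumptions (the real script's details are deliberately not described here), and the real script may also strip the transfer-layer encryption first; this version simply adds the archive layer on top:

```python
# Sketch of a pick-up-and-secure script. All paths, the key ID and the
# naming convention are hypothetical, not the project's actual values.
import subprocess
import time
from pathlib import Path

INBOX = Path("/srv/citysounds/inbox")     # where scp drops the files (assumed)
VAULT = Path("/secure/citysounds/vault")  # the inaccessible partition (assumed)
KEY_ID = "citysounds-archive"             # hypothetical GnuPG archive key

def vault_path(src: Path) -> Path:
    """Destination name: the extra .gpg marks the second encryption layer."""
    return VAULT / (src.name + ".gpg")

def secure_file(src: Path) -> None:
    # Encrypt with the archive key straight into the vault,
    # then remove the original from the inbox.
    subprocess.run(["gpg", "--encrypt", "--recipient", KEY_ID,
                    "--output", str(vault_path(src)), str(src)],
                   check=True)
    src.unlink()

def watch(interval: float = 5.0) -> None:
    while True:
        for f in INBOX.glob("*.gpg"):
            secure_file(f)
        time.sleep(interval)

if __name__ == "__main__":
    watch()
```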
Monday 12th March was something of a landmark for us: Simon finally got to install one of our Audio Capture Devices (ACDs) on a tree in the Meadows! He is using a clever combination of bungee cords and bike cables to make sure that they are firmly attached.
A few teething issues in getting the ACDs to talk to the server are being ironed out, and we should be able to report back soon on what data is being collected.
In preparation for this public launch, Silje toured notice boards around the Meadows to put up information leaflets. And for those who want to know more, we’ve added a QR code to the poster that points to our Privacy Notice.
So finally, we have been able to bring the full CitySounds Data Collector architecture online, and are now receiving encrypted audio data from our field-test device, placed in a private University garden, via our external WiFi AP mounted on the 5th floor of the Main Library.
The image above shows the 10-second audio samples (transferred via scp and separately encrypted with GnuPG) flowing through onto the CitySounds server from one of our Audio Capture Devices (ACDs). Never has a directory file listing looked so pretty!
Our Raspberry Pi-based ACDs are also now fully time-synchronised through our local NTP server, to ensure they work collectively and accurately to cover each 60-second block of time. Once all six ACDs are deployed, they will each record a 10-second slice in sequence.
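The round-robin schedule implied here can be sketched as follows: six NTP-synced ACDs each take one 10-second slice of every minute. The slot rule (ACD i records seconds 10·i to 10·i + 9) is our assumption about how the sequencing is arranged:

```python
# Round-robin recording schedule for six time-synchronised ACDs.
# The slot assignment below is an assumption about the sequencing.

def seconds_into_minute(now: float) -> int:
    """Position within the current 60-second block."""
    return int(now) % 60

def should_record(acd_index: int, now: float) -> bool:
    """True while it is this ACD's 10-second slot."""
    start = 10 * acd_index
    return start <= seconds_into_minute(now) < start + 10

# With six ACDs (indices 0-5) the slots tile each minute exactly:
# at any given second, exactly one ACD is recording.
```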
We are now on track to deploy to the trees in the Meadows in Edinburgh early next week: this will be a major accomplishment, especially given the extreme weather and strike disruption we have had to navigate over the last couple of weeks.