First outing for Sonikebana v1

Upstairs in St Cecilia’s concert hall are six boxes on wheels. Each box contains a small computer, amplifier, speakers, battery and a compass sensor. Playing through these boxes are sounds recorded in and around the Meadows in Edinburgh. As you move the boxes around, the sound changes. When you’ve discovered positions that you think you like, sit back, lie down, relax and wait for others to shift things. If you want to make a change, intervene…
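
To give a flavour of how this direction-dependent playback can work, here is a minimal sketch, not the installation’s actual software: it blends two recordings according to a compass heading, and the filenames and the heading-to-mix mapping are illustrative assumptions.

# Illustrative sketch only: blend two Meadows recordings according to a
# compass heading given as the first argument (0-359 degrees).
HEADING=$1
# Fold the heading into 0-180 degrees and map it to a 0.0-1.0 mix factor.
MIX=$(awk -v h="$HEADING" 'BEGIN { if (h > 180) h = 360 - h; printf "%.2f", h / 180 }')
REST=$(awk -v m="$MIX" 'BEGIN { printf "%.2f", 1 - m }')
# sox -m mixes its inputs; each -v scales the volume of the input that follows it.
# -d plays the result through the default audio device.
sox -m -v "$REST" meadows_north.wav -v "$MIX" meadows_south.wav -d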


Close up of laser etching on speaker casing

Chris Watson: Inside the Circle of Fire

On 5th April, field recordist and sound artist Chris Watson helped set the CitySounds project in a wider context by presenting recordings from his sound installation Inside the Circle of Fire: A Sheffield Sound Map. Before a large audience in the Reid Concert Hall, Chris guided us through a project that describes the sound world of the city of his birth in dynamic multichannel sound.

Audience at Reid Concert Hall, University of Edinburgh

As Chris said: “We tend to hear everything, but we rarely listen. We live in such a noise-polluted environment.” The event was a great opportunity to focus attention on the richness of urban sounds rather than ignoring them.

CitySounds Makes a Noise

We are lining up a number of exciting events to round off the current phase of CitySounds. Check out the details below.

Thursday 5th April, Talk by Chris Watson

3:00 pm – 4:30 pm, Thursday 5th April, Reid Concert Hall, Bristo Square

We are holding a presentation featuring guest speaker Chris Watson. Chris is a field recordist and sound artist, and will reveal sounds from his sound installation Inside the Circle of Fire: A Sheffield Sound Map, a project that describes the sound world of the city of his birth in dynamic multichannel sound.

Registration (waitlist only) and more information here:

https://www.eventbrite.com/e/city-sounds-chris-watson-tickets-43932741011

Thursday 5th April, sonikebana v1

5:00 pm – 7:00 pm, Thursday 5th April and 2:00 pm – 5:00 pm Friday 6th April, St Cecilia’s Hall, 50 Niddry Street
Sound artist and composer Martin Parker has been listening to material recorded by Edinburgh’s CitySounds project and has been recording audio from the Meadows himself. He has placed some of these sounds inside custom-built portable loudspeakers. At this installation, visitors are invited to move the loudspeakers around the space in order to design and reorganise the soundscape as they hear fit. The sounds playing are not fixed but morph based on the direction the speakers face: every change in a speaker’s position changes the sounds you hear. Think of it as a kind of audible flower arranging.

Registration and more information here:

https://www.eventbrite.com/e/city-sounds-sonikebana-v1-tickets-44409895192

Friday 6th April, sonikebana v2

2:00 pm – 5:00 pm Friday 6th April, St Cecilia’s Hall, 50 Niddry Street
Sonikebana installation open to the public.

Friday 6th April, Zoë Irvine Workshop, Sensing Information from Sound

4:00 pm – 7:00 pm Friday 6th April, starting at St Cecilia’s Hall, 50 Niddry Street

Zoë Irvine is an artist working with sound, exploring voice, field recording and the relationship between sound and image. Join Zoë for a listening and recording sound walk around Edinburgh’s heartland. Rather than listening out for the usual ghouls, you’ll be listening for the noises made by people, their machines and the environmental sounds of nature too. You’ll then explore ways of revealing information about the soundscape and what everyone and everything is doing within it.

Registration (waitlist only) and more information here:

https://www.eventbrite.com/e/city-sounds-zoe-irvine-tickets-44549730443

CitySounds Public Workshop 1

The CitySounds project held two workshops on 19 February 2018, with special guest Kate Jones from University College London. The ideas for the workshops were conceived at our co-design workshop earlier this year.

Two aims that we identified for the community workshops were: a) to find out what people might want to learn about nature and biodiversity in the city through sound (as well as, potentially, other forms of environmental monitoring and data collection); and b) to demonstrate how and what we can learn from the initial sound recordings coming from the project’s Audio Capture Devices, and perhaps teach some basic skills in audio data analysis.

Our first workshop took place in the afternoon at the University of Edinburgh Informatics Forum. Our special guest Kate Jones presented an excellent example of learning about nature in the city through sound: the Nature-Smart Cities project. The project brings together environmental researchers and technologists to develop the world’s first end-to-end open-source system for monitoring bats, to be deployed and tested in the Queen Elizabeth Olympic Park in east London.

Kate gave a fantastic presentation about the project, starting with the foundation of monitoring biodiversity. How might we track biodiversity in urban areas and understand its role in helping us to live safely, productively and healthily? She encouraged us to imagine the Biodiversity version of ‘Industry 4.0’ — how could cyber-physical systems, Internet of Things, networks, data-driven and adaptive decision-making machines be employed to support biodiversity conservation and help stop the rapid loss of biodiversity across the planet?

Kate Jones describes data processing pipeline for bat monitors

Kate and her team developed the Echo Box, which is essentially a Shazam for bats. It picks up the frequencies at which bats call and uses an algorithm to identify each call and indicate which species has been heard. It then sends the information back to a central server and displays it online at http://www.batslondon.com/. Fifteen Echo Boxes are installed on lamp posts around Queen Elizabeth Olympic Park and have been continuously monitoring bats for three months.

Olympic Park Echo Box

While the original idea for the project came from Kate’s passion for biodiversity conservation, as other people found out about the publicly available data they generated their own ideas from it. A group of students built an arcade machine based on the data that has become a highlight at the visitor centre, while researchers added bat data to a 3D augmented-reality visualisation of the park. Another group devised small 3D-printed gnomes, placed around the park, that people could interact with via a chatbot to find out more about the bats there.

‘Memory Gnome’ from Olympic Park

We were all thoroughly inspired by the incredible amount of work that went into the project and the possibilities for learning about nature through sound while also engaging a wider population with biodiversity in the city.

Simon Chapple then shared the vision for the CitySounds project and encouraged us to begin imagining all the things that we could learn through audio data. Smart sensors can recognise what is taking place in the environment, and an array of multiple sensors can work out spatially where a sound comes from. In a particular area, audio data can allow us to identify species of birds present, bat activity, volume of traffic, car accidents and more – and a wide spectrum microphone can even allow us to record mice screaming at each other!

Following Simon, Jonathan Silvertown got us imagining all the different creatures that are roaming around our cities and that we could potentially learn about through IoT and other technologically advanced forms of biodiversity monitoring. He showed us the National Biodiversity Network’s Atlas of Scotland, which holds records of all the species that have been observed in a particular area. So, from where we were in the Informatics Forum in the centre of the city, this is what we might find:

Screenshot of interactive map from NBN Atlas Scotland

We hope that the CitySounds project will provide not only a replicable method for learning about nature through sound but also a specific insight into the Edinburgh soundscape, from nature (weather, animals, birds, insects, bats), activities (walking, cycling, playing sport, festivities), transport (traffic, car horns, trains, planes), machines, electrical and electronic devices, breaking glass and noise pollution, through to the one o’clock gun, the many fireworks and the festivals large and small that take place around the city throughout the year.

Collaborative box building

Come and help us build some wooden tree boxes, which will be installed around the Meadows with microphones inside them for the CitySounds project. This will be a great chance to learn some basic woodwork skills, whilst also contributing to an exciting community project. No previous woodwork experience required!

Please register using this Eventbrite link.

Data is Flowing

We now have two Audio Capture Devices (ACDs) successfully delivering encrypted audio data to our CitySounds server. Interestingly, every now and again one of the ACDs loses its WiFi signal and goes dark for a minute or so; perhaps a delivery truck or other vehicle in the adjacent street is blocking the signal.

A separate server script picks up the audio data files as soon as they arrive, moves them to a separate, inaccessible file partition and re-encrypts them with a wholly separate encryption key.
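
As a rough illustration of that server-side step, here is a minimal sketch; the directory paths and GnuPG key ID are placeholders, not our actual configuration.

# Minimal sketch: sweep newly arrived clips off the landing area and into a
# directory on a separate, inaccessible partition, re-encrypting each one with
# a different (archive) key. All paths and the key ID are placeholders.
INCOMING=/srv/citysounds/incoming
VAULT=/srv/citysounds-private/vault
for f in "$INCOMING"/*; do
   [ -e "$f" ] || continue                 # nothing new has arrived
   gpg --batch --yes --encrypt --recipient ARCHIVE-KEY-ID \
       -o "$VAULT/$(basename "$f").gpg" "$f" && rm "$f"
done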

Into the Trees

Monday 12th March was something of a landmark for us: Simon finally got to install one of our Audio Capture Devices (ACDs) on a tree in the Meadows! He is using a clever combination of bungee cords and bike cables to make sure it is firmly attached.

ACD securely attached to a branch

A few teething issues in getting the ACDs to talk to the server are being ironed out, and we should be able to report back soon on what data is being collected.

In preparation for this public launch, Silje toured notice boards around the Meadows to put up information leaflets. And for those who want to know more, we’ve added a QR code to the poster that points to our Privacy Notice.

One of the Meadows Notice Boards

Launch Countdown

So finally, we have been able to bring the full CitySounds Data Collector architecture online, and we are now receiving encrypted audio data from our field-test device, which is placed in a private University garden, via our external WiFi access point mounted on the 5th floor of the Main Library.

The image above shows the 10-second audio samples (transferred via scp and separately encrypted with GnuPG) flowing through onto the CitySounds server from one of our Audio Capture Devices (ACDs). Never has a directory file listing looked so pretty!
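
For the curious, the device-side hand-off can be as simple as the following sketch; the key ID, hostname and paths are placeholders rather than our actual setup.

# Minimal sketch: encrypt each finished 10-second clip with GnuPG, then push
# it to the collector over scp. Key ID, host and paths are placeholders.
for clip in /home/pi/clips/*.wav; do
   [ -e "$clip" ] || continue
   gpg --batch --yes --encrypt --recipient CITYSOUNDS-KEY-ID "$clip"   # writes clip.wav.gpg
   scp "$clip.gpg" collector@citysounds-server:/srv/citysounds/incoming/ \
       && rm "$clip" "$clip.gpg"
done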

Our Raspberry Pi-based ACDs are also now fully time-synchronised through our local NTP server to ensure they work together accurately to cover each 60-second block of time. Once all six ACDs are deployed, they will each record a 10-second slice in sequence.
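
One simple way to schedule those slices is sketched below; it assumes each Pi knows its own index (0-5), and the ALSA device name and sample format are assumptions too, not our actual capture settings.

# Sketch: ACD number N (0-5) records seconds N*10 to N*10+9 of every minute.
ACD_INDEX=3                                  # this device's slot, 0-5
while true; do
   SEC=$(date +%S)
   WAIT=$(( (ACD_INDEX * 10 - 10#$SEC + 60) % 60 ))
   sleep "$WAIT"                             # wait for the start of our slot
   STAMP=$(date +%Y%m%d-%H%M%S)
   # capture a 10-second, 192 kHz mono clip from the USB microphone (device name assumed)
   arecord -D plughw:1,0 -f S16_LE -r 192000 -c 1 -d 10 "/home/pi/clips/$STAMP.wav"
done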

We are now on track to deploy to the trees in the Meadows in Edinburgh early next week: this will be a major accomplishment, especially given the extreme weather and strike disruption we have had to navigate over the last couple of weeks.

Suspicious Noise in the Night

Shifting gear

I am in the thick of developing the sound installation for this project, which will reveal some of the concepts behind our work and present some of the sounds captured by Simon Chapple’s sensor network. I’ll explain more about that in another post soon. Meanwhile, I’m taking a break from thinking about gel loudspeakers and compass and heading settings on mobile phones in order to say a little about my experience working with Simon’s Raspberry Pi-based wireless 192kHz audio beacon prototype earlier this year.

Simon lent me his prototype in late January so that I could hear what’s going on in my garden and run some noise-profiling tests. I was keen to find out whether the small mammals that must live in the garden are interested in and active around our compost heap. I dutifully positioned the sensor box where I hoped I’d hear mice and other mammals fighting over leftover potato peelings, but sadly, as far as I can tell at least, nothing of the sort: no fights or mating rituals at this time of year. The absence of activity is useful in itself, since it suggests that there has been plentiful food for small mammals to find earlier in the year or day, and that they’re not braving the wind, rain, snow and something new in the garden to get a late-night snack. However, a largely quiet night means that the few moments of sonic event are all the more interesting and easy to spot.

A word on the tools I’m using

I’ve been using SoX to generate spectrograms of the 10-second audio clips collected. It’s a good way to quickly check whether there is something of interest to listen to. With over 9 hours of material, though, it isn’t practical to listen back to the whole night. Instead, I first joined all of the files together using a single SoX command in a terminal window on OS X:

sox *.wav joined.wav

I then generated a spectrogram of that very long .wav file. However, a spectrogram of a 9-hour file needs to be massive to show any interesting detail. Instead, I grouped the files into one-hour blocks and rendered a 2500-pixel-wide spectrogram of each 10-second clip. It’s very quick to then scroll down through the images until something interesting appears. Here’s the .sh script I used:

### Generate a 2500-pixel-wide spectrogram PNG for each WAV file in the current directory
for file in *.wav; do
   outfile="${file%.*}.png"       # name the PNG after the WAV file
   title_in_pic="${file%.*}"      # print the filename as the title on the spectrogram
   sox "$file" -n spectrogram -t "$title_in_pic" -o "$outfile" -x 2500
done

The above script was hacked from this GitHub gist by hrywlms.

Something suspicious

From the rendered spectrograms, I can quickly see that there were some interesting things happening at a few points in the night and can zoom in further. For example, this moment at 1:43 am looks interesting:

Something suspicious at 1:43 am

It sounds like this:

I suspect that this is a mouse, cat or rat. Anyone reading this got a suggestion as to what it might be?

As the night drew on, the expected happened and birds began to appear — the first really obvious call was collected at 6:23 am. It looks like this:

First bird call 6:23 am

And it sounds like this:

Noise

If you’re able to hear this audio clip, then you’ll be aware of the noise that is encoded into the file. One of our challenges going forward is how to reduce and remove this. I’ve tried noise profiling and attempted to reduce the noise from the spectra, but this affected the sounds we want to hear and interpret: by removing the noise, you also remove parts of the sound that are useful. I’m reflecting on whether there are ways to improve how power is delivered from the battery to the Raspberry Pi in Simon’s box, and whether we need some USB cables with capacitors built in to stop the noise.

However, noise reduction may not be as important to others as it is to me. My speciality is sound itself, in particular how things sound, so I want to get rid of as much unnecessary noise as possible; that way, when I’m noisy, it’s intentional. For an IoT project, though, the listener isn’t going to be a human but a computer. Computers will analyse the files, computers will detect if something interesting is happening, and computers will attempt to work out what that interesting thing is based on what they’ve been told to look for. It’s highly likely that the noise, which very quickly makes these sounds irritating and hard to engage with for a human, may be perfectly fine for a computer to deal with. Let’s see.
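
For anyone who wants to try the same experiment, this kind of noise profiling can be done with SoX’s noiseprof and noisered effects. A minimal sketch follows; the filenames and the reduction amount are examples rather than the settings I used.

# Build a noise profile from a stretch of recording that contains only the
# background hiss, then subtract it from a clip of interest. The final number
# is the reduction amount (0-1): higher removes more noise, but also more of
# the sounds we actually want to keep.
sox quiet_background.wav -n noiseprof hiss.prof
sox suspicious_0143.wav suspicious_0143_cleaned.wav noisered hiss.prof 0.21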