Suspicious Noise in the Night

Shifting gear

I am in the thick of developing the sound installation for this project, which will reveal some of the concepts behind our work and present some of the sounds captured by Simon Chapple’s sensor network. I’ll explain more about that in another post soon. Meanwhile, I’m taking a break from thinking about gel loudspeakers and compass and heading settings on mobile phones in order to say a little about my experience working with Simon’s Raspberry Pi wireless 192kHz audio beacon prototype earlier this year.

Simon lent me his prototype in late January so that I could hear what’s going on in my garden and run some noise-profiling tests. I was keen to see whether the small mammals that must live in the garden are interested in, and active around, our compost heap. I dutifully positioned the sensor box where I hoped I’d hear mice and other mammals fighting over leftover potato peelings, but sadly, as far as I can tell at least, there was nothing of the sort: no fights or mating rituals at this time of year. The absence of activity is still useful, since it suggests that there has been plentiful food for small mammals to find earlier in the year or day, and that they’re not braving the wind, rain, snow and something new in the garden for a late-night snack. A largely quiet night also means that the few moments of sonic event are all the more interesting and easy to spot.

A word on the tools I’m using

I’ve been using SoX to generate spectrograms of the 10-second audio clips collected. It’s a quick way to inspect whether there is something of interest to listen to. With over 9 hours of material, though, it’s not practical to listen to the whole night again. Instead, I first joined all of the files together using a single SoX command in the terminal window on OS X:

sox *.wav joined.wav

I then generated a spectrogram of that very long .wav file. However, a spectrogram of a 9-hour file needs to be massive to show any interesting detail. Instead, I decided to group the files into blocks of an hour and then rendered a 2,500-pixel-wide spectrogram of each 10-second clip. It’s then very quick to scroll down through the images until something interesting appears. Here’s the .sh script I used:

### Generate a spectrogram PNG for each WAV file in the current directory
for file in *.wav; do
   outfile="${file%.*}.png"     # same basename as the clip, .png extension
   title_in_pic="${file%.*}"    # print the clip name as the image title
   sox "$file" -n spectrogram -t "$title_in_pic" -o "$outfile" -x 2500
done

The above script was hacked from a GitHub gist by hrywlms.
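To get the hour-long blocks, I moved the clips into one folder per hour and ran the script in each folder. Here’s a minimal sketch of how that can be done, assuming (hypothetically) that each clip’s filename begins with its capture time as HHMM, e.g. 0143-20.wav; the actual names written by the beacon may differ:

### Sort clips into one folder per hour of capture
for file in *.wav; do
   hour="${file:0:2}"       # first two characters of the name = hour
   mkdir -p "hour_$hour"    # create the hourly folder if it doesn't exist
   mv "$file" "hour_$hour/"
done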

Something suspicious

From the rendered spectrograms, I can quickly see that there were some interesting things happening at a few points in the night and can zoom in further. For example, this moment at 1:43 am looks interesting:

Something suspicious at 1:43 am

It sounds like this:

I suspect that this is a mouse, cat or rat. Anyone reading this got a suggestion as to what it might be?
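To zoom in on a moment like this, SoX can render a close-up spectrogram of just a slice of a clip. A minimal sketch, assuming (hypothetically) that the clip in question is called 0143.wav and the event sits roughly 2 to 5 seconds in:

### Trim to the 3 seconds starting at 2s, then render a wide spectrogram of that slice
sox 0143.wav -n trim 2 3 spectrogram -t "0143 close-up" -o 0143-zoom.png -x 2500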

As the night drew on, the expected happened and birds began to appear — the first really obvious call was collected at 6:23 am. It looks like this:

First bird call 6:23 am

And it sounds like this:

Noise

If you’re able to hear this audio clip, then you’ll be aware of the noise that is encoded into the file. One of our challenges going forward is how to reduce and remove this. I’ve tried noise profiling and attempted to subtract the noise from the spectra, but this affected the sounds we want to hear and interpret: by removing the noise, you also remove parts of the sound that are useful. I’m reflecting on this, and I think there are ways to improve how electricity is delivered from the battery to the Raspberry Pi in Simon’s box; we may also need USB cables with capacitors built in to stop the noise.

However, noise reduction may not be as important to others as it is to me. My speciality is sound itself, in particular how things sound, so I want to get rid of as much unnecessary noise as possible; then, when I’m noisy, it’s intentional. For an IoT project, though, the listener isn’t going to be a human but a computer. Computers will analyse the files, computers will detect if something interesting is happening, and computers will attempt to work out what that interesting thing is based on what they’ve been told to look for. It’s highly likely that the noise, which very quickly makes these sounds irritating and hard to engage with for a human, will be perfectly fine for a computer to deal with. Let’s see.
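For anyone who wants to experiment, SoX’s noiseprof and noisered effects are one way to try this kind of profiling. A minimal sketch, using hypothetical filenames, assuming quiet.wav contains a stretch of background hiss only and starting from a fairly gentle reduction amount of 0.2 (higher values scrub more hiss, but also more of the wanted signal):

### Build a noise profile from a clip containing only background hiss
sox quiet.wav -n noiseprof hiss.prof

### Subtract that profile from a recording, with reduction amount 0.2
sox 0623.wav 0623-denoised.wav noisered hiss.prof 0.2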

Site survey of trees on the Meadows

Simon Chapple and I met with Peter Davidson, one of the City of Edinburgh Council’s Park Rangers, to look at the options for installing our Audio Capture Devices (ACDs) in trees across the Meadows. Although there was a fresh wind, we were fortunate to have a clear, sunny day for our survey.

Simon and Peter sizing up a tree

To start off, Simon gave a brief introduction to his ‘bird box’ enclosures and electronic kit, and explained how they would be attached to the trees using bungee cords, plus a padlocked cable for security.

Screen grab showing 2 bars for the organicity WiFi Access Point
We then did a quick tour of parts of the Meadows where we could see that we were in range of the newly installed WiFi Access Point, appropriately enough named ‘organicity’. The main challenge was to find trees with branches in the ‘goldilocks’ zone: high enough for the ACDs to be out of harm’s way, but not so high that we couldn’t change the battery if necessary. (No, we haven’t yet got to the point where we can use solar panels or tap into the power source of lamp posts!) Another constraint is that we need to avoid trees which have been marked as possibly suffering from Dutch Elm Disease, though fortunately that doesn’t seem to be too prevalent on the Meadows.


Two views of the Community Garden supported by Greening our Street and FOMBL

We concluded with the happy feeling that there was a good number of trees we could use when we are ready to deploy the devices in public.

CitySounds Workshops on 19 February — sign up now!


In a previous blog post, we talked about how we were planning to organise a number of workshops as part of the CitySounds project. We’re now ready to launch the first one!

So please join us for our public workshops on how the Internet of Things and other new advances in technology can help us understand biodiversity, and how the health of urban greenspace contributes to the wellbeing of us all.

There will be two workshops, both of which will take place in the University of Edinburgh Informatics Forum on 19 February 2018. The workshops will present two projects — Nature-SmartCities in London and Edinburgh CitySounds — which are using the Internet of Things and bioacoustic monitoring to learn about biodiversity and nature in the urban landscape.

The first workshop is directed toward an academic and professional audience who are interested in research and application around the Internet of Things and data science in relation to biodiversity, health & wellbeing, and nature & greenspace in the city. It will take place 2:00pm–4:00pm.

Sign up here: https://citysoundsiot.eventbrite.co.uk.

The second workshop is a non-technical event, intended for anyone with a general interest in the connections between technology, data and biodiversity in the city. A key part of this workshop will be an interactive session in which we will generate and collect ideas and feedback about specific issues that are of interest to participants. We will also look at how we might use the Internet of Things to learn and communicate better about biodiversity in the city. The workshop will take place 5:30pm–7:45pm.

Sign up here: https://edinburghcitysounds.eventbrite.co.uk.

We would especially encourage people who want to give input/feedback to come to the evening event, but anyone is welcome at either workshop. Complete information is available on the Eventbrite pages.

Hope to see you there!

Follow @EdiLivingLab on Twitter for updates and share our Tweet!