Suspicious Noise in the Night

Shifting gear

I am in the thick of developing the sound installation for this project, which will reveal some of the concepts behind our work and present some of the sounds captured by Simon Chapple’s sensor network. I’ll explain more about that in another post soon. Meanwhile, I’m taking a break from thinking about gel loudspeakers and compass and heading settings on mobile phones in order to say a little about my experience working with Simon’s Raspberry Pi wireless 192kHz audio beacon prototype earlier this year.

Simon lent me his prototype in late January so that I could hear what’s going on in my garden and run some noise profiling tests. I was keen to find out whether the small mammals that must live in the garden are interested and active around our compost heap. I dutifully positioned the sensor box where I hoped I’d hear mice and other mammals fighting over leftover potato peelings but sadly, as far as I can tell at least, nothing of the sort: no fights or mating rituals at this time of year. Even the absence of activity is useful, since it suggests that there has been plentiful food for small mammals to find earlier in the year or day, and that they’re not risking the wind, rain, snow and something new in the garden for a late-night snack. A largely quiet night also means that the few sonic events that do occur are all the more interesting and easy to spot.

A word on the tools I’m using

I’ve been using SoX to generate spectrograms of the 10-second audio clips collected. It’s a quick way to check whether there is something of interest to listen to. With over 9 hours of material, though, it isn’t practical to listen through the whole night again. Instead, I first joined all of the files together with a single SoX command in the terminal on OS X:

sox *.wav joined.wav
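A quick sanity check on the result, though not part of the workflow above, is soxi, the file-inspection tool that ships with SoX:

soxi -D joined.wav    # prints the total duration in seconds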

I then generated a spectrogram of that very long .wav file. However, the image for a 9-hour file needs a massive resolution to show any interesting detail. Instead, I decided to group the files into blocks of an hour and then render a 2500-pixel spectrogram of each 10-second clip. It’s then very quick to scroll down through the images until something interesting appears. Here’s the .sh script I used:

#!/bin/bash
### Generate a spectrogram for each WAV file in the current directory
for file in *.wav; do
   outfile="${file%.*}.png"      # name the image after the clip
   title_in_pic="${file%.*}"     # embed the clip name as the image title
   # -n discards the audio output; -x sets the image width in pixels
   sox "$file" -n spectrogram -t "$title_in_pic" -o "$outfile" -x 2500
done

The above script was hacked from this GitHub gist by hrywlms.
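To group the clips into hourly blocks, a small loop is enough. Here’s a sketch that assumes, hypothetically, that each filename begins with an HHMMSS capture timestamp (e.g. 014300.wav); adjust the substring to match your own naming scheme:

# Sort clips into one directory per hour, assuming filenames
# begin with an HHMMSS timestamp (e.g. 014300.wav)
for file in *.wav; do
   hour="${file:0:2}"       # first two characters give the hour
   mkdir -p "hour-$hour"
   mv "$file" "hour-$hour/"
done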

Something suspicious

From the rendered spectrograms, I can quickly see that there were some interesting things happening at a few points in the night, and I can zoom in further. For example, this moment at 1:43 am looks interesting:

Something suspicious at 1:43 am

It sounds like this:

I suspect that this is a mouse, cat or rat. Anyone reading this got a suggestion as to what it might be?

As the night drew on, the expected happened and birds began to appear; the first really obvious call was collected at 6:23 am. It looks like this:

First bird call 6:23 am

And it sounds like this:

Noise

If you’re able to hear this audio clip, then you’ll be aware of the noise that is encoded into the file. One of our challenges going forwards is how to reduce and remove this. I’ve tried noise profiling and attempted to reduce the noise from the spectra, but this affected the sounds we want to hear and interpret: by removing the noise, you also remove parts of the sound that are useful. I’m reflecting on this, and I think there are ways to improve how power is delivered from the battery to the Raspberry Pi in Simon’s box; we may also need USB cables with built-in capacitors to stop the noise.

However, noise reduction may not be as important to others as it is to me. My speciality is sound itself, in particular how things sound, and I want to get rid of as much unnecessary noise as possible so that when I’m noisy, it’s intentional. For an IoT project, though, the listener isn’t going to be a human but a computer. Computers will analyse the files, computers will detect if something interesting is happening, and computers will attempt to work out what that interesting thing is, based on things they’ve been told to look for. The noise, which very quickly makes these sounds seem irritating and hard to engage with for a human, may well be perfectly fine for a computer to deal with. Let’s see.
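For anyone who wants to experiment, noise profiling can be done with SoX’s noiseprof and noisered effects. A minimal sketch, assuming the first half second of clip.wav contains only background noise (the filenames are illustrative):

# Build a noise profile from a stretch of pure background noise
sox clip.wav -n trim 0 0.5 noiseprof noise.prof

# Subtract that profile; the final number sets how aggressively noise is
# removed (around 0.2 to 0.3 is a common starting point; higher values
# start to eat into the sounds you want to keep)
sox clip.wav cleaned.wav noisered noise.prof 0.21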

Community Co-Design Workshop

Pursuing our goal of collaborating with Edinburgh Living Landscapes and other partners to explore how soundscape data can support community engagement, education and citizen science, and to increase the value created by urban greenspace, we invited stakeholders and interested parties to an initial CitySounds Co-Design workshop on 9th January 2018.

We were excited to see interest from across a wide range of disciplines and organisations, with participation from Scottish Wildlife Trust, The University of Edinburgh, City of Edinburgh Council Biodiversity team, Friends of the Meadows and Bruntsfield Links (FOMBL), the Bat Conservation Trust, Greening Our Streets and New Media Scotland.

It was a great event, full of ideas and enthusiasm. Here, we briefly mention the main topics of discussion.

Round table discussion

Exploring and understanding the data that will be captured

  • The six audio monitoring devices will each record 10-second samples in rotation, focusing on biodiversity in the Meadows. The devices will operate 24/7.
  • We are hoping that these will pick up birds, bats (whose calls are mostly inaudible to the human ear), rain, traffic noise, etc. It will be interesting to see how many anthropogenic sounds occur in the ultrasonic range.
  • We should be able to detect bird sounds within a 50–100m range and bats within a 30m range. (Interesting fact: bats are loud! Their calls are typically over 100 decibels.)
  • We are in the process of installing a WiFi access point on the 6th floor of the University Main Library, facing the Meadows.
  • Data will be transferred directly via WiFi to a server, so no data will be kept on the devices themselves (a rough sketch of such a capture loop appears below).
  • It was pointed out that it will be important to make it as easy as possible for small biodiversity organisations to access the collected audio data, since they often have few or no resources for dealing with technical intricacies.
Simon Chapple illustrates spectrogram of audio sample and map of proposed WiFi access point.
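The capture software itself is Simon’s, but as a rough sketch of the cycle each device performs (the recording settings, file names and server address below are assumptions, not the real configuration):

#!/bin/bash
# Hypothetical capture-and-upload loop for one device:
# record a 10-second clip, push it to the server, keep nothing locally.
while true; do
   stamp="$(date +%Y%m%d-%H%M%S)"
   arecord -q -f S16_LE -r 192000 -d 10 "/tmp/${stamp}.wav"         # 10-second sample at 192kHz
   scp "/tmp/${stamp}.wav" citysounds@example-server:/data/clips/   # assumed server and path
   rm "/tmp/${stamp}.wav"                                           # nothing kept on the device
done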

Community engagement actions in the project: who are we targeting and what do we want to achieve?

We are planning to organise at least three community engagement events during the course of the project:

  • First data literacy workshop (open to stakeholders)
  • Second data literacy workshop (open to interested groups and the public)
  • A final sonic art exhibition open to the public.

We spent the last section of the workshop discussing various ideas for these events.

Whiteboard capture of ideas for engagement events

The two data literacy workshops

These workshops will be an opportunity to communicate with the public about acoustic data and to engage their interest in data, IoT and urban greenspaces. We discussed:

  • What are we trying to achieve in the workshops?
  • What issues should the workshops address?
  • How can these apply in general to biodiversity monitoring?
  • How can they apply to the green network across the city that Edinburgh Living Landscapes is creating?
  • What is the target audience for the workshops? People already involved in biodiversity activities?
Co-designing in action

Measuring impact of biodiversity initiatives in the city

How can Edinburgh Living Landscape, FOMBL, the CEC Biodiversity team, and other interested partners use acoustic data to create evidence and evaluate the impact of their work? We are hoping to continue the monitoring after March 2018 (i.e., beyond the period of funding from OrganiCity); having 12 months of data or more would be valuable to us and to our partners.

FOMBL/Greening Our Streets:
Can the monitoring help identify ‘green tunnels’ through the city? This would be really valuable information for shaping future biodiversity initiatives.
City of Edinburgh Council:
Because it is time-consuming and expensive to collect biodiversity data, much of the information about sites across the city is out of date. It would be very useful if IoT technology could be used to get much more timely biodiversity data. Amongst other things, this would give evidence to support continued protection of those greenspaces.

The Sonic Art Exhibition

Martin Parker explains plan for sonic art exhibition.

We revisited plans for the end-of-project exhibition and event and considered whether to adapt or expand them. This event is intended both as a response to the audio assets collected by the project and as a way of engaging with the public. Martin Parker explained his original conception, in which six speakers would each be controlled by a location-aware app on a phone, determining what, how and when sound comes out of the speaker. In addition, the speakers would be movable, and members of the audience could arrange and re-organise the soundscape within the physical exhibition space.

Ideas that we discussed included:

  • How can we build a biodiversity storytelling aspect into the sounds? Should we, for example, include information about bats as an accompaniment to the audio?
  • How will we represent ultrasonic sounds to the public?
  • Can we capture different times of day on different speakers, so that people can hear sounds associated with the night, the morning, etc.?
  • Should we associate sounds from different parts of the Meadows with different parts of the room?

We are still working out the best processes and activities for our two data literacy workshops and the final sonic art exhibition, so watch out for further blog posts!