We now have two Audio Capture Devices (ACDs) delivering encrypted audio data successfully to our CitySounds server. Interestingly, every now and again one of the ACDs loses its WiFi signal and goes dark for a minute or so — perhaps a delivery truck or other vehicle in the adjacent street is blocking the signal.
A separate server script picks up the audio files as soon as they arrive, moves them to a separate, inaccessible partition, and re-encrypts them with a wholly different encryption key.
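The pickup script’s job can be sketched roughly as follows — the paths and GnuPG key name below are placeholders for illustration, not our real configuration:

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical paths and key -- the real partition layout and key ID are not public.
INCOMING = Path("/srv/citysounds/incoming")
VAULT = Path("/srv/vault/audio")   # separate, inaccessible partition
VAULT_KEY = "citysounds-archive"   # wholly separate GnuPG key

def vault_path(clip: Path, vault: Path = VAULT) -> Path:
    """Where a clip ends up after re-encryption in the vault."""
    return vault / (clip.name + ".gpg")

def archive_clip(clip: Path, vault: Path = VAULT) -> Path:
    """Move an incoming clip to the vault and re-encrypt it there."""
    dest = vault / clip.name
    shutil.move(str(clip), str(dest))
    # Re-encrypt with the archive key (requires gpg on the server).
    subprocess.run(
        ["gpg", "--encrypt", "--recipient", VAULT_KEY, str(dest)],
        check=True,
    )
    dest.unlink()  # gpg wrote dest + ".gpg"; drop the intermediate copy
    return vault_path(clip, vault)
```

In practice the script runs continuously, sweeping `INCOMING` as files land.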
Monday 12th March was something of a landmark for us: Simon finally got to install one of our Audio Capture Devices (ACDs) on a tree in the Meadows! He is using a clever combination of bungee cords and bike cables to make sure that they are firmly attached.
A few teething issues in getting the ACDs to talk to the server are being ironed out, and we should be able to report back soon on what data is being collected.
In preparation for this public launch, Silje toured notice boards around the Meadows to put up information leaflets. And for those who want to know more, we’ve added a QR code to the poster that points to our Privacy Notice.
So finally, we have been able to bring the full CitySounds Data Collector architecture online, and we are now receiving encrypted audio data from our field test device, placed in a private University garden, via our external WiFi AP mounted on the 5th floor of the Main Library.
The image above shows the 10-second audio samples (transferred via scp and separately encrypted with GnuPG) flowing through onto the CitySounds server from one of our Audio Capture Devices (ACDs). Never has a directory file listing looked so pretty!
Our Raspberry Pi based ACDs are also now fully time-synchronised through our local NTP server, ensuring they work collectively and accurately to cover each 60-second block of time. Once all six ACDs are deployed, they will each record a 10-second slice in sequence.
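Because every ACD shares the same NTP-disciplined clock, the rotation needs no coordination messages: each device can work out from the wall clock alone whether the current 10-second slot is its own. A minimal sketch of the idea — the device numbering and function names are my own illustration, not the deployed code:

```python
SLOT_SECONDS = 10
DEVICES = 6  # one ACD per 10-second slot of each minute

def active_device(epoch_seconds: int) -> int:
    """Which ACD (0-5) should be recording at this moment."""
    return (epoch_seconds % (SLOT_SECONDS * DEVICES)) // SLOT_SECONDS

def my_slot_now(device_id: int, epoch_seconds: int) -> bool:
    """True when it is this device's turn to record."""
    return active_device(epoch_seconds) == device_id
```

Device 0 records seconds 0–9 of each minute, device 1 records 10–19, and so on, so together the six devices cover the full minute with no overlap.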
We are now on track to deploy to the trees in the Meadows in Edinburgh early next week: this will be a major accomplishment, especially given the extreme weather and strike disruption we have had to navigate over the last couple of weeks.
I am in the thick of developing the sound installation for this project, which will reveal some of the concepts behind our work and show some of the sounds that will be captured by Simon Chapple’s sensor network. I’ll explain more about that in another post soon. Meanwhile, I’m taking a break from thinking about gel loudspeakers and compass and heading settings on mobile phones in order to say a little about my experience working with Simon’s Raspberry Pi based wireless 192kHz audio beacon prototype earlier this year.
Simon lent me his prototype in order for me to hear what’s going on in my garden in late January and to run some noise profiling tests. I was keen to see if the small mammals that must live in the garden are interested and active around our compost heap. I dutifully positioned the sensor box where I hoped I’d hear mice and other mammals fighting over leftover potato peelings but sadly — as far as I can tell at least — nothing of the sort: no fights or mating rituals at this time of year. The absence of activity is useful, since it suggests that there has been plentiful food for small mammals to find earlier in the year/day and they’re not risking the wind, rain, snow and something new in the garden to get a late night snack. However, a largely quiet night means that the few moments of sonic event are all the more interesting and easy to spot.
A word on the tools I’m using
I’ve been using SoX to generate spectrograms of the 10-second audio clips collected. It’s a good way to quickly check whether there is something of interest to listen to. With over 9 hours of material, though, it isn’t practical to listen back to the whole night. Instead, I first joined all of the files together with a single SoX command in a terminal on OS X:
sox *.wav joined.wav
I then generated a spectrogram of that very long .wav file. However, a spectrogram of a 9-hour file needs to be massive to show any interesting detail. Instead, I decided to group the files into blocks of an hour and then render a 2500-pixel-wide spectrogram of each 10-second clip. It’s very quick to then scroll down through the images until something interesting appears. Here’s the .sh script I used:
### Generate a spectrogram for each WAV file
for file in *.wav; do
    title_in_pic="${file%.wav}"
    outfile="${file%.wav}.png"
    sox "$file" -n spectrogram -t "$title_in_pic" -o "$outfile" -x 2500
done
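The hourly grouping itself is easy to script too. A small Python sketch — note that the timestamped filename format here is my own illustration, not necessarily what the ACDs actually emit:

```python
from collections import defaultdict

def group_by_hour(filenames):
    """Group clip filenames into hourly blocks.

    Assumes names like '20180312-014300.wav' (YYYYMMDD-HHMMSS) --
    an illustrative format, not necessarily the real one.
    """
    blocks = defaultdict(list)
    for name in filenames:
        stamp = name.split(".")[0]  # '20180312-014300'
        hour = stamp[:11]           # '20180312-01' = date plus hour
        blocks[hour].append(name)
    return dict(blocks)
```

Each hourly block can then be joined with the same `sox *.wav joined.wav` trick before rendering its spectrogram.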
From the rendered spectrograms, I can quickly see that there were some interesting things happening at a few points in the night and can zoom in further. For example this moment at 1:43 am looks interesting:
It sounds like this:
I suspect that this is a mouse, cat or rat. Anyone reading this got a suggestion as to what it might be?
As the night drew on, the expected happened and birds began to appear — the first really obvious call was collected at 6:23 am. It looks like this:
And it sounds like this:
If you’re able to hear this audio clip, then you’ll notice the noise that is encoded into the file. One of our challenges going forward is how to reduce and remove this. I’ve tried noise profiling and attempted to subtract the noise from the spectra, but this affected the sounds we want to hear and interpret: basically, by removing the noise, you also remove parts of the sound that are useful. I’m reflecting on this, and I think there are ways to improve how electricity is distributed from the battery to the Raspberry Pi in Simon’s box, and perhaps we need USB cables with built-in capacitors to suppress the noise. However, noise reduction may not be as important to others as it is to me. My speciality is sound itself, in particular how things sound, so I want to get rid of as much unnecessary noise as possible; when I’m noisy, it should be intentional. For an IoT project, though, the listener isn’t going to be a human but a computer. Computers will analyse the files, computers will detect if something interesting is happening, and computers will attempt to work out what that interesting thing is, based on things they’ve been told to look for. It’s highly likely that the noise, which very quickly makes these sounds seem irritating and hard to engage with for a human, may well be perfectly fine for a computer to deal with. Let’s see.
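The noise profiling I tried is essentially classic spectral subtraction. This toy version — my own illustration of the general technique, not the tool I actually used — also shows why removing the noise floor eats into wanted sound: anything at or below the estimated floor in each frequency bin is discarded along with the hum.

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=256):
    """Crude spectral subtraction: estimate a per-frequency noise
    floor from a noise-only sample, then subtract it from each
    frame's magnitude spectrum (phase is left unchanged)."""
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame]))
    cleaned = []
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        # Clamp at zero: bins quieter than the noise floor are wiped out,
        # which is exactly where wanted detail gets lost too.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        cleaned.append(np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame))
    return np.concatenate(cleaned)
```

Real tools are far more careful (overlapping windows, smoothing, over-subtraction factors), but the trade-off is the same.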
Simon Chapple and I met with Peter Davidson, one of the City of Edinburgh Council’s Park Rangers, to look at the options for installing our Audio Capture Devices (ACDs) in trees across the Meadows. Although there was a fresh wind, we were fortunate that it was a clear, sunny day to carry out our survey.
To start off, Simon gave a brief introduction to his ‘bird box’ enclosures and electronic kit, and explained how they would be attached to the trees using bungee cords, plus a padlocked cable for security.
We then did a quick tour of parts of the Meadows where we could see that we were in range of the newly-installed WiFi Access Point, appropriately enough named ‘organicity’. The main challenge was to find trees with branches in the ‘goldilocks’ zone: high enough for the ACDs to be out of harm’s way, but not too high for us to change the battery when necessary. (No, we haven’t yet got to the point where we can use solar panels or tap into the power supply of lamp posts!) Another constraint is that we need to avoid trees which have been marked as possibly suffering from Dutch Elm Disease, though fortunately that doesn’t seem to be too prevalent on the Meadows.
We concluded with the happy feeling that there was a good number of trees that we could use when we are ready to launch the devices in public.
In a previous blogpost, we talked about how we were planning to organise a number of workshops as part of the CitySounds project. We’re now ready to launch the first one!
So please join us for our public workshops on how the Internet of Things and other new advances in technology can help us understand biodiversity and how the health of the urban greenspace contributes to the wellbeing of us all.
There will be two workshops, both of which will take place in the University of Edinburgh Informatics Forum on 19 February 2018. The workshops will present two projects — Nature-SmartCities in London and Edinburgh CitySounds — which are using the Internet of Things and bioacoustic monitoring to learn about biodiversity and nature in the urban landscape.
The first workshop is directed toward an academic and professional audience who are interested in research and application around the Internet of Things and data science in relation to biodiversity, health & wellbeing, and nature & greenspace in the city. It will take place 2:00pm–4:00pm.
The second workshop is a non-technical event, intended for anyone with a general interest in the connections between technology, data and biodiversity in the city. A key part of this workshop will be an interactive session in which we will generate and collect ideas and feedback about specific issues that are of interest to participants. We will also look at how we might use the Internet of Things to learn and communicate better about biodiversity in the city. The workshop will take place 5:30pm–7:45pm.
In order to capture full audio data, we will be using Raspberry Pi Zero W boards to send data over WiFi, and we have now installed a new WiFi Access Point to receive it. The Access Point is located on the South West corner of the University Main Library, as indicated by the blue arrow on this map:
The photo below shows a view of the library from the Meadows, followed by a close-up of the newly installed Access Point (a small grey box).
We are looking forward to testing the reception range of the new device.
An earlier post described my initial steps in building an audio monitoring device, and over the last couple of weeks, I have worked on putting the electronics inside an enclosure that is both waterproof and unobtrusive when installed in a tree. We refer to it as the “bird-box”. The box is made largely of 3mm plywood, with some thicker wood framing, and has been stained and varnished to weatherproof it. The design enables easy separate access to change the battery without dislodging the Raspberry Pi Zero W processor and the Ultramic. On the inside, we use hermetically sealed plastic lunch boxes to hold the sensitive electronics, with sealed punch-throughs for the various connecting cables. It’s cheap and very effective.
Our next step was to carry out some field-testing of the device. We decided to do this in the private garden of a University of Edinburgh property, close enough to the Meadows to capture representative samples of sounds in the environment. I installed a temporary WiFi access point in the building to pick up the data from the prototype device in the garden, which is collected on a laptop also sited within the building.
Here’s a small sample of what we recorded over the three days of wind, snow, rain and freezing temperatures. The unit, including its 30,000 mAh power bank, performed well in these challenging conditions.
This audio sample is indicative of what kinds of things we can detect in the urban environment: an emergency siren in the background, a stonemason working on a nearby building, and a snatch of bird song. The spectrogram below illustrates the different frequency ranges at which the sounds occur, from 0kHz up to 20kHz.
The bottom pink line is ambient sound.
The faint wavy pink line above that is the siren.
The strong pink fence-like pattern above that is the sound of the stonemason tapping away.
Finally, the little pink burst (between 3kHz and 5kHz) just before the last two taps from the stonemason is the clearly-audible bird song.
Listen again whilst looking at the image and you can observe how the sounds interact with each other.
We are excited to see that the recording device, the WiFi router and the computer all seem to be working together well.
Pursuing our goal of collaborating with Edinburgh Living Landscapes and other partners to explore how soundscape data can support community engagement, education and citizen science and increase the value created by urban greenspace, we invited stakeholders and interested parties to an initial CitySounds Co-Design workshop on 9th January 2018.
It was a great event, full of ideas and enthusiasm. Here, we briefly mention the main topics of discussion.
Exploring and understanding the data that will be captured
The six audio monitoring devices will each record 10-second samples in rotation, focusing on biodiversity in the Meadows. The devices will operate 24/7.
We are hoping that these will pick up birds, bats (which cannot be heard by the human ear), rain, traffic noise, etc. It will be interesting to see how many anthropogenic sounds occur in the ultrasonic range.
We should be able to detect bird sounds within a 50–100m range and bats within a 30m range. (Interesting fact: Bats are loud! Their signals are typically over 100 decibels.)
We are in the process of installing a WiFi access point on the 6th floor of the University Main Library, facing the Meadows.
Data will be directly transferred via WiFi to a server—so no data will be kept on the devices themselves.
It was pointed out that it will be important to make it as easy as possible for small biodiversity organisations to access the collected audio data, since often these have little or no resources for dealing with technical intricacies.
Community engagement actions in the project: who are we targeting and what do we want to achieve?
We are planning to organise at least three community engagement events during the course of the project:
First data literacy workshop (open to stakeholders)
Second data literacy workshop (open to interested groups and the public)
A final sonic art exhibition open to the public.
We spent the last section of the workshop discussing various ideas for these events.
The two data literacy workshops
These workshops will be an opportunity to communicate with the public about acoustic data and to engage their interest in data, IoT and urban greenspaces. We discussed:
What are we trying to achieve in the workshops?
What issues should the workshops address?
How can these apply in general to biodiversity monitoring?
How can they apply to the green network across the city that Edinburgh Living Landscapes is creating?
What is the target audience for the workshops? People already involved in biodiversity activities?
Measuring impact of biodiversity initiatives in the city
How can Edinburgh Living Landscape, FOMBL, the CEC Biodiversity team, and other interested partners use acoustic data to create evidence and evaluate the impact of their work? We are hoping to continue the monitoring after March 2018 (i.e., beyond the period of funding from OrganiCity) — having 12 months of data or more would be valuable to us and to our partners.
FOMBL/Greening Our Street:
Can the monitoring help identify ‘green tunnels’ through the city? This would be really valuable information for shaping future biodiversity initiatives.
City of Edinburgh Council:
Because it is time-consuming and expensive to collect biodiversity data, much of the information about sites across the city is out of date. It would be very useful if IoT technology could be used to get much more timely biodiversity data. Amongst other things, this would give evidence to support continued protection of those greenspaces.
The Sonic Art Exhibition
We revisited plans for the end-of-project exhibition and event and considered whether to adapt or expand it. This event is intended to be both a response to the audio assets collected by the project and simultaneously a way of engaging with the public. Martin Parker explained his original conception, where six speakers would each be controlled by a location-aware app on a phone, determining what, how and when sound comes out of the speaker. In addition, the speakers would be movable, and members of the audience could arrange and re-organise the soundscape within the physical exhibition space.
Ideas that we discussed included:
How can we build a biodiversity storytelling aspect to the sounds? Should we, for example, include information about bats as an accompaniment to the audio?
How will we represent ultrasonic sounds to the public?
Can we capture different times of day on speakers, so that people can hear sounds associated with the night, the morning, etc.?
Should we associate sounds from different parts of the Meadows with different parts of the room?
We are still working out the best processes and activities for our two data literacy workshops and the final sonic art exhibition, so watch out for further blog posts!
Although we held a number of meetings between different partners during the inception phase of the experiment, it was only on 4th December 2017 that most of us managed to meet face-to-face in the Alt-W LAB in Edinburgh’s City Art Centre.
After a quick review of the project deliverables, milestones and schedule, Simon Chapple provided an update on the audio capture framework and plans.
Graham Stone pointed out that it will be important to be able to correctly interpret *missing* audio data, such as the absence of certain bird sounds. He suggested that one way of providing a baseline of detectability would be to play pre-recorded samples of wildlife sounds at natural levels to determine the sensitivity range of sensors.
We agreed on the importance of providing transparent information about the project to relevant stakeholders and the general public. This will be addressed in the New Year as we make more progress on understanding the technical dimensions of the project.
Finally, we discussed the fact that we have very little time to prepare the end-of-project sound installation, and planning the content and requirements for this will have to be addressed as soon as possible.