Broadband Acoustic Network | Lifewatch regional portal

Broadband Acoustic Network


Scientific background

Humans rely mainly on light to detect and understand their environment. But what do animals do in environments where little light penetrates? This is the case in marine ecosystems. Sound propagates much more efficiently than light through fluids such as water. Consequently, most species living underwater presumably interact with their environment primarily through sound rather than vision. At relatively low cost, marine soundscapes can be recorded over long periods of time in order to observe long-term trends. They provide information on geophysical events and weather, on human activities, and on the animals living in the environment, entirely non-invasively, by passively listening at a distance. Soundscapes are often compared to identify good versus bad habitats, or changes in an environment over time. The broadband acoustic network will record continuously from 10 Hz to 50 kHz, covering most geophonic sounds, anthropogenic noise (except sonar and seafloor-mapping technologies such as multibeam echosounders) and biophonic events. Higher-frequency sounds such as harbor porpoise clicks (120 - 145 kHz, mode 132 kHz) will be monitored by the Cetacean Passive Acoustic Network.
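As a minimal illustration of the kind of spectral summary used in soundscape work (not the network's actual processing pipeline), the sketch below shows why a 10 Hz to 50 kHz band implies a sample rate of at least 100 kHz, and computes an averaged spectrum of a synthetic recording; the tone frequency and signal are invented for the example.

```python
import numpy as np
from scipy import signal

# The network records 10 Hz - 50 kHz; by the Nyquist criterion the
# sample rate must be at least twice the highest frequency of interest.
F_MAX = 50_000
FS = 2 * F_MAX  # 100 kHz minimum sampling rate

# Synthetic 1-second "recording": a 1 kHz tone (standing in for a
# biophonic event) buried in broadband noise.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * rng.standard_normal(FS)

# Welch power spectral density: an averaged spectrum of the kind used
# to summarise soundscapes over long recording periods.
freqs, psd = signal.welch(x, fs=FS, nperseg=4096)

# The tone dominates the averaged spectrum near 1 kHz.
peak_freq = freqs[np.argmax(psd)]
print(f"peak at {peak_freq:.0f} Hz")
```

In practice such averaged spectra are stacked over hours or months into long-term spectral averages, which is what makes comparing habitats or tracking change over time tractable.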


The broadband acoustic network will be operational from summer 2020. Four long-term acoustic recorders will be deployed on tripod frames on the sea bottom. Each tripod is attached to a buoy with an acoustic release (Vemco VR2AR), allowing recovery of the equipment. In some cases, C-PODs (see Cetacean Passive Acoustic Network) will also be attached to the same tripod. Two of the recorders will be deployed at fixed locations, with their data downloaded every 5 months; the other two will be relocated every 2 or 3 months. This strategy makes it possible to compare spatial and temporal patterns.

Useful links:

  • pyhydrophone on GitHub: pyhydrophone is an open-source Python package developed to ease importing underwater sound data recorded with a hydrophone into Python, so that post-processing and AI methods can easily be applied to the data afterwards. Different recorders can be added, each with its own way of reading metadata, so scientists do not have to worry about the format, only about the outcome. The package is under constant development and improvement.
  • pyporcc on GitHub: pyporcc is an open-source Python package developed to detect and classify harbor porpoise clicks in audio files using the PorCC algorithm, and it offers the possibility of creating new click classifiers. It provides a framework to train models such as Support Vector Machines, Linear Support Vector Machines, Random Forests and K-Nearest Neighbors that classify sound clips as noise, low-quality, or high-quality harbor porpoise clicks. The PAMGuard algorithm for detecting candidate click clips is also implemented.