Sonic time-lapse method optimization

Using soundscape recordings and the sound-lapse-assemblage method, a series of simulations was run to process the acoustic signals and render them as uncompressed .wav files. The simulation parameters were tuned so that the renderings faithfully reproduced the acoustic features captured in the periodic 24-hour recordings. Several conditions, such as the number of samples and their durations, were analyzed and compared in order to produce an effective lapsing effect for the wetlands’ soundscapes. Transitions were also a central concern, and several combination methods were tested with the goal of generating a seamless auditory experience. In collaboration with ornithologist Jorge Tomasevic, machine learning routines were implemented so that the application could automatically recognize relevant wildlife features. Figure 3 shows the overall structure of the soundlapse algorithm, and Figure 4 compares spectrograms of recordings with different durations (A, B, C, and D) against the continuous 24-hour recording from which the samples were collected (bottom panel).

Spectrograms of various soundlapse files generated from a continuous 24-hour recording (bottom image) from the Parque Urbano Wetlands.
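The assembly step described above, selecting a number of fixed-duration samples from a long recording and joining them with smooth transitions, can be sketched as follows. This is a minimal illustration, not the project’s actual implementation: the function name `soundlapse`, the even spacing of samples, and the linear crossfade are assumptions chosen for clarity, and a synthetic signal stands in for a real 24-hour recording.

```python
import numpy as np

def soundlapse(audio, sr, num_samples=24, seg_dur=5.0, fade_dur=1.0):
    """Condense a long mono recording into a time-lapse.

    Takes `num_samples` evenly spaced segments of `seg_dur` seconds
    and joins them with linear crossfades of `fade_dur` seconds,
    approximating the seamless transitions discussed in the text.
    """
    seg_len = int(seg_dur * sr)
    fade_len = int(fade_dur * sr)
    # Evenly spaced start positions across the full recording.
    starts = np.linspace(0, len(audio) - seg_len, num_samples).astype(int)
    segments = [audio[s:s + seg_len] for s in starts]

    fade_in = np.linspace(0.0, 1.0, fade_len, dtype=audio.dtype)
    fade_out = 1.0 - fade_in
    out = segments[0].copy()
    for seg in segments[1:]:
        # Overlap the tail of the output with the head of the next segment.
        out[-fade_len:] = out[-fade_len:] * fade_out + seg[:fade_len] * fade_in
        out = np.concatenate([out, seg[fade_len:]])
    return out

# Example: condense a synthetic 24-hour mono signal
# (a low sample rate keeps the demonstration fast).
sr = 100
day = np.random.randn(24 * 3600 * sr).astype(np.float32)
lapse = soundlapse(day, sr, num_samples=24, seg_dur=5.0, fade_dur=1.0)
```

With these parameters the output contains 24 five-second segments overlapped by 23 one-second crossfades, so the whole day is compressed into under two minutes of audio; varying `num_samples`, `seg_dur`, and the transition method corresponds to the conditions compared in the simulations.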