This paper presents a personalised immersive soundscape system that integrates natural sound content, algorithmic spectral regulation, and spatial audio rendering to support the construction of coherent, perceptually balanced soundscapes. The system enables users to assemble individualised soundscapes by selecting natural recordings assigned to low-, mid-, and high-frequency roles, while system-level constraints regulate how these elements interact.
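To make the role-based selection concrete, the sketch below (in Python, chosen here purely for illustration) models sound elements tagged with low-, mid-, and high-frequency roles together with one plausible system-level constraint, namely that every role must be covered. The class names and the coverage rule are hypothetical assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from enum import Enum

class BandRole(Enum):
    LOW = "low"    # e.g. distant surf, low wind rumble
    MID = "mid"    # e.g. steady rainfall, rustling foliage
    HIGH = "high"  # e.g. birdsong, insect chorus

@dataclass
class SoundElement:
    name: str       # user-facing label for the recording
    file: str       # path to the natural sound recording
    role: BandRole  # frequency role assigned to this element

def validate_selection(elements: list[SoundElement]) -> list[SoundElement]:
    """Hypothetical system-level constraint: every frequency role is covered."""
    missing = set(BandRole) - {e.role for e in elements}
    if missing:
        raise ValueError(f"no sound selected for roles: {sorted(r.value for r in missing)}")
    return elements
```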
Spectral balance is achieved through a nonlinear least-squares optimisation procedure that computes relative gain adjustments across the selected sounds, minimising spectral unevenness in a perceptually motivated one-third-octave representation. This approach supports broadband balance without prescribing specific sound materials, allowing user preference to be preserved within an ecologically informed design framework.
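A minimal sketch of such a balancing step is given below, assuming each sound's magnitudes are already available in one-third-octave bands. The residual definition (deviation of the summed band levels from their mean), the log-gain parameterisation, and the use of SciPy's least_squares solver are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical input: per-sound magnitudes in the 31 standard
# one-third-octave bands (rows: selected sounds, columns: bands).
# Real values would come from a filter-bank analysis of each recording.
rng = np.random.default_rng(0)
band_mags = rng.uniform(0.1, 1.0, size=(3, 31))

def unevenness(log_gains, mags):
    """Residuals: deviation of the summed band levels from their mean.

    The first sound is pinned at unity gain so the solver returns gains
    relative to it; optimising in log-gain space keeps all gains positive.
    """
    gains = np.exp(np.concatenate(([0.0], log_gains)))
    levels = 20.0 * np.log10(gains @ mags)  # combined band levels in dB
    return levels - levels.mean()           # spectral unevenness

fit = least_squares(unevenness, x0=np.zeros(band_mags.shape[0] - 1),
                    args=(band_mags,))
relative_gains = np.exp(np.concatenate(([0.0], fit.x)))
print("relative gains:", relative_gains)
```

Because only the ratios between gains matter for balance, pinning one gain removes the scale ambiguity from the fit.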
The resulting soundscape is rendered using an immersive spatial audio pipeline based on third-order Ambisonics and binaural decoding, supporting headphone-based listening, virtual environments, and spatial installations. Optional head-tracking can be employed to stabilise spatial perception during listener movement without affecting spectral regulation or content selection.
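The sketch below shows one plausible shape for such a pipeline: a mono source is encoded into the 16 channels of third-order Ambisonics via real spherical harmonics and decoded to binaural by per-channel filtering. The normalisation convention, the stub HRIR filters, and all names are illustrative assumptions, not the system's actual renderer.

```python
import numpy as np
from scipy.special import sph_harm  # renamed sph_harm_y in newer SciPy

def real_sh(order, azimuth, zenith):
    """Real spherical harmonics up to `order`, in ACN channel ordering.

    Built from SciPy's complex harmonics; a production encoder would
    match the decoder's normalisation convention (e.g. SN3D) exactly.
    """
    coeffs = []
    for l in range(order + 1):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, azimuth, zenith)
            if m < 0:
                coeffs.append(np.sqrt(2) * (-1) ** m * y.imag)
            elif m == 0:
                coeffs.append(y.real)
            else:
                coeffs.append(np.sqrt(2) * (-1) ** m * y.real)
    return np.array(coeffs)

# Encode a placeholder mono signal at one direction into the 16
# channels of third-order Ambisonics (head tracking would rotate
# this sound field before decoding).
signal = np.random.default_rng(1).normal(size=48000)
enc = real_sh(3, azimuth=np.deg2rad(45), zenith=np.deg2rad(80))
ambi = enc[:, None] * signal[None, :]            # shape (16, N)

# Stub binaural filters: per-channel HRIR-derived impulse responses,
# here replaced by unit impulses so the example stays self-contained.
sh_hrirs = np.zeros((enc.size, 2, 256))
sh_hrirs[:, :, 0] = 1.0
binaural = np.stack([
    sum(np.convolve(ambi[c], sh_hrirs[c, ear]) for c in range(enc.size))
    for ear in range(2)
])                                               # shape (2, N + 255)
```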
By translating principles from acoustic ecology into explicit computational constraints, the proposed system demonstrates how personalised soundscapes can be constructed in a controlled yet flexible manner. The work contributes a design-oriented framework for computational soundscape construction and highlights opportunities for applying algorithmically mediated natural soundscapes in digital culture, including immersive media, sound art, and interactive environments.