The project presents an interactive soundscape based on real-time algorithmic sound synthesis, embedded in a virtual 3D outdoor environment. The user is invited to explore this virtual world, where objects scattered throughout the space, each with distinct materials and sonic properties, call for active interaction.
The sounds are not pre-recorded but are generated algorithmically via procedural audio models, probing the boundaries of natural sound mimesis. By combining several techniques (additive, subtractive, modal, and FM synthesis), the project shapes a hybrid sonic language that hovers between the recognizable and the imaginary.
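To illustrate one of the techniques named above, here is a minimal sketch of FM (frequency modulation) synthesis in Python. This is not the project's actual Faust or ChucK code; the function name and all parameter values are illustrative assumptions. It implements the classic Chowning formulation, in which the phase of a carrier sine wave is modulated by a second sine wave, producing a spectrum far richer than either oscillator alone:

```python
import math

def fm_tone(f_c=220.0, f_m=110.0, index=2.0, dur=0.5, sr=44100):
    """Generate an FM-synthesized tone as a list of samples in [-1, 1].

    f_c: carrier frequency (Hz); f_m: modulator frequency (Hz);
    index: modulation index, controlling spectral richness;
    dur: duration in seconds; sr: sample rate in Hz.
    All defaults are illustrative, not taken from the project.
    """
    n = int(dur * sr)
    samples = []
    for i in range(n):
        t = i / sr
        # Chowning FM: the carrier's phase is offset by a sine at f_m,
        # scaled by the modulation index.
        samples.append(math.sin(2 * math.pi * f_c * t
                                + index * math.sin(2 * math.pi * f_m * t)))
    return samples

tone = fm_tone()
```

Raising the modulation index adds sideband partials, which is why a single FM pair can shift from a pure tone toward bell-like or metallic timbres, a useful property when imitating the varied materials of virtual objects.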
On an artistic level, the project explores listening as an active practice. Movement and interaction within the space become compositional gestures in their own right, making the visitor an organic part of this living soundscape.
The visual and interactive parts of the project were developed in the Unity engine, while the sounds were designed in the Faust and ChucK programming languages. The project is part of ongoing PhD research on the relationship between nature and sound mimesis.