This research explores a live coding approach to managing and sonifying data for artistic purposes. The aim is to develop an interactive environment that the user can modify and adapt while the data are being sonified or displayed in real time. We believe that the exploration of scientific data through live interaction with the sound produced from the data can lead to new kinds of insights into the nature of these data. Moreover, embodied interaction with the data, afforded through dance or free, dance-like movement with wearable sensors, encourages intuitive exploration of the data. We are interested in investigating what insights can be gained by comparing techniques developed for driving sound synthesis from the movements of dancers with the generation of sound from streams of scientific data. Our long-term goal is to develop methods for exploring scientific data through dance or dance-like free movement.
Sonification is employed as a technique for the creation of content in purely sonic artworks and in audiovisual Virtual or Augmented Reality environments. It is tangentially related to the field of Auditory Display, although it may be seen as a special or partially separate subfield with scientific and artistic applications. Historically, the ability to navigate through data sets in real time while they are being displayed or sonified has remained underexplored, because of ties with legacy static information graphics. However, the temporal nature of sound, coupled with the development of interactive live coding environments, has created strong interest in the interactive and live aspects of sonification. Alberto de Campo's thesis "Science by Ear: An Interdisciplinary Approach to Sonifying Scientific Data" (2009) and the Sonification Handbook (Hermann et al. 2011) investigate the perceptual mechanisms and conditions for sonification and propose several different implementation approaches. Since then, the field has expanded drastically and many different approaches have been proposed (see references in Kalonaris and Zannos 2021). In parallel, research on musical interaction interfaces is increasingly exploring new approaches stemming from Machine Learning and Machine Listening, such as statistical feature recognition and unsupervised learning techniques. In our research we investigate how these techniques can improve interactivity in sonification, in order to create tools for the exploration of data through sonification controlled by dance. Accordingly, our research sets the following goals:
1. Live exploration of the data space based on auditory feedback and using wearable sensors as an interface.
2. Use of live coding to interact with the sonification and visualisation mechanism at all levels while the data are being displayed in real time.
3. Investigation and adaptation of statistical feature recognition and unsupervised learning techniques in the sonification process.
We base our work on three types of data sets:
1. RefSeq genomes of the SARS-CoV-2 virus from the SARS-CoV-2 Data Hub of the US National Institutes of Health. These contain the genome sequences of virus samples in the form of text strings using the letter symbols for the four types of nitrogenous bases found in nucleotides (A, T, G, C) ("Symbolic Data"); see the sketch after this list.
2. Real-time electromagnetic wave measurements of the solar wind (one sample per minute) from the Space Weather Prediction Center of the National Oceanic and Atmospheric Administration (USA) ("Numeric Data").
3. Motion data from dance experiments with our own interactive sound synthesis system, using wireless wearable accelerometer sensors.
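As a minimal illustration of how the symbolic data can enter the live coding environment, the following SuperCollider sketch reads a genome sequence stored as plain text and exposes it as an endless stream of base characters. The file path and variable names are hypothetical stand-ins for whatever form the data hub export takes.

    // Minimal sketch: load a genome sequence from a plain-text file
    // (hypothetical path) and expose it as a stream of base characters.
    (
    ~genomePath = "~/data/sars-cov-2_refseq.txt".standardizePath;  // hypothetical
    ~file = File(~genomePath, "r");
    ~genome = ~file.readAllString.select({ |ch| "ATGC".includes(ch) });
    ~file.close;
    ~baseStream = Pseq(~genome.as(Array), inf).asStream;  // endless stream of Chars
    ~baseStream.next;  // -> e.g. $A
    )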
For the input of dance movements, we use the Sense/Stage system, consisting of wearable motion-tracking sensors for dancers based on Arduino-programmable microcontrollers with an XBee (ZigBee) mesh network, as well as a self-built Raspberry Pi Zero based system with WiFi, 6-DOF motion sensors, and pressable on-off buttons.
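The sketch below indicates how such sensor input might reach the sound synthesis environment, assuming the WiFi-based units send their readings as OSC messages. The address '/accel' and the message layout (sensor id followed by x, y, z acceleration) are illustrative assumptions, not a description of the actual transmission protocol.

    // Minimal sketch: receive accelerometer readings sent as OSC over WiFi.
    // Address and message layout (id, x, y, z) are illustrative assumptions.
    (
    ~accel = IdentityDictionary.new;   // latest reading per sensor id

    OSCdef(\accelIn, { |msg|
        var id = msg[1], x = msg[2], y = msg[3], z = msg[4];
        ~accel[id] = [x, y, z];
        // overall acceleration magnitude, usable as a single control value
        ~accel[\mag] = (x.squared + y.squared + z.squared).sqrt;
    }, '/accel');
    )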
We explore two families of sonification strategies according to the types of data. For symbolic data we define correspondences to discrete events of different types (pitches, samples, durations, or metric positions in a beat pattern). For numeric data, we use Parameter Mapping (PMSon, see Hermann et al. 2011: 275). As synthesis algorithms for Parameter Mapping we use chaotic sound synthesis algorithms (based on chaotic unit generators or on feedback techniques that generate chaotic behaviour), because these produce a strikingly rich spectrum of sounds that we consider a suitable challenge for exploration through movement. We use the t-SNE algorithm (Moore 2019) to develop multiparametric mappings that reduce the complexity of interacting with these algorithms. After testing these mappings with the dancers, we use the same settings as the source for the Parameter Mapping used to sonify the numeric data. The final objective is to create a framework that enables musicians and researchers to explore different sonification strategies. We explain how this works based on the SuperCollider library of one of the authors.
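The following sketch illustrates both strategies under stated assumptions: a discrete-event mapping that reads the base string loaded earlier (~genome) and assigns each base an arbitrarily chosen scale degree and duration, and a Parameter Mapping onto SuperCollider's LatoocarfianN chaotic generator driven by a single normalised data value on a control bus. The concrete correspondences, parameter ranges, and bus layout are choices made for the example, not fixed settings of the framework; the t-SNE-derived multiparametric mappings are not shown here.

    // 1. Discrete-event mapping for symbolic data (correspondences are illustrative).
    (
    ~basePitch = (A: 0, T: 2, G: 4, C: 7);            // scale degree per base
    ~baseDur   = (A: 0.25, T: 0.25, G: 0.5, C: 0.125); // duration per base

    Pbind(
        \base,   Pseq(~genome.as(Array), inf).collect({ |ch| ch.asString.asSymbol }),
        \degree, Pkey(\base).collect({ |b| ~basePitch[b] }),
        \dur,    Pkey(\base).collect({ |b| ~baseDur[b] })
    ).play;
    )

    // 2. Parameter Mapping (PMSon) onto a chaotic UGen: one normalised value (0..1)
    //    on a control bus drives the iteration rate and the 'a' parameter.
    (
    ~dataBus = Bus.control(s, 1);

    SynthDef(\pmChaos, { |out = 0, dataBus = 0, amp = 0.1|
        var val  = In.kr(dataBus).lag(0.5);            // smoothed incoming value
        var rate = val.linexp(0, 1, 200, 8000);        // iteration frequency in Hz
        var a    = val.linlin(0, 1, 1.0, 3.0);         // chaotic parameter a
        var sig  = LatoocarfianN.ar(rate, a, 3, 0.5, 0.5) * amp;
        Out.ar(out, sig ! 2);
    }).add;
    )

    // e.g. one synth per data stream; write each new (normalised) sample to the bus:
    // x = Synth(\pmChaos, [dataBus: ~dataBus]);
    // ~dataBus.set(0.42);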