7th International Conference

Digital Culture & AudioVisual Challenges

Interdisciplinary Creativity in Arts and Technology

Hybrid - Corfu/Online, May 9-10, 2025

The Box: Chatbots, Emotions and Audiovisual Realities
Date and Time: 09/05/2025 (14:30-15:10)
Location: Ionian Academy
Stavroula Stasinou, Charalampos M. Liapis, Epameinondas Panagopoulos

Consider an audience of individuals, each engaging linguistically with a machine (a laptop). Each participant (hereafter a "user") converses with a chatbot while a multi-label emotion classification system simultaneously analyzes both the chatbot's and the user's utterances. The emotional values extracted from this real-time exchange are mapped to audiovisual outputs, producing a dynamic, abstract composition projected within a specific, closed space. In that space, the user's emotional states are modeled and projected in a way that transforms them into simulations, supplanting the subjective experience of emotion with machine-generated abstractions.
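The per-utterance analysis described above can be sketched as a toy multi-label scorer. The lexicon and label set below are illustrative placeholders only; the installation leaves the actual classifier open, and a trained model would replace this stub.

```python
# Toy multi-label emotion scorer standing in for the installation's
# classifier. Labels and cue words are illustrative assumptions.
LEXICON = {
    "joy": {"glad", "happy", "great", "love"},
    "sadness": {"sad", "miss", "lost", "alone"},
    "fear": {"afraid", "scared", "worried"},
}

def score_emotions(utterance: str) -> dict:
    """Return a score in [0, 1] per label (multi-label: labels are not
    mutually exclusive, so several can be active at once)."""
    tokens = [t.strip(".,!?") for t in utterance.lower().split()]
    scores = {}
    for label, cues in LEXICON.items():
        hits = sum(1 for t in tokens if t in cues)
        scores[label] = min(1.0, hits / 2)  # crude saturation at 2 cue words
    return scores

# Both the user's and the chatbot's turns pass through the same scorer.
print(score_emotions("I am so happy, but I miss home"))
# → {'joy': 0.5, 'sadness': 0.5, 'fear': 0.0}
```

Because the scoring is multi-label, mixed utterances yield several simultaneously active emotions, which is what gives the downstream audiovisual mapping its expressive range.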

Such a framework can be addressed from a variety of aspects. We discuss three: 1) the technical aspect of the pipeline, 2) the immersive aspect of the experience, and 3) the interpretative aspect of the layout as a performative state of affairs.

Regarding the technical aspect, the pipeline consists of an interchangeable, modular layout. First comes the conversational model through which the user interacts; it is built on a large language model architecture, with the specific choice of model left open to experimentation so as to best facilitate dialogue. A secondary machine learning component analyzes the interaction, performing sentiment analysis or emotion classification; again, the specific technique is open to experimentation, allowing flexibility in how emotional nuances are captured. From this analysis, a series of numerical values representing emotional states is extracted. These values are then mapped to both audio and visual outputs, with each module offering distinct possibilities for implementation, from deterministic logic to stochastic AI frameworks. The methods used to convert and control these emotional representations are similarly open to exploration. Ultimately, this flexible pipeline allows the creation of a diverse range of personalized audiovisual experiences for the user.

Moving on to the immersiveness of the interactive installation, the interplay between chatbot and user creates an immersive feedback loop that generates an augmented layer of reality in which the interaction is not merely linguistic but also aesthetic and emotional. By modeling, classifying, and amplifying both the user's emotional state and their engagement with the chatbot, the system constructs an audiovisual augmented reality that reinterprets subjective emotions through algorithmic abstraction. Such a recursive process compels users to confront a machine-mediated representation of their emotional states, potentially prompting reflections on identity and authenticity, as well as on the role and capacity of artificial intelligence in interpreting emotional realities.

Lastly, concerning interpretation, the user's mediated experience (comprising the projected visuals, the algorithmic sounds, and the overall interaction) can introduce various modalities, some of which may unveil a secondary experiential layer, leading to a condition of novel, even hyperreal, layouts. For instance, the user may interpret the interaction as uncovering a deeper, machine-revealed "truth" about their mental and emotional state, reinforcing the illusion of an objective, hyperreal self derived from numerical abstractions.

In conclusion, our framework's immersive experience forms a layout of simulations that have the potential to be perceived not only as real but as more authentic than reality itself. The machine-mediated representations seem to blur the boundaries between subjective emotional realities and their audiovisual depictions. This can be realized through a variety of modules constituting the overall framework, from emotional representation and sentiment analysis to the choice of sonic and visual interpretations and depictions, each forming a context that leads to a specific, unique experiential layout. The project thus introduces a novel framework for integrating real-time emotion classification with interactive AI to generate dynamic audiovisual compositions, creating an immersive, emotion-driven art experience built around a feedback loop of linguistically performed participation.

Stavroula Stasinou

Stavroula Stasinou is a multidisciplinary designer and researcher whose work lies at the intersection of graphic design, creative coding, and immersive media. She holds a BA in Graphic Arts and Visual Studies from the University of West Attica and an MA in Audiovisual Arts from the Ionian University. Currently working as a Creative Director and Graphic Designer at the Institute of Computer Technology and Publications "Diophantus" (CTI), she has over 12 years of experience in EU-funded research and educational projects, including Horizon and Erasmus+, with a focus on visual communication, UX/UI design, and front-end development. Her artistic and academic interests include augmented reality, net art, and interactive installations that explore the emotional and aesthetic potential of machine-human interaction. Stavroula's work combines a strong graphic design sensibility with contemporary technological tools to create hybrid environments that are both visually compelling and intellectually engaging. She actively shares her artistic experiments and conceptual work through her online platforms, exploring themes of identity, digital embodiment, and emotional abstraction.


