5th International Conference

Digital Culture & AudioVisual Challenges

Interdisciplinary Creativity in Arts and Technology

Hybrid - Corfu/Online, May 12-13, 2023

An interactive media artwork addressing AI bias
Date and Time: 12/05/2023 (16:15-17:30)
Location: Online
Caterina Antonopoulou
Keywords: Artificial Intelligence (AI), Personal Intelligent Assistants (PIAs), algorithmic bias, gender stereotypes, interactive media art, critical art

Recent research demonstrates that Artificially Intelligent (AI) systems are neither infallible nor objective. They embed biases, reinforce stereotypes and often reach erroneous or over-simplified conclusions. The impact of AI systems’ biased or erroneous decision-making grows as these systems become involved in increasingly consequential decisions across everyday life. AI systems take over tasks traditionally executed by humans and enter public and personal spaces, such as workplaces, hospitals, banks, law courts, and of course our homes.
This paper investigates AI’s errors and biases through the interactive media art installation Zackie PIA. Zackie introduces itself as a Personal Intelligent Assistant (PIA) designed to assist journalists in writing better reports about the cases they cover, by retrieving and analysing all relevant data. However, Zackie PIA acts provocatively. It aggregates all the fake news about a given case from online sources, such as news portals and social media posts. The PIA’s algorithm stores the collected fake news in a database, and then combines fragments of the aggregated content according to a custom algorithm that functions as an automatic narrative generator. As a result, the final report is fake and self-contradictory. The work reveals the biases of these technologies, normally considered neutral, by demonstrating how an AI’s decisions are influenced by its training data and its embedded algorithm. The design of the device imitates commercial PIAs, which often appear as cylindrical boxes. However, there is a significant difference in the interaction design: instead of responding orally, Zackie PIA prints its responses on paper, creating a tangible and always accessible archive of all reports. The print archive facilitates the tracing of inconsistencies between current and past reports.
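The fragment-recombination principle described above can be illustrated with a minimal sketch. This is not the artist’s actual implementation: the fragment texts, section names, and function are hypothetical stand-ins for the installation’s database of scraped fake-news snippets, chosen only to show how stitching fragments from contradictory sources yields a self-contradictory report.

```python
import random

# Hypothetical fragments standing in for the installation's database of
# fake-news snippets aggregated from news portals and social-media posts.
fragments = {
    "opening": [
        "Eyewitnesses report that the incident began with a robbery.",
        "Sources claim the victim initiated the altercation.",
    ],
    "development": [
        "Police footage allegedly shows a weapon at the scene.",
        "No weapon was ever recovered, officials later admitted.",
    ],
    "closing": [
        "Authorities have confirmed all of the above.",
        "All earlier reports have since been retracted.",
    ],
}

def generate_report(fragments, seed=None):
    """Stitch one randomly chosen fragment from each section into a report.

    Because the fragments originate from contradictory sources, the
    assembled report is self-contradictory by construction.
    """
    rng = random.Random(seed)
    return " ".join(
        rng.choice(fragments[section])
        for section in ("opening", "development", "closing")
    )

print(generate_report(fragments))
```

Running the generator repeatedly produces different, mutually inconsistent accounts of the same case, mirroring the constantly changing reports the installation prints and archives on paper.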
Additionally, the installation reflects on the ways that anthropomorphized AI devices, such as PIAs and chatbots, reinforce gender stereotypes by reproducing normative assumptions of gender and femininity. Most of these anthropomorphic AI systems are portrayed as gendered entities. They are endowed with feminine names, such as Alexa, Eliza, Cortana, Nina, Silvia or Denise, and display feminine attributes and traits, such as female voices, pronouns and (in some cases) avatars. They are also designed and programmed to perform tasks traditionally classified as feminine labour, such as service or clerical work, frequently operating as secretaries, home assistants or caregivers. Thus, these anthropomorphic intelligent systems reproduce stereotypical and gendered behaviour patterns.

To take a critical look at this issue, Zackie PIA was given the name of a queer person. It was named after Zak/Zackie Kostopoulos, an LGBTQIA+ activist and queer artist who performed as the drag queen Zackie Oh. Kostopoulos was murdered in 2018 in broad daylight in the centre of Athens. The crime was followed by an unprecedented dissemination of fake, contradictory, and constantly changing news reports via mainstream and social media. The murder of Zak Kostopoulos was used as a case study to ‘train’ the algorithm of Zackie PIA.
This paper summarizes major findings from recent research on AI bias and errors, and presents media art projects addressing biases in anthropomorphic AI systems. Furthermore, it introduces the project Zackie PIA, describing the concept of the installation, its technical implementation, the artist's design decisions, and key observations on viewers' interaction with the artwork. Finally, it argues that AI is neither objective nor infallible, as it relies on biased training datasets and embedded algorithms, and that media art can contribute to identifying AI bias and foster critical thinking around the social implications of biased AI systems.

Caterina Antonopoulou

Caterina Antonopoulou is a media artist, engineer and researcher. She is currently an adjunct lecturer of media art in the department of Digital Arts and Cinema of the University of Athens. Caterina holds a PhD in Digital Art from the University of the Aegean, a Master’s in Digital Arts from the Pompeu Fabra University of Barcelona and a diploma in Computer Engineering from the National Technical University of Athens. https://peqpez.net/

