When the development of modern media and that of computers crossed paths, according to Lev Manovich, a new form of media was born, described by the term “new media” and characterised by multimodal narratives, in which multiple simultaneous information flows are used and require the user’s engagement. With the evolution of digital technologies, both in hardware and human-computer interaction interfaces and in software and multimedia content, a branch of “new media” has taken on the status of a reality of its own, which we refer to as “extended reality”.
Extended Reality (XR) is an umbrella term that encompasses virtual, augmented, and mixed reality, as well as all future realities such technology might bring. The term covers the full spectrum from the real to the virtual in the reality-virtuality continuum introduced by P. Milgram and F. Kishino back in 1994. XR is a new, computer-mediated hybrid reality that we experience by participating in a multimodal experience comprising narrative audiovisual content.
Moreover, the narrative audiovisual content developed by and for XR technologies needs to engage the viewer and address these multimodal narrative needs.
This paper aims to explore AI, AR, location-based narratives, and user participation as means for the creation of impactful XR experiences.
More specifically, the paper will study and present the use of:
1. AR tools in interactive film platforms and television,
2. AI in cinematic narratives,
3. Interactive narrative as a means of increasing empathy and engagement, and
4. User participation, in both virtual and physical space, in new forms of performative arts “extended” with digital technologies.
In the first chapter, multimodal narrative needs will be investigated through AR/MR technologies. In particular, it will be examined how these technologies can be utilised in documentary films and interactive film platforms in order to increase viewer engagement and create an experiential model of viewing. Additionally, these technologies will be researched through an example scene from a TV series script, and the narrative as well as technological needs in this sector will be examined with a view to creating an interactive script in which the viewer is actively engaged.
At this point it is also important to analyse how the evolution of these technologies introduces new ways and channels of distribution of audiovisual content. For example, in 2021 Facebook announced the creation of its “metaverse”, a new digital three-dimensional universe through which users will be able to come into contact with one another in a different way and which will give them the ability to “merge” the digital with the physical world in all aspects of everyday life, including their interaction with several forms of art and audiovisual content. This opens up new avenues for the presentation and distribution of audiovisual content, making the experience even more immersive and engaging through AR/MR technologies. Consequently, a new kind of content, suitable for distribution on these platforms, has to be created.
In the second chapter, concerning AI in cinema, the aim will be to investigate the multimodal narrative needs of new-media AR/MR technologies. In particular, the way AI tools can be useful to film and theatre acting and arouse the viewer’s engagement will be examined.
In the third chapter, the researchers will aim to assess the impact of interactive narratives in VR experiences on the user/player, examining interactive storytelling as a tool to immerse the audience, create empathy, and aid in processing impactful experiences.
Finally, yet importantly, the fourth chapter will attempt an overview of the use of AI tools, such as algorithms, face and voice recognition systems, and text-to-speech technologies, in the performative arts. The user of these AI elements can be the artist (director, actor, dancer, etc.) but also the spectator. Generally, the purpose of these experimental cases is to facilitate the production process and to involve the audience. In a more general context, the chapter will examine how the use of AI, AR tools, and new forms of narrative creates a new environment for digital, hybrid multimedia performances, leading to mixed-reality art projects in which spectators participate with their physical presence in both the real and the virtual space. The user’s/spectator’s experience when participating in a performative form of art that is location-based and/or uses new media will be analysed through specific case studies.
In conclusion, the goal is to combine this research in order to reach a conclusion regarding the use of AI, AR, location-based narratives, and user participation for the creation of an engaging XR experience for the user/spectator.
Argyro Papathanasiou. BASc in Music Technology and Acoustics Engineering (TEI of Crete). M.A. in Art & VR (ASFA & Paris 8). Currently she is a PhD candidate at the School of Film (AUTh), focusing on XR systems in documentary films and location-based narratives. Her individual and team work has been presented at several conferences and festivals (incl. IEEE VR, International Conference FOE, etc.). She is co-founder and Managing Director of ViRA (Greece). Since 2019 she has collaborated with Pausilypon Films and Documatism as a special adviser on interactive film platform design, and with the Studio Bahia organisation in the US as an XR designer.
Christina Chrysanthopoulou. M.Sc. in Architecture Engineering from the NTUA and an M.A. in “Art and Virtual Reality”, a collaboration between the ASFA and Paris 8 University. Currently she is a PhD candidate at the School of Film Studies of the AUTh. She has several publications in conferences such as Hybrid City II in Athens, IEEE VR 2015 in Arles, and VS-Games Barcelona 2017, and has participated in various exhibitions and festivals, such as the Ars Electronica Festival (Linz, 2015), the A.MAZE festival (Berlin, 2018), ADAF 2020, and the Ars Electronica Garden Athens 2020.