"The networked body is a convergence of physical existence and digital interconnectedness, embodying the seamless integration of technology into our daily lives. In this symbiotic relationship, individuals navigate both the tangible world and the vast realm of cyberspace. Our bodies, once confined to the constraints of physical space, now extend their presence through online interactions, social media narratives, and virtual experiences. The networked body raises questions about identity, privacy, and the evolving nature of human connection, as our digital footprints become an integral part of our overall existence. As we navigate this interconnected landscape, the boundaries between the corporeal and the virtual blur, giving rise to a new paradigm where our essence is worked into the intricate fabric of the networked society".
ChatGPT, 27 December 2023
in response to the prompt "What is the networked body? Describe it in one paragraph."
If network culture conceptualises people and things as interconnected nodes, perhaps its ultimate realisation is found in the deep learning neural networks underpinning the most advanced AI models today. The very term "neural network" is stolen from biological concepts of the human body, where billions of chemically connected neurons in the human nervous system work together to produce everything from our most basic functions to our most advanced cognitive abilities. Artificial neural networks model the connections between biological neurons as algorithmic weights between nodes, allowing the build-up of complex pattern-recognition systems that can now seemingly replicate our bodily functions – seeing, hearing, speaking, reasoning – and even extend them beyond human capability.
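The weighted-connection principle described above can be sketched in a few lines of code. This is a minimal illustration, not any particular model's architecture: each "node" sums its inputs through weights (the algorithmic stand-in for synaptic connection strengths) and passes the result through a nonlinearity, and stacking such layers yields the pattern-recognition capacity at issue. All weights here are random and purely illustrative.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity: a node "fires" only on positive input.
    return np.maximum(0.0, x)

def layer(inputs, weights, biases):
    # Each output node is a weighted sum of every input node plus a bias,
    # loosely mirroring how a biological neuron integrates signals
    # from many connected neurons.
    return relu(weights @ inputs + biases)

rng = np.random.default_rng(42)
x = rng.normal(size=4)                           # four "sensory" inputs
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer: 8 nodes
w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)    # output layer: 2 nodes

hidden = layer(x, w1, b1)
output = layer(hidden, w2, b2)
print(output.shape)  # (2,)
```

Real models differ from this sketch mainly in scale (billions of weights rather than dozens) and in the fact that their weights are learned from training data rather than drawn at random.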
While there are many types of AI, and indeed many different types of machine learning algorithms, complex deep learning neural networks trained on Big Data have been the instigators of the advanced models that come to mind when talking about AI today – from the Large Language Models that underpin ChatGPT ("ChatGPT" 2023) and Google’s Bard ("Bard" 2023), to image- and video-generation AIs such as Midjourney ("Midjourney" 2023), DALL-E ("DALL-E-3" 2023), Stable Diffusion ("Stability.AI" 2023) and Runway Gen-2 ("Gen-2: The Next Step Forward to Generative AI" 2023), through to music and sound generation AIs such as Google’s MusicLM (Agostinelli et al. 2023) and Meta’s MusicGen ("MusicGen: Simple and Controllable Music Generation" 2023) and AudioGen ("AudioGen: Textually-Guided Audio Generation" 2023). In the field of artistic production, it is generative AI that has attracted the fiercest criticism. Trained on the cultural and artistic output of human creators scraped from social media and the internet, most often without their consent, generative AI is the ultimate example of remix – reappropriation aggregated by probabilities and served up as synthetic creativity.
Beyond concerns about cultural capital and copyright protection, generative AI’s intrinsic nature as an aggregator of data has the potential to exacerbate the homogenisation of culture, amplifying what is already amplified and risking a recolonisation of minority cultures and practices. As AI models fill the internet with bland and generic content (McMillan 2023), and then train on their own synthetic outputs in a cycle of inbreeding, this echo chamber effect becomes compounded and may eventually lead to model collapse (Acar 2023; Morris 2023).
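This inbreeding cycle can be made concrete with a toy simulation. The following sketch is a deliberately simplified analogue (not the method of the cited studies): the "model" is just a Gaussian fitted by mean and variance, repeatedly re-estimated from samples drawn from its own previous fit. Because each generation is fitted to a finite synthetic sample, diversity – here, variance – decays over the generations, the statistical skeleton of model collapse.

```python
import numpy as np

# Toy model-collapse loop: refit a Gaussian to samples drawn from its
# own previous fit. Finite-sample re-estimation makes the variance
# (a crude proxy for cultural diversity) collapse over generations.
rng = np.random.default_rng(0)
n_samples, n_generations = 50, 1000

mean, var = 0.0, 1.0          # generation 0: the "real" distribution
history = [var]
for _ in range(n_generations):
    synthetic = rng.normal(mean, np.sqrt(var), size=n_samples)
    mean, var = synthetic.mean(), synthetic.var()   # refit on own output
    history.append(var)

print(f"variance: generation 0 = {history[0]:.3f}, "
      f"generation {n_generations} = {history[-1]:.2e}")
```

Each pass through the loop amplifies what the previous fit already emphasised and discards the tails, so the distribution narrows until little of the original variety remains.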
Echo Chamber is a work that speaks to these concerns, allowing visitors to the Grainger Museum to generate music based on their own musical input on an acoustic piano. As with Scrape Elegy’s delegated performance, we used participatory practice so that visitors could discover for themselves the problematic creative production of generative AI. Using a 16-channel speaker installation as a physical metaphor for the public sphere, participants experienced an increasingly intense chorus of simulacra echoing and amplifying their musical input.
Aside from generative AI, another cause of ethical concern and techno-dystopian anxiety is the use of AI for surveillance. Guài is a participatory work that problematised the deployment of machine learning for biometric facial analysis through myth, virtual metaphor and interactive sound. Employing biometric analysis to infer emotional and character traits from physical facial markers, we assigned each participant to one of eight virtual avatars, which they then embodied through an augmented reality (AR) mirror. Movement-responsive sound helped to create an immersive, game-like environment where participants could "become" their avatar.