Designing Knowledge Futures: Investigating the impact of Generative AI on the future of Knowledge Work.

Lead Research Organisation: University of Edinburgh
Department Name: College of Arts, Humanities & Social Sci

Abstract

The subject of Generative AI (GAI) and its impact on knowledge work is a fascinating and incredibly timely one, and of considerable interest to me.

GAI is already profoundly impacting and augmenting knowledge work. Where earlier waves of AI tools affected the areas of human labour concerned with routine tasks, we are beginning to see how new technical developments in model architectures, such as Generative Pre-trained Transformers (GPTs) and Diffusion Models, and in user experience design, such as dialogic (ChatGPT, Bard, etc.) and text-prompt-based interfaces (DALL-E, Midjourney, etc.), are profoundly impacting non-routine problem-solving, cognitive, and creative labour.

Perhaps its most immediate and visible impact is in augmenting procedural tasks, such as the production of technical or administrative text. However, AI is also impacting both convergent and divergent thinking, by offering a convenient supply of potential solutions to problems, as well as aiding in the evaluation and optimisation of these solutions based on particular criteria.

With the advent of tools to generate novel images, music, and creative text from simple text prompts, AI is dramatically influencing creative work. For those creative practitioners who work dynamically with emerging technology, these tools present exciting new developments in the creative process. However, for those practices characterised by a more static relationship to production technologies, these tools appear as threats to their economic sustainability.

There are two lines of inquiry I am particularly interested in exploring. The first is applying the formal methods of Research through Design, and those exhibited through my practice to date, towards the evaluation of technical proposals intended to ameliorate some of the negative impacts that GAI might have on creative practitioners: for example, systems that identify GAI-produced media (e.g., AI watermarking technology) or that automatically assign attribution to samples in training datasets based on computed visual similarity scores.

Second, I'm interested in how we understand and produce novelty in the context of knowledge and creative work, and how AI systems impede or augment this.

I have previously explored these issues through a multi-channel video work entitled "Future False Positive". The work attempts to illustrate the implications of reduced human involvement in interactions guided by machine learning systems, as well as the intrinsic historical bias exhibited by such systems - a condition that Berardi (2009) and Fisher (2014) describe as the "slow cancellation of the future" and that Pasquinelli (2020) calls the "dictatorship of the past". The film was produced using a Next Frame Prediction algorithm trained on a multi-camera dataset of self-driving car footage. The output was then passed through a self-driving car object detection algorithm, layering analysis on top of the video. The final result was a composition of semi-coherent visual forms drawn from the training data, annotated with bounding boxes displaying the confidence scores of the model. By chaining together prediction and classification machine learning algorithms - that is, models that attempt to produce and models that attempt to understand - the work visualises the effect of removing humans from the loop.
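The chained pipeline described above can be sketched in schematic form as follows. This is a minimal illustration only: `predict_next_frame` and `detect_objects` are hypothetical stand-ins for the trained Next Frame Prediction and object detection models (a real implementation would load the actual networks), and frames are represented as plain NumPy arrays.

```python
import numpy as np

# Hypothetical stand-in for the Next Frame Prediction model: given the
# recent frames, produce the next one. A naive linear extrapolation
# replaces the trained network here.
def predict_next_frame(frames: list[np.ndarray]) -> np.ndarray:
    return np.clip(2.0 * frames[-1] - frames[-2], 0.0, 1.0)

# Hypothetical stand-in for the object detection model: returns
# (label, confidence, bounding box) tuples for a given frame.
def detect_objects(frame: np.ndarray) -> list[tuple]:
    h, w = frame.shape[:2]
    return [("car", float(frame.mean()), (0, 0, w // 2, h // 2))]

def run_pipeline(seed_frames: list[np.ndarray], n_steps: int):
    """Chain prediction and classification: each generated frame is
    immediately re-interpreted by the detector, no human in the loop."""
    frames, annotations = list(seed_frames), []
    for _ in range(n_steps):
        nxt = predict_next_frame(frames)         # model that "produces"
        annotations.append(detect_objects(nxt))  # model that "understands"
        frames.append(nxt)
    return frames[len(seed_frames):], annotations

generated, detections = run_pipeline(
    [np.zeros((8, 8)), np.full((8, 8), 0.5)], n_steps=3
)
```

Each generated frame is consumed only by the next model in the chain, so the loop runs entirely between machines; the human role is reduced to observing the annotated output.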
