Designing Usable Privacy Solutions for Conversational AI Interfaces (CAIs)
Lead Research Organisation: UNIVERSITY COLLEGE LONDON
Department Name: Computer Science
Abstract
The aim of this PhD project is to explore the design space of user-centred tools that support
information privacy in conversational AIs. The past two years have seen significant strides in the
capabilities of Large Language Models (LLMs), leading to widespread deployment in downstream
apps such as chatbots and document preparation tools [6, 16]. However, the productionising of
deep learning models trained on user data introduces a suite of privacy concerns. These
include a lack of transparency and informed consent from data subjects [17], technical difficulties in
data erasure [18], and security vulnerabilities that threaten user privacy and safety [2, 3, 12]. While
research within AI/ML is increasingly attending to privacy and security issues in LLMs, this work
remains mechanistic in its focus, e.g., simulating cyberattacks on model architecture [10, 5, 15, 7].
Few studies investigate privacy in deployed AI apps from a usability or human factors perspective,
even though strong transparency and user control are crucial to responsible innovation in AI [11, 19, 1].
Currently, the industry standard for transparency and user control is a privacy policy and a settings screen with a pre-defined set of privacy options. The limitations of these approaches have
been well-documented in past work; privacy policies are rarely read [14], and privacy features are
often 'hidden' within interfaces [4] or suffer from usability flaws [19]. Even newer solutions often lack explicit consideration of users' needs and contexts. For instance, tools that automatically anonymise
chatbot prompts are not tailored to the task being performed, or do not factor in users' preferences
for disclosure [13, 8]. As such, there is a need for usable privacy tools which integrate better with
conversational interfaces and adapt to users' needs. This project will apply user-centred design
methodologies to guide the development of usable tools that enhance the privacy of CAI users, and
better integrate with their existing work practices.
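To make the limitation concrete, the sketch below illustrates the kind of context-unaware prompt anonymiser the abstract refers to. It is a minimal, hypothetical Python redactor; the patterns, placeholder labels, and example prompt are illustrative assumptions, not the behaviour of any specific tool cited in [13, 8]. Because it rewrites every match uniformly, it strips details that the user's task may actually require them to disclose.

```python
import re

# Hypothetical, minimal patterns standing in for a context-unaware PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}


def redact(prompt: str) -> str:
    """Replace every detected match with a generic placeholder.

    The rewrite is applied uniformly: it ignores what task the prompt is
    for and whether the user is happy to disclose this particular detail.
    """
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt


if __name__ == "__main__":
    # The email and phone number are stripped even though the user's task
    # (drafting a signature block) requires them to appear in the output.
    print(redact("Draft an email signature for jane.doe@example.org, phone +44 20 7946 0958."))
    # -> "Draft an email signature for [EMAIL], phone [PHONE]."
```

A user-centred alternative would condition redaction on the task at hand and on per-user disclosure preferences, which is precisely the design space this project targets.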
People
| Name | ORCID iD |
|---|---|
| Lisa Malki (Student) | |
Studentship Projects
| Project Reference | Relationship | Related To | Start | End | Student Name |
|---|---|---|---|---|---|
| EP/S022503/1 | | | 31/03/2019 | 23/11/2028 | |
| 2873742 | Studentship | EP/S022503/1 | 30/09/2023 | 31/03/2028 | Lisa Malki |