Understanding Design Features of Family Apps and Design Choices made by Family App Developers

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

Introduction and Background
Children have established a significant presence online through mobile devices, leading to an increase in the number of apps designed for children. Mobile apps can provide educational value for children across the globe, giving developers the opportunity to make a positive contribution. However, data monetisation remains the main source of income for developers in this space. Targeted ads and game promotions have become the norm in freemium apps, including those used by children. Children not only find them annoying and a waste of time; they are also often nudged into choices that reduce their privacy and leave them more vulnerable to data tracking.

New initiatives have been set up to improve children's data protection and online safety. For example, the ICO in the UK is introducing children-specific regulations, and the Federal Trade Commission in the US is calling for responses to its review of the Children's Online Privacy Protection Act (COPPA). However, to create effective regulations and incentivise developers to adopt age-appropriate design guidelines, we need (1) to understand why responsible design choices are difficult for developers, (2) a better understanding of the harms that design features in children's apps may cause, and (3) tools for practitioners to effectively assess the age appropriateness of children's apps.

Contribution
With this study, we make a twofold contribution.

First, current work on children's safety in the app ecosystem has primarily focused on parental control apps. We understand the features embedded in apps to protect children online, but we lack a comprehensive understanding of the role developers play. Our study fills this knowledge gap by performing a value-sensitive investigation into the design choices made by developers.

Second, work on persuasive and dark patterns in children's apps has focused on the technical constructs of these features rather than the consequences they have for children. As a result, the less tangible and concrete dangers in children's apps remain unexplored. We take an interdisciplinary approach, drawing on psychology and privacy theories to develop tools for assessing and understanding the harms produced by particular features.

Aims and Objectives
By the end of this study, we will:
- Understand the design choices made by family app developers, in terms of their personal beliefs, values, and current technical understanding of the app marketplace.
- Understand the design features commonly used in children's apps, and whether they serve to empower or to harm children's privacy rights.
- Have developed a tool for assessing the age appropriateness of children's apps, in the context of privacy and psychological harm.
- Deploy our tool on popular children's apps to gain a better understanding of the current privacy ecosystem in the Google Play Store.

Methods at a Glance
Understanding value and design choices
To gain a better understanding of the design choices made by developers, we will conduct 20 interviews with Android app developers whose apps are listed in the 'Family' category on the Google Play Store. We will ask about their security practices and their values regarding privacy and children.

Understanding and assessing harms of family apps
We will extract features from the 100 most popular children's apps and map them against known psychological and privacy theories. Based on this, we will construct a framework for assessing harms in apps outside our initial corpus. We will run a focus group with practitioners to refine the framework before deploying it on a new corpus of apps.

Alignment to EPSRC's Strategies and Research Areas
This project falls within the EPSRC Information and Communication Technologies (ICT) research area.

Planned Impact

It is part of the nature of Cyber Security - and a key reason for the urgency in developing new research approaches - that it now is a concern of every section of society, and so the successful CDT will have a very broad impact indeed. We will ensure impact for:

* The IT industry; vendors of hardware and software, and within this the IT Security industry;

* High value/high assurance sectors such as banking, bio-medical domains, and critical infrastructure, and more generally the CISO community across many industries;

* The mobile systems community, mobile service providers, handset and platform manufacturers, those developing the technologies of the internet of things, and smart cities;

* The defence sector, MoD/DSTL in particular, defence contractors, and the intelligence community;

* The public sector more generally, in its own activities and in increasingly important electronic engagement with the citizen;

* The not-for-profit sector, education, charities, and NGOs - many of whom work in highly contended contexts, but do not always have access to high-grade cyber defensive skills.

Impact in each of these will be achieved in fresh elaborations of threat and risk models; by developing new fundamental design approaches; through new methods of evaluation, incorporating usability criteria, privacy, and other societal concerns; and by developing prototype and proof-of-concept solutions exhibiting these characteristics. These impacts will retain focus through the way that the educational and research programme is structured - so that the academic and theoretical components are directed towards practical and anticipated problems motivated by the sectors listed here.

Publications
