Ethics and Machine Learning: Adolescent Suicide Prediction on Social Media

Lead Research Organisation: University of Oxford
Department Name: Psychiatry

Abstract

Suicide is a leading cause of death for British adolescents aged 10-24 (Public Health England, 2018). Identifying adolescents at increased risk of suicide is a major challenge for clinicians because traditional assessment instruments (e.g. the Scale for Suicide Ideation and Beck's Suicide Intent Scale) have low positive predictive value (PPV; Runeson et al., 2017).
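To see why low PPV matters here, note that PPV depends not only on an instrument's sensitivity and specificity but also on the prevalence of the outcome. The worked example below uses purely illustrative figures (a hypothetical instrument with 80% sensitivity and 90% specificity, and an assumed risk prevalence of 0.5%), not values drawn from Runeson et al. (2017):

    % PPV in terms of sensitivity (sens), specificity (spec), and prevalence (prev);
    % the numerical values are hypothetical, for illustration only.
    \[
    \mathrm{PPV}
      = \frac{\mathrm{sens} \cdot \mathrm{prev}}
             {\mathrm{sens} \cdot \mathrm{prev} + (1 - \mathrm{spec})(1 - \mathrm{prev})}
      = \frac{0.80 \times 0.005}{0.80 \times 0.005 + 0.10 \times 0.995}
      \approx 0.039
    \]

Under these assumed figures, roughly 96% of individuals flagged by the instrument would be false positives. Because suicide is a rare outcome, even apparently accurate instruments can have low PPV, which is why PPV, rather than overall accuracy, is the clinically relevant measure.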
The rapid growth of social media, combined with its increasing integration into the daily lives of adolescents, presents researchers with new opportunities for predicting suicide risk through new sources of behavioural data (e.g. McClellan, Ali, Mutter, Kroutil, & Landwehr, 2017; Torous et al., 2018). This process is enabled by Machine Learning Algorithms (MLAs), which use inductive methods to discover patterns in social media datasets (McClellan et al., 2017); a simplified, hypothetical sketch of this approach follows below. These methods are used by universities, hospitals, and social media companies. In the United States, Facebook uses MLAs to scan accounts for people at risk of suicide (Kaste, 2018). Although the algorithm is undisclosed, when it deems a person to be at "imminent risk" of suicide, Facebook alerts local emergency responders, who may in turn visit the person for a wellness check. In 2017, Facebook reportedly alerted local emergency responders over 3,500 times (Kaste, 2018).
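For concreteness, the following is a minimal, hypothetical sketch of the kind of supervised text classifier such risk-prediction systems could employ. It is not Facebook's undisclosed algorithm or any published clinical model; the placeholder posts and labels are invented solely for illustration, and a real system would require far larger datasets, rigorous validation, and ethical oversight.

    # Hypothetical sketch: a supervised text classifier for risk prediction.
    # Not a real clinical model; the data below are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder training data: social media posts paired with labels
    # (1 = flagged as higher risk, 0 = not flagged) assigned by clinicians.
    posts = [
        "I can't see a way forward anymore",
        "Had a great day at the park with friends",
        "Nothing matters and I want it all to stop",
        "Excited about the football match this weekend",
    ]
    labels = [1, 0, 1, 0]

    # Pipeline: convert text to TF-IDF features, then fit a linear classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, labels)

    # Estimate a risk probability for a new, unseen post.
    prob = model.predict_proba(["I feel so alone lately"])[0][1]
    print(f"Estimated probability of the 'at risk' class: {prob:.2f}")

In practice, a threshold on this probability determines who is flagged, and the PPV calculation above shows why, for a rare outcome, the choice of threshold largely determines how many flagged individuals are true positives.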
Ethicists have studied the concerns that may arise when adults are deemed "at risk" of suicide by algorithms; however, this work remains theoretical (Gomes de Andrade, Pawson, Muriello, Donahue, & Guadagno, 2018; McKernan, Clayton, & Walsh, 2018; Tucker, Tackett, Glickman, & Reger, 2018). In addition, there have been no comparable discussions or empirical studies focusing on the impact of this practice on adolescents. Using MLAs with social media data may raise distinct ethical concerns for adolescents, who spend more time on social media and use a greater variety of platforms than adults (Smith & Anderson, 2018). Moreover, suicide risk and attempts differ in form and fatality between adolescents and adults (Parellada et al., 2008). Legislation on consent to medical practices is a further source of age-related differences: under English law, young people under the age of 16 are not automatically entitled to consent to medical practices and can give independent consent only in proportion to their competence (Gillick v. West Norfolk and Wisbech AHA, 1985). As such, a legal guardian may be making decisions on their child's behalf.

Considering the range of ethical issues within this context is fundamental to minimising the potential for harm and maximising any benefits of interventions for adolescents. The aim of my DPhil within the Department of Psychiatry is therefore to study the ethical questions that may arise when adolescents are targeted by algorithm-based suicide predictions and interventions. Most of the ethics literature on suicide prediction with adults takes a principle-driven approach built on three overarching bioethical principles: autonomy, beneficence, and privacy (Gomes de Andrade et al., 2018; McKernan et al., 2018; Tucker et al., 2018). In the first phase of my DPhil, I will create a bioethics framework founded on these principles for the use of MLAs in adolescent suicide prediction on social media.
