The Deterrence of Deception in Socio-Technical Systems

Lead Research Organisation: University of Cambridge
Department Name: Computer Science and Technology

Abstract

We have assembled a team of computer scientists and psychologists to conduct breakthrough research on deception. This is not just the basic problem at the heart of cyber-crime; it is central to human behaviour. Deception is the flip side of cooperation: as our ancestors evolved to cooperate in larger groups, we also evolved the ability to deceive, and to detect deception in others. This includes the ability to deceive ourselves, which in turn helps us deceive others.

The move of business and social life online is changing deception in many ways. Some of these changes are essentially mechanical: on the one hand it's easy for crooks to create good copies of bank websites, while on the other hand companies can collect and analyse ever more data to detect fraud. Other changes affect our behaviour; for example, as transactions become more impersonal, the inhibitions against cheating weaken. It feels less wrong to defraud a website than to defraud a person, and as more commerce goes online, fraud is rising. There is therefore a strong practical reason to find ways of making transactions feel more personal again, and this is one aspect of our investigation.

But there are much bigger and deeper issues. Existing deception research has almost all dealt with the static case of whether an experimental subject could tell when another person was lying. The answer is "usually not": we are not good at detecting lies in the laboratory, where the stakes are low and there is no interaction. We will move this to a new level by studying how we can deter deception in interactive contexts. We have a long list of ideas to test, by building a framework in which people play games with players who are mechanical, anonymous, partly identifiable or socially connected, and where players can cheat but not punish, punish but not cheat, or both. We will compare conditions where "cheating" subjects believe they are breaking social norms with conditions where deception is socially acceptable, such as bluffing in online poker. We will conduct experiments in online interviewing to explore the circumstances in which subjects can detect deception interactively. Another thread will explore the extent to which surveillance, whether by humans, by software or both, can act as a deterrent to cheating. Finally, we will investigate how all this relates to the perception of privacy, so that we can better understand which forms of fraud surveillance are acceptable as well as effective.

The aim of our research is not merely to come up with better mechanisms and design principles for deterring deception at e-commerce websites and in social networks. It is to deepen our fundamental understanding of how deception works, by exploring the different perspectives that online interaction creates and allows us to manipulate. This will, at the level of basic science, help us understand better what it means to be human, and the potential for our humanity to be expressed, realised and developed in the complex socio-technical systems on which we are all coming to depend.

Planned Impact

In addition to our immediate professional circle of researchers in security and deception, the beneficiaries of this project will include the following wider groups.

First, academics studying psychology and behavioural economics are keenly interested in the cognitive basis of decision-making in the presence of risk and in the context of adversarial behaviour. We hope that a systematic study of deception deterrence online will contribute important insights into what makes us human, given the centrality of deception in human behaviour and the fact that deception is changing as life moves online. If we do win such insights, the benefits will spread beyond academia to the popular understanding of psychology, of behaviour and indeed of ourselves.

Second, firms conducting retail business online suffer fraud losses of typically 0.3% of sales, and refuse a further 4% of offered sales because of alarms from their fraud-detection systems. We believe that better design for deception deterrence can reduce 'friendly fraud', that is, fraud conducted by relatives, work colleagues and so on, by perhaps a third; friendly fraud is thought to account for about a third of the total. This holds out the prospect of reducing fraud by about a tenth, or 0.03% of online turnover, which would amount to several tens of millions of pounds a year for the UK and high hundreds of millions worldwide. Further gains may be in prospect if fraud-detection systems can be tuned to have a better ROC curve, so that the loss due to sales foregone is reduced. Yet further fraud savings may be possible in electronic banking systems.
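
As a rough illustration of the arithmetic behind this estimate, the following minimal sketch works through the planning assumptions stated above; the percentages are those assumptions, not measured figures.

    # Sketch of the fraud-saving arithmetic above; the figures are the
    # planning assumptions stated in the text, not measured data.
    fraud_rate = 0.003       # fraud losses as a fraction of online sales (0.3%)
    friendly_share = 1 / 3   # assumed share of fraud that is 'friendly fraud'
    reduction = 1 / 3        # assumed reduction achievable through better design

    saving = fraud_rate * friendly_share * reduction
    print(f"Saving as a fraction of turnover: {saving:.4%}")
    # about 0.033% of turnover, i.e. roughly a tenth of all fraud losses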

Third, the insurance industry has a keen interest in reducing fraudulent claims, which our project will also tackle directly. The figures are not as precisely known in this sector, as the ground truth is harder to measure; however, we believe that insurance fraud might be reduced by broadly the same order of magnitude, namely tens of millions of pounds a year for the UK.

Fourth, the public sector should benefit from having tested methods available for using behavioural techniques to reduce tax and welfare fraud. In discussions with DWP it has become clear that they do not have the IT resource to conduct their own experiments but are keen to adopt techniques developed elsewhere.

Finally, policymakers will benefit from having a clearer understanding of the costs of cybercrime and of the behavioural pathways available to mitigate it. It is a policy goal that the UK become the best place in Europe for e-business, yet we are some way behind: the most recent Eurostat survey put us second last behind Latvia. If we can acquire world-leading expertise in deterring deception, this may go a significant way towards catching up.
 
Description We made huge progress in the first year of the project, with two big studies on insurance fraud, one on accommodation scams and one on browser warnings. We set up deception labs using auctions (UCL) and online games (Newcastle). We are investigating approaches to deterring deception with studies ranging from fairness through altruistic punishment to gender roles.
We have done, and are in the process of publishing, considerable work on how deception can be deterred in environments ranging from online auctions to online poker, and have pioneered methodologies for others to use.
We organised Decepticon, the world's first conference on deception; previously, research on this topic was scattered across five or six main venues. Most of the world's top deception researchers joined the scientific committee, and the conference drew 150 attendees, including most of the leading people in the field, with unanimous agreement to make Decepticon a regular event. The second instance was held at Stanford in 2017, and we are discussing a third, probably in 2020, so that it runs in alternate years to SARMAC.
We also built on previously published work on detecting deception using body motion-capture suits, extending it to detect deception using radar and Kinect sensors to measure how much a subject fidgets. This work was covered in the national and international press, and was well received at Decepticon.
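
The published papers describe the actual analysis pipeline; purely as an illustrative sketch (the function and the simple displacement-based score below are our own simplification, not the project's method), scoring how much a subject fidgets from skeleton-tracking data could look like this:

    import numpy as np

    def fidget_score(joint_positions: np.ndarray) -> float:
        """Total body movement from a (frames, joints, 3) array of tracked joint
        coordinates, e.g. from a motion-capture suit or a Kinect skeleton stream.
        Higher scores mean the subject moved (fidgeted) more during the recording."""
        # Euclidean displacement of every joint between consecutive frames...
        displacements = np.linalg.norm(np.diff(joint_positions, axis=0), axis=2)
        # ...summed over all joints and frames to give a single movement total.
        return float(displacements.sum())

    # Example with synthetic data: 300 frames of 25 Kinect joints in 3-D.
    rng = np.random.default_rng(0)
    motion = rng.normal(scale=0.01, size=(300, 25, 3)).cumsum(axis=0)
    print(fidget_score(motion))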
One of the Co-Is, Jeff Yan, was appointed full professor at Linköping University.
Exploitation Route Firms can reduce fraud by treating customers fairly, and by designing websites so that this is evident. We have been talking with Google and other major web-service firms. We briefed top UK and European e-commerce firms at an Experian workshop, and banks at the Economist's European Retail Banking event. One of the postdocs on the project has been employed by TNO in the Netherlands, who want to develop the motion-capture deception-detection technology for government use; another postdoc is interviewing with Amazon.
Sectors Creative Economy

Digital/Communication/Information Technologies (including Software)

Electronics

Financial Services and Management Consultancy

Leisure Activities, including Sports, Recreation and Tourism

Government, Democracy and Justice

Retail

URL http://research.deception.org.uk/
 
Description One of our postdocs (van der Zee) has been hired by a Dutch government lab, TNO, to translate her work for use by the Dutch security services. In 2017 she got a faculty job in Rotterdam, and is now (2021) part-time there and part-time with the Dutch government. From December 2016 the other postdoc, David Modic, worked on a contract for Orange, the French telecoms firm, designing a deceptive honeypot to attract and analyse the behaviour of network attackers. From October 2018 he worked for a company in Ljubljana, and is now (2021) part-time in industry and part-time at the University of Ljubljana. In addition, we worked with Google to establish that the "eyes" approach to getting subjects' attention did not work, and that text warnings work better; this is informing their design of phishing warnings and certificate alerts.
Sector Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Government, Democracy and Justice
Impact Types Economic

Policy & public services

 
Description Utrecht University 
Organisation Utrecht University
Country Netherlands 
Sector Academic/University 
PI Contribution Sophie van der Zee and I worked with Ronald Poppe of Utrecht on the deception detection experiments; Ronald is a signal-processing researcher.
Collaborator Contribution Supporting the signal-processing software to analyse the experiments done with motion-capture suits, depth cameras and radar, and also conducting a number of the experiments on their premises.
Impact Jointly authored papers, already entered
Start Year 2014
 
Company Name Focal Point Positioning 
Description Focal Point Positioning develops technology to allow the tracking of mobile devices in environments previously unreachable by traditional GPS signals. 
Year Established 2015 
Impact FPP Software is now licensed to OEMs who provide high-accuracy GPS for the autonomous vehicles industry and for high-end sportswear. Now that the relevant signals are available from common GPS chipsets, licensing discussions are continuing with phone OEMs. The relevance to the grant is that I got to know the founder, Ramsey Faragher, when I employed him to implement signal-processing aspects of Sophie van der Zee's deception detection experiments, using separate funding (a grant from Google). His other ideas in signal processing led him to start Focal Point after the academic work was done, and I backed the company.
Website http://www.focalpointpositioning.com
 
Description Decepticon 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact We set up and ran Decepticon, the world's first proper conference on deception. Previously, deception research had been fragmented, with people in cognitive psychology, social psychology, law enforcement, monitoring systems and cybercrime reporting results to their own subject conferences, and not interacting much. We assembled a programme committee of a dozen of the top people across these disciplines, got them to publicise the event, and attracted over 150 attendees. It was an extremely valuable event, and there was strong agreement that it should be held every two years in future.
Year(s) Of Engagement Activity 2015
URL http://www.cl.cam.ac.uk/events/decepticon2015/
 
Description Publicity for body-motion deception detection 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact The body-motion-capture deception-detection work was covered widely on Dutch TV, as well as in English-language media, including a full page in the Guardian.
Year(s) Of Engagement Activity 2015
URL https://www.lightbluetouchpaper.org/2015/01/04/to-freeze-or-not-to-freeze/