Privacy techniques for mobile computing

Lead Research Organisation: University of Cambridge
Department Name: Computer Science and Technology

Abstract

"Big data" applications have enabled powerful insights and technologies, through analytics and machine learning, by leveraging the wealth of data we are now able to collect, often on individuals. But this wealth of information dramatically increases privacy risks: even from anonymised or aggregated data, such information richness allows individual users to be deanonymised with ease, and valuable sensitive information to be inferred about them.
Today, privacy is an increasing concern; users routinely share private or sensitive data with services without strong assurances about the privacy of their data, or control over how it is used. My goal is to provide strong privacy assurances to a user when sharing personal or sensitive data with a third party, while still enabling the powerful benefits of "big data" computation.

With current technologies there is a tension between privacy and "big data" computation, because cloud applications are built to analyse large local (and usually plaintext) datasets. Users hand off sensitive data to a service with the implicit trust that the data will be used responsibly. Plaintext storage is a requirement for most models of analysis and machine learning; it also relieves users of storage burdens and allows companies to keep proprietary algorithms secret, running only on their own hardware. But it exposes users to privacy risks: the user must assume the service can be trusted not to violate their privacy or anonymity (often poorly placed trust, in industries where user profiles are sold for advertising), and even where that trust is warranted, data breaches remain a risk. This research intends to construct a system that allows users to interact with services without handing over sensitive information in plaintext, sharing only data that minimally conveys information.

A number of technologies have been identified as promising for such a system. Under a maximally decentralised model of computation, the research would aim to bring the code to the data, rather than the other way around. It would adapt proof-carrying code schemes to ensure that received code is safe to execute on sensitive data, and use verification proofs to ensure that the resulting data is safe to send to an untrusted party (i.e. proving the low sensitive-information content of the result).
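The "bring the code to the data" idea can be illustrated with a toy sketch: the data holder receives query code from a service, statically checks it against a whitelist of permitted constructs before running it locally, and releases only the aggregate result. This is not the proposed proof-carrying code scheme itself, merely an illustrative stand-in; the function names, the AST whitelist, and the example query are all invented for illustration, and a restricted `eval` is not a real sandbox.

```python
import ast

# Hypothetical whitelist: arithmetic and aggregation over a provided dataset.
# No imports, attribute access, or names outside the allowed set.
ALLOWED_NODES = (
    ast.Expression, ast.Call, ast.Name, ast.Load, ast.Constant,
    ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.Div,
)
ALLOWED_NAMES = {"sum", "len", "min", "max", "data"}

def is_safe(code: str) -> bool:
    """Statically check that received code uses only whitelisted constructs."""
    try:
        tree = ast.parse(code, mode="eval")
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            return False
        if isinstance(node, ast.Name) and node.id not in ALLOWED_NAMES:
            return False
    return True

def run_on_sensitive(code: str, data):
    """Execute vetted code against local data; only the result leaves."""
    if not is_safe(code):
        raise ValueError("code failed safety check")
    return eval(compile(ast.parse(code, mode="eval"), "<received>", "eval"),
                {"__builtins__": {}, "sum": sum, "len": len,
                 "min": min, "max": max},
                {"data": data})

heart_rates = [62, 71, 88, 64]              # sensitive local data
query = "sum(data) / len(data)"             # code sent by the service
print(run_on_sensitive(query, heart_rates)) # aggregate leaves the device
```

A real proof-carrying code scheme would attach a machine-checkable proof to the code so the checker need not re-derive safety itself; the whitelist above stands in for that proof check only for the sake of the example.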

A centralised model of computation, as in current cloud computing, would instead aim to share encrypted data. This requires encryption schemes that allow useful computation on encrypted data, such as property-revealing or partially homomorphic encryption. Again, these might make use of verification proofs to limit the information present in the data, allowing the privacy risk of the sharing to be quantified, which might provide the basis for presenting (and controlling) privacy to a user.
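As an illustration of computing on encrypted data, the following is a toy implementation of the Paillier cryptosystem, a well-known additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a service could total encrypted values without ever seeing them. The primes below are absurdly small and hard-coded purely for demonstration; any real use needs large random primes and a vetted library.

```python
import random
from math import gcd

# Toy Paillier parameters (insecure sizes, for illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                     # standard simplification g = n + 1
lam = (p - 1) * (q - 1)       # phi(n); a valid choice of lambda here
mu = pow(lam, -1, n)          # with g = n + 1, mu = lam^{-1} mod n

def encrypt(m: int) -> int:
    """Randomised encryption: c = g^m * r^n mod n^2."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2        # multiply ciphertexts => add plaintexts
print(decrypt(c_sum))         # 42
```

Note what the scheme does and does not give: the server can aggregate encrypted values (sums, and by extension means or counts) without the decryption key, but it cannot compute arbitrary functions; that is exactly the "partially" in partially homomorphic.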

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/N509620/1                                   01/10/2016  30/09/2022
1940975            Studentship   EP/N509620/1  01/10/2017  30/09/2021  Jovan Powar
 
Description Nokia Bell Labs eSense project 
Organisation Nokia Research Centre Cambridge
Department Nokia Devices R&D
Country United Kingdom 
Sector Private 
PI Contribution Recommendations for data collection guidelines, appraisal of privacy risks of their data collection activity
Collaborator Contribution Facilitating access to a new network of data collection projects, early access to eSense project data collection hardware
Impact Presented a workshop paper on the issues surrounding data protection for the emerging research community around the eSense project. Made recommendations on ethical processes for handling the collected data to workshop attendees who were pursuing research projects.
Start Year 2019