Game Theory for Cyber Security

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

This project falls within the EPSRC research theme of Information and Communication Technologies and aims to study techniques for applying computational game theory to problems in cyber security.

Computational game theory (CGT) is a research field at the intersection of game theory and computer science. While classical game theory has provided rich mathematical foundations and equilibrium concepts for modeling and understanding strategic interactions between self-interested agents, CGT focuses on the computational aspects, asking in general how to design efficient algorithms to compute equilibrium solutions. CGT thereby makes quantitative analysis available, extending descriptive results into constructive ones that can predict strategic behavior or guide strategic decision-making in real-world scenarios.

Recent research in CGT has found many successful applications in the security domain, known as security games. The key problem in security games is how to allocate limited defense resources (e.g., police guards) to protect potential targets (e.g., ports, trains, wildlife) against strategic attackers (e.g., terrorists, poachers). To obtain the optimal solution, security events are modeled as leader-follower games played between a defender (the leader) and an attacker (the follower), and algorithms have been designed to compute the equilibria of such games. Systems applying these algorithms have been developed and deployed in many real security settings, including the PROTECT system for the US Coast Guard, IRIS for the US Federal Air Marshal Service, and the PAWS system for wildlife protection.
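As a toy illustration of the leader-follower model, the sketch below (with invented payoffs, not data from any deployed system) computes the leader's optimal commitment in a tiny two-action game by discretised search over her mixed strategy, breaking the follower's ties in the leader's favour, the strong Stackelberg convention:

```python
# Toy Stackelberg game (payoffs invented for illustration).
# Rows are leader actions, columns are follower actions.
leader_payoff   = [[1.0, 3.0], [0.0, 2.0]]
follower_payoff = [[1.0, 0.0], [0.0, 1.0]]

def leader_value(p, c):
    """Leader's expected payoff when she mixes (p, 1-p) and the follower plays column c."""
    return p * leader_payoff[0][c] + (1 - p) * leader_payoff[1][c]

def follower_best_response(p):
    """Follower's best column against the mix (p, 1-p); ties are broken
    in the leader's favour (strong Stackelberg convention)."""
    utils = [p * follower_payoff[0][c] + (1 - p) * follower_payoff[1][c]
             for c in (0, 1)]
    ties = [c for c in (0, 1) if abs(utils[c] - max(utils)) < 1e-9]
    return max(ties, key=lambda c: leader_value(p, c))

best_p, best_val = 0.0, float("-inf")
for i in range(1001):               # discretised search over commitments
    p = i / 1000
    v = leader_value(p, follower_best_response(p))
    if v > best_val:
        best_p, best_val = p, v

print(best_p, best_val)   # -> 0.5 2.5
```

Committing to the mix (0.5, 0.5) earns the leader 2.5, whereas the unique simultaneous-play equilibrium of the same game gives her only 1; this commitment advantage is what deployed security-game systems exploit at much larger scale.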

Given these achievements, it is natural to consider applying CGT to tackle security issues in the cyber domain. Cyber systems are ubiquitous in today's world, with more than 5 billion connected devices in 2015 and a predicted 20 billion by 2020. This large pool of targets constitutes a large playground for cyber attackers, and improving cyber security is therefore imperative. Unfortunately, migrating the concepts and techniques of existing security games to cyber security faces many challenges due to the novelties of cyber problems: (1) Cyber scenarios may involve multiple (more than two) parties. Network users, who are free to make their own decisions and whose interests align fully with neither the defender's nor the attacker's, might be treated as another party in cyber games. With these players added, the interactions become considerably more complicated, and issues such as the bounded rationality of network users may also arise. (2) Complex network structures are often embedded in cyber problems. Unlike networks in the physical world, which are mostly planar, cyber networks can have higher dimensions. In addition, cyber networks can be much larger, easily involving thousands of distributed agents, a scale that already exceeds what is realistic in many physical scenarios. The large input size can make even simple problems computationally infeasible to solve. (3) Cyber attacks occur more frequently than physical attacks, and their cost is much lower. As a result, the one-shot game model widely used in existing research might be inappropriate in the cyber domain.

With these challenges in mind, in this project we will particularly focus on: (1) Modeling cyber security games, including building new models of security games that are appropriate for cyber scenarios, such as games involving multiple parties, and modeling human behaviors, such as the bounded rationality of human players. (2) Designing scalable solution algorithms, including establishing computational complexity results and exploring approximate or heuristic algorithms to overcome the complexity barrier. Given the foreseeable computational complexity of cyber security games, it is also practical to study adversaries with limited computational capability, which is another unexplored area in the literature.

Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/N509711/1                                   01/10/2016  30/09/2021
1735490            Studentship   EP/N509711/1  03/10/2016  31/03/2020  Jiarui Gan
 
Description

Information uncertainty is a key challenge facing applications of game theory. In a Stackelberg game, a leader commits to the strategy they are going to play, and a follower best responds to this commitment. To compute the optimal strategy for the leader to commit to, one needs to know the follower's payoffs for the different outcomes of the game (and hence how the follower will best respond), but this information is often missing or uncertain in reality. One approach that has been studied extensively to deal with this issue is to learn the optimal commitment by interacting with the follower: trying a series of strategies and learning from the follower's responses. Unfortunately, a follower who is aware of the leader's learning attempt can easily manipulate this approach by sending fake best responses to the leader, in particular by imitating the responses of a different follower type. As we show in our paper, a follower may benefit significantly from such manipulation, while it often results in the leader learning a highly suboptimal strategy. We then ask a fundamental question: how can a learning leader counteract such manipulation?
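The manipulation described above can be reproduced in a small simulation (hypothetical payoff matrices, chosen purely for illustration): the leader runs a naive learning procedure that trusts the responses she observes, while the follower either responds truthfully or imitates a fake follower type during the learning phase:

```python
# Hypothetical 2x2 game: a follower who fakes its best responses during
# the leader's learning phase can steer the leader towards a commitment
# that favours the follower and hurts the leader.
leader_payoff = [[1.0, 3.0], [0.0, 2.0]]   # rows: leader; cols: follower
true_follower = [[1.0, 0.0], [0.0, 1.0]]   # follower's real payoffs
fake_follower = [[1.0, 0.0], [1.0, 0.0]]   # imitated type: column 0 always best

def leader_value(p, c):
    return p * leader_payoff[0][c] + (1 - p) * leader_payoff[1][c]

def best_response(p, payoff):
    """Best column against the leader mix (p, 1-p); ties go to the leader."""
    utils = [p * payoff[0][c] + (1 - p) * payoff[1][c] for c in (0, 1)]
    ties = [c for c in (0, 1) if abs(utils[c] - max(utils)) < 1e-9]
    return max(ties, key=lambda c: leader_value(p, c))

def learn_commitment(reported):
    """Leader's discretised search, trusting the observed responses."""
    return max((i / 1000 for i in range(1001)),
               key=lambda p: leader_value(p, best_response(p, reported)))

results = {}
for label, reported in (("truthful", true_follower),
                        ("manipulated", fake_follower)):
    p = learn_commitment(reported)
    c = best_response(p, true_follower)    # real best response at play time
    fv = p * true_follower[0][c] + (1 - p) * true_follower[1][c]
    results[label] = (p, leader_value(p, c), fv)

print(results)
# truthful:    leader commits p=0.5 and gets 2.5; follower gets 0.5
# manipulated: leader learns p=1.0 and gets 1.0; follower gets 1.0
```

In this toy instance the manipulating follower doubles its payoff while the leader's value drops from 2.5 to 1.0, matching the qualitative phenomenon studied in the paper.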

In our paper, we put forward a game-theoretic framework at a higher level, in which the leader commits to a policy-based strategy, and show that the leader is indeed able to reduce the loss due to untruthful follower responses by using carefully designed policies. We then provide a systematic study of the problem of computing the optimal leader policy. Our results draw an almost complete picture of the complexity landscape. On the one hand, computing the optimal leader policy in our framework is computationally hard even in the approximate sense. On the other hand, when we extended our study to Stackelberg security games in another paper [9], we found an efficient algorithm that accomplishes this task. Interestingly, we also found that in security games it is almost always optimal for the follower to pretend that the game they are playing is zero-sum; the leader is then led to learn her maximin strategy. Essentially, the leader gains no additional information from the manipulation, because computing a maximin strategy does not require the opponent's payoff information.
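This last observation can be checked directly: computing the leader's maximin strategy, sketched below for a hypothetical two-action game, involves only her own payoff matrix, so a zero-sum-pretending follower steers the leader to a strategy she could have computed without any interaction at all:

```python
# The leader's maximin strategy depends only on her OWN payoffs
# (illustrative matrix below), never on the follower's, which is why a
# follower who mimics zero-sum play reveals nothing to a learning leader.
leader_payoff = [[1.0, 3.0], [0.0, 2.0]]

best_p, best_v = 0.0, float("-inf")
for i in range(1001):                       # discretised search over mixes
    p = i / 1000
    worst = min(p * leader_payoff[0][c] + (1 - p) * leader_payoff[1][c]
                for c in (0, 1))            # worst case over follower columns
    if worst > best_v:
        best_p, best_v = p, worst

print(best_p, best_v)   # -> 1.0 1.0
```

Here the leader's maximin strategy is to play her first action, guaranteeing value 1 no matter what the follower does.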
Exploitation Route

Our findings raise a series of research questions calling for a deeper understanding of manipulation against a learning player across a broader range of areas. Our results can be used to close the loophole in learning approaches used to find the optimal commitment in Stackelberg games. The model of Stackelberg games dates back to 1934, when the German economist Stackelberg introduced it to model market competition between a leader firm and follower firms. A number of recent applications employ Stackelberg games to study security resource allocation problems in strategic scenarios and have been successfully deployed; our results can help improve the robustness of these applications. Beyond that, most mechanism design problems can also be viewed as Stackelberg games at a high level, where the mechanism designer plays against rational agents who best respond to the mechanism. The same phenomenon may therefore arise in these problems, and our results may apply there as well.
Sectors

Creative Economy, Security and Diplomacy