Dynamic Architectures for Computational Self-Awareness

Lead Research Organisation: Aston University
Department Name: Sch of Engineering and Applied Science

Abstract

Recent years have seen growing interest in technical systems that express autonomous behaviour as a way of addressing a wide range of challenges. Modern systems are deployed in increasingly diverse, unpredictable and dynamic environments, creating a need for them to overcome unknown states and situations through reasoning and learning, without human intervention. As the environments and goals of these systems are increasingly subject to change over time, it is becoming impossible for programmers to anticipate the "correct" approach to run-time problems while designing the systems themselves; it is therefore ever more important for systems to adapt autonomously to such changes in an effective and efficient manner. One approach to tackling these issues is to design systems with computational self-awareness capabilities, so that they can learn and act without any prior knowledge, whilst extending their skills over time.
With this in mind, the project aims to investigate how self-aware computing systems can be designed to operate in an ever-changing environment, looking specifically at the additional complexity of interactions between systems and how these can be engineered to be efficient and meaningful. The focus is on system-to-system interactions, rather than the larger area of sociotechnical systems, where there are complex interactions between systems and humans alike.

One question this project aims to address is how systems can learn and achieve their own goals in an environment where their actions are affected by another system, whether implicitly or explicitly. Researchers have previously asked questions of integration, which concern explicit issues in the design of the system itself; this project instead investigates the unintended interactions that may arise between two interacting systems, and how systems can be designed to combat these issues autonomously. This involves considering how a system models both itself and others, and how such "social systems" can develop a deeper understanding of their place in the world around them, as well as the impact others can have on them. Theories of human psychology and sociology will be used as inspiration to answer these questions.
A further question concerns the learning processes themselves that a system can exhibit, and how these relate to the context in which systems operate. For example, the outcome of a given behaviour can change depending on the context or situation; if the system is not complex enough to perceive the cause of this change (e.g. the actions of another system, or a change in the environment), the change will appear to have no reason, and the system will update its knowledge accordingly. A consequence of this is that well-learnt and successful knowledge and behaviour may be changed or lost due to unknown interactions. This project therefore also aims to investigate whether the learning processes of systems can be modularised, such that contextual change does not interfere with learnt knowledge. In this way, systems may interact more effectively through a more robust learning approach.
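The modularisation idea above can be sketched in code. The following is a minimal illustration only, not the project's actual design: the class name, learning rule and parameter values are all assumptions made for this sketch. Each perceived context gets its own independent value table, so a change of context re-routes learning updates rather than overwriting knowledge earned elsewhere.

```python
import random
from collections import defaultdict

class ContextModularLearner:
    """Illustrative sketch: one independent value table per perceived
    context, so learning under a changed context cannot corrupt
    knowledge learnt in another. Names and numbers are assumptions."""

    def __init__(self, actions, lr=0.1, epsilon=0.1):
        self.actions = list(actions)
        self.lr = lr            # learning rate (assumed value)
        self.epsilon = epsilon  # exploration rate (assumed value)
        # context label -> {action: estimated value}
        self.tables = defaultdict(lambda: {a: 0.0 for a in self.actions})

    def act(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # explore occasionally
        table = self.tables[context]
        return max(table, key=table.get)        # otherwise exploit

    def learn(self, context, action, reward):
        table = self.tables[context]
        table[action] += self.lr * (reward - table[action])

random.seed(0)
agent = ContextModularLearner(actions=["wait", "move"])
# The best action flips between contexts; a single shared table would be
# dragged back and forth, but each per-context table converges on its own.
for _ in range(500):
    for ctx, best in [("alone", "move"), ("shared", "wait")]:
        a = agent.act(ctx)
        agent.learn(ctx, a, 1.0 if a == best else 0.0)

print(max(agent.tables["alone"], key=agent.tables["alone"].get))    # move
print(max(agent.tables["shared"], key=agent.tables["shared"].get))  # wait
```

A monolithic learner facing the same reward flip would see its single table pulled in opposite directions, which is exactly the loss of well-learnt behaviour described above.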

Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/N509425/1                                   30/09/2016  29/09/2021
1926505            Studentship   EP/N509425/1  30/06/2017  30/05/2021  Chloe Barnes
 
Description My most recent achievements to date are publishing results in a conference paper (SASO 2019) and, within the past month, submitting a manuscript with more detailed results and analysis for publication in a special issue of the journal Future Generation Computer Systems.

These papers explore the increasing need for complex systems (be they smart cities, vehicle networks, multi-agent systems, or components within a system with different goals) to be equipped to combat changes in their environment whilst still learning to achieve their goals - especially in the case of systems that share an environment with other systems. We demonstrate that it can be beneficial for social systems (systems that pursue goals whilst sharing an environment with other systems) to use an approach inspired by Social Action Theory, combining both traditional and goal-rational behaviour, instead of the purely goal-rational behaviour typically seen in current systems.
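The traditional/goal-rational distinction can be illustrated with a toy sketch. This is not the model from the papers: the blend weight, learning rule and habit mechanism are assumptions for this example. The agent sometimes repeats an established habit (traditional action) and otherwise exploits its current value estimates (goal-rational action).

```python
import random

class SocialActionAgent:
    """Toy sketch, not the published model: blends 'traditional' action
    (repeating an established habit) with 'goal-rational' action
    (exploiting current value estimates), loosely inspired by Social
    Action Theory. All parameter values here are assumptions."""

    def __init__(self, actions, tradition_weight=0.5, lr=0.2, epsilon=0.1):
        self.actions = list(actions)
        self.tradition_weight = tradition_weight  # how often habit wins
        self.lr = lr
        self.epsilon = epsilon
        self.values = {a: 0.0 for a in self.actions}  # goal-rational view
        self.habit = None                             # traditional view

    def act(self):
        # Traditional action: stick with the habit regardless of estimates.
        if self.habit is not None and random.random() < self.tradition_weight:
            return self.habit
        # Goal-rational action: explore occasionally, otherwise exploit.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        self.values[action] += self.lr * (reward - self.values[action])
        if reward > 0:
            self.habit = action  # rewarded behaviour becomes habitual

random.seed(1)
agent = SocialActionAgent(actions=["defect", "cooperate"])
for _ in range(300):
    a = agent.act()
    agent.learn(a, 1.0 if a == "cooperate" else 0.0)
print(agent.habit)  # cooperate
```

The appeal of the habit channel, in this toy setting, is that a temporary disturbance to the reward signal (e.g. interference from another system) erodes the value estimates but not the habit, so a purely goal-rational agent may abandon a good behaviour that the blended agent keeps performing.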

This has led to the formation of new academic connections, and a collaboration between Aston University, and researchers at the University of Oslo. We hope to combine our expertise and explore this avenue further in the coming months.

Additionally, I was invited in January 2020 to present this research as a guest speaker at BrumAI - a local tech meetup in Birmingham. As such, I had the opportunity to increase the impact of this work by presenting to the general public, and demonstrating the value of my work in a broader context.
Exploitation Route The outcomes of this research so far could be used by designers of complex systems, such as those developing smart cities, trading agents, networks of autonomous vehicles, and other socio-technical systems comprising many devices that each have their own goals to achieve. The research looks at helping "social" systems learn to achieve their goals, despite others doing the same around them.

In an academic sense, I intend to explore other types of social action in computer systems, to investigate their impact on how systems perform and behave in shared environments, as well as how and what they learn. So far, the research considers "agents" (software entities with a goal to achieve) that are alone in an environment, or that share an environment with one other agent. A future avenue of this work is therefore to explore the dynamics of environments with many agents (i.e. more than two). Questions here surround how such agents achieve goals, what they learn, whether they can cooperate to mutual benefit, and whether social action theory improves performance and learning as it does with single agents and pairs.
Sectors Digital/Communication/Information Technologies (including Software)