A digital COgnitive architecture to achieve Rapid Task programming and flEXibility in manufacturing robots through human demonstrations (DIGI-CORTEX)

Lead Research Organisation: University of York
Department Name: Computer Science

Abstract

The Made Smarter review identified that the UK is lagging behind in worker productivity and could benefit from the advent of new industrial digital tools (IDTs) such as novel intelligent technologies, connected devices, robotics and artificial intelligence. It is estimated that IDTs could contribute an additional £630bn to the UK economy by 2035 and increase manufacturing sector growth by between 1.5% and 3% per annum [1]. For example, there is high demand for bespoke and personalised goods in high volumes [2]. To meet this demand, manufacturing systems need to be highly flexible, adaptable and highly automated. Since most manufacturing SMEs operate job shops and contribute up to 15% of the UK economy [3], equipping them with robots that can learn a task rapidly and flexibly (similar to how a human can be rapidly trained to assemble new product lines) will enable SMEs to meet high order demands, thereby improving UK plc's export opportunities and the UK's GDP.

This proposal aims to investigate cognitive architectures that equip robots with the capability to rapidly learn new skills through passive observation of a human demonstrating a task, and to apply previously learnt skills to new task scenarios, thereby achieving task flexibility on the manufacturing floor. This opens up exciting possibilities. For one, it means that robots can be taught to perform various tasks without intensive programming by a human. It also means that robots can be flexibly deployed across a wide variety of tasks, reducing the need for capital-intensive, rigid and time-consuming manufacturing set-ups.

There is a gap in the literature on applying digital mental models to robots in order to build flexible, creative robots that can be rapidly re-tasked. Nevertheless, there is a growing realisation that creativity is needed in the industrial robots of the future and that this could be achieved by providing them with mental models [4]. For the first time, this proposal investigates a cognitive architecture that embeds the human cognitive capabilities of mental simulation for creative problem solving and task structure mapping in a unified framework on manufacturing robots, for the purpose of achieving rapid re-tasking (task flexibility) of industrial robots via passive human demonstrations. State-of-the-art architectures (such as SOAR and ACT-R) often rely on rigid procedural rules informed by prior knowledge of the task, which makes them less amenable to rapid re-tasking on robots, while techniques based on machine learning paradigms (e.g. deep neural networks or reinforcement learning) require large amounts of data and result in task-specific applications. Furthermore, these techniques are yet to be successfully combined with the creation of digital mental models through envisioning and applied to the varied manufacturing tasks of the kind investigated in this proposal. In summary, the novelty of this proposal lies in the application of robot-envisioned digital mental models to support creativity and the imagination of morphologically informed solutions to problems encountered in manufacturing (and in sectors beyond manufacturing), as well as to support the application of previously learnt skills to new, similar tasks. This will lead to rapid re-tasking and task flexibility in robots.

References:
[1] J. Maier, "Made Smarter Review," 2017.
[2] D. Brown, A. Swift, and E. Smart, "Data analytics and decision making," Inst. Ind. Res. Univ. Portsmouth, pp. 1-20, 2019, doi: 10.4324/9781315743011-9.
[3] C. Rhodes, "Business Statistics," 2019.
[4] J. B. Hamrick, "Analogues of mental simulation and imagination in deep learning," Current Opinion in Behavioral Sciences, vol. 29, pp. 8-16, 2019.

Related Projects

Project Reference | Relationship | Related To | Start | End | Award Value
EP/W014688/1 | - | - | 31/08/2022 | 31/01/2023 | £325,453
EP/W014688/2 | Transfer | EP/W014688/1 | 01/02/2023 | 29/06/2026 | £290,176
 
Description Innovation Launchpad Network
Amount £4,880,915 (GBP)
Funding ID EP/W037009/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 03/2022 
End 04/2026
 
Description Manufacturing Made Smarter Network+ (MMSN+)
Amount £100,763 (GBP)
Funding ID ES/V016555/1 
Organisation Economic and Social Research Council 
Sector Public
Country United Kingdom
Start 03/2021 
End 08/2021
 
Title Python library extension to control OnRobot grippers and UR5 
Description There are currently no Python tools to control OnRobot grippers mounted on a UR5. The existing communication libraries are cumbersome and require a high level of technical detail. We created a Python library to make it easy to interact with these grippers. 
Type Of Material Improvements to research infrastructure 
Year Produced 2024 
Provided To Others? Yes  
Impact This was released in February 2024. The Python library further supports our work with industrial partners who make use of variants of the OnRobot grippers. 
URL https://pypi.org/project/onRobot/
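As a concrete illustration of the intended workflow, the sketch below shows how a pick-and-place step might drive the gripper from Python. The class name RG2, the methods open_gripper/close_gripper and the IP address are hypothetical placeholders for illustration, not the library's confirmed API; see the package page above for the actual interface.

    # Illustrative sketch only: class and method names are assumed placeholders,
    # not necessarily the onRobot package's real API.
    from onRobot import RG2  # hypothetical gripper class name

    # OnRobot grippers are typically reached via the Compute Box over the network;
    # the IP address below is an example value.
    gripper = RG2(ip="192.168.1.1")

    # Hypothetical convenience calls wrapping the width/force commands that
    # OnRobot RG-series grippers expose.
    gripper.open_gripper(width_mm=100, force_n=20)
    # ... command the UR5 to the pick pose with your robot-side controller ...
    gripper.close_gripper(width_mm=20, force_n=40)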
 
Description EPSRC UK-RAS Network Plus 
Organisation University of Manchester
Country United Kingdom 
Sector Academic/University 
PI Contribution I run the Early Careers Research Forum for the UK-RAS Network and serve on the organising board of the UK-wide Towards Autonomous Robotics Conference.
Collaborator Contribution Mentorship to support my progression towards becoming a mid-career academic.
Impact The network is resulting in closer collaborations between UK Universities in Robotics and Autonomous Systems. It is also enabling the training of future academic research leaders in the UK while forging closer ties between industry and academia for the exploitation of cutting edge research.
Start Year 2023
 
Description Flexible automation systems for Bosch 
Organisation Bosch Group
Department Bosch
Country Germany 
Sector Private 
PI Contribution We are transferring the outputs of our research, namely (i) a learning from demonstration framework for adaptive task and motion planning in varying package-to-order scenarios and (ii) graph-based semantic planning for adaptive human-robot collaboration in assemble-or-order scenarios, into the manufacturing environment of Bosch Thermotechnology (https://www.bosch-industrial.com/gb/en/commercial-industrial/home/). This will enable them to quickly reconfigure their manufacturing lines for new products.
Collaborator Contribution Bosch is contributing staff time, as well as equipment, boilers and designs, to support the translation of the research into their environment.
Impact The paper "Graph-based semantic planning for adaptive human-robot-collaboration in assemble-or-order scenarios" was a direct result of this collaboration. The collaboration also led to a workshop titled "Human-like computing for safe collaborative robots in manufacturing and healthcare" (https://humanlikecomputingroman2023.wordpress.com/organising-committee/) and to talks at the RO-MAN 2023 conference (https://ro-man2023.org/main).
Start Year 2023
 
Description Flexible automation systems for Nestle 
Organisation Nestlé (Global)
Department Nestle PTC York
Country United Kingdom 
Sector Private 
PI Contribution In this collaboration, my research team is investigating the application of techniques we developed in our paper "A learning from demonstration framework for adaptive task and motion planning in varying package-to-order scenarios" to achieve a flexible manufacturing system. We are in discussions about funding opportunities to further exploit this work.
Collaborator Contribution Nestle has connected me with their research headquarters, which allocates funding for research; I expect to hear back from them in about four weeks. They have also invited me to give a presentation and demonstration at their GXO logistics site in the East Midlands. They will provide the robots on which to test our algorithms and approaches in a manufacturing environment.
Impact This collaboration is multi-disciplinary between computer science, manufacturing and engineering.
Start Year 2023
 
Title A deep multi-agent reinforcement learning framework for autonomous aerial navigation to grasping points on loads 
Description Deep reinforcement learning, by taking advantage of neural networks, has made great strides in the continuous control of robots. However, in scenarios where multiple robots are required to collaborate to accomplish a task, it is still challenging to build an efficient and scalable multi-agent control system due to increasing complexity. In this paper, we regard each unmanned aerial vehicle (UAV) with its manipulator as one agent, and leverage the power of multi-agent deep deterministic policy gradient (MADDPG) for the cooperative navigation and manipulation of a load. We propose solutions for addressing the navigation-to-grasping-point problem in targeted and flexible scenarios, and focus mainly on how to develop model-free policies for the UAVs without relying on a trajectory planner. 
Type Of Technology New/Improved Technique/Technology 
Year Produced 2024 
Impact This would support our work with industrial partners in their factories of the future, particularly in applying reinforcement learning to support the generation of hypothetical scenarios for their manufacturing lines. 
URL https://www.sciencedirect.com/science/article/pii/S0921889023001288
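To make the set-up concrete, the minimal PyTorch sketch below shows the centralised-critic/decentralised-actor structure that characterises MADDPG: each agent's actor sees only its own observation, while each critic conditions on all agents' observations and actions. The network sizes and dimensions are illustrative assumptions, not values from the paper.

    # Minimal MADDPG-style sketch; dimensions are assumed example values.
    import torch
    import torch.nn as nn

    N_AGENTS, OBS_DIM, ACT_DIM = 3, 12, 4  # assumed sizes, not from the paper

    def mlp(in_dim, out_dim):
        return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                             nn.Linear(64, 64), nn.ReLU(),
                             nn.Linear(64, out_dim))

    # Decentralised actors: own observation -> continuous action (e.g. a velocity setpoint).
    actors = [nn.Sequential(mlp(OBS_DIM, ACT_DIM), nn.Tanh()) for _ in range(N_AGENTS)]

    # Centralised critics: joint observations + joint actions -> Q-value.
    critics = [mlp(N_AGENTS * (OBS_DIM + ACT_DIM), 1) for _ in range(N_AGENTS)]

    # One forward pass on a dummy batch to illustrate the data flow.
    obs = torch.randn(8, N_AGENTS, OBS_DIM)
    acts = torch.stack([actors[i](obs[:, i]) for i in range(N_AGENTS)], dim=1)
    joint = torch.cat([obs.flatten(1), acts.flatten(1)], dim=-1)
    q_values = [critics[i](joint) for i in range(N_AGENTS)]  # per-agent Q estimates

During training the centralised critics would be updated with the usual deterministic policy gradient targets; only the lightweight per-agent actors are needed at deployment time, which is consistent with the planner-free, model-free policies described above.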
 
Title Behavioural Swarm Optimisation for load grasping and transportation 
Description Collaboration of multiple agents enables the accomplishment of tasks that would be difficult or impossible for a single agent to complete alone. In this work, we propose a hierarchical algorithmic architecture that supports the search and coverage of various unknown payload profiles for subsequent grasping and transportation. The proposed grasping formation satisfies static equilibrium, thereby reducing energy usage during transportation. 
Type Of Technology New/Improved Technique/Technology 
Year Produced 2024 
Impact This is still being realized. 
URL https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10254023
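The static-equilibrium property mentioned above reduces, in its simplest form, to requiring that the agents' contact forces balance the payload's weight and that their moments about the centre of mass cancel. The short sketch below is a generic check of that condition (not the paper's algorithm); the contact points and forces are assumed example values.

    # Generic static-equilibrium check for a multi-agent grasping formation.
    import numpy as np

    def is_static_equilibrium(contact_points, contact_forces, mass, g=9.81, tol=1e-6):
        # contact_points, contact_forces: (n, 3) arrays in a payload frame
        # whose origin is the centre of mass.
        gravity = np.array([0.0, 0.0, -mass * g])
        net_force = contact_forces.sum(axis=0) + gravity              # force balance incl. weight
        net_torque = np.sum(np.cross(contact_points, contact_forces), axis=0)  # moment balance
        return np.linalg.norm(net_force) < tol and np.linalg.norm(net_torque) < tol

    # Two agents lifting a 2 kg payload symmetrically about its centre of mass.
    points = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
    forces = np.array([[0.0, 0.0, 2 * 9.81 / 2], [0.0, 0.0, 2 * 9.81 / 2]])
    print(is_static_equilibrium(points, forces, mass=2.0))  # True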