ViTac: Visual-Tactile Synergy for Handling Flexible Materials
Lead Research Organisation:
King's College London
Department Name: Engineering
Abstract
Handling flexible materials is common in industrial, domestic and retail applications, e.g., evaluating new fabric products in the fashion industry, sorting clothes at home and replenishing shelves in a clothing store. Such tasks have been highly dependent on human labour and remain challenging for autonomous systems due to the complex dynamics of flexible materials. This proposal aims to develop a new visuo-tactile integration mechanism for estimating the dynamic states and properties of flexible materials while they are being manipulated by robot hands. This technique offers the potential to revolutionise autonomous systems for handling flexible materials, allowing their automated handling to be included in larger automated production processes and process management systems. While the initial system to be developed in this work targets textiles, the same technology has the potential to be applied to other flexible materials, including fragile products in the food industry, flexible objects in manufacturing and hazardous materials in healthcare.
Planned Impact
``AI and the data driven economy'' has been set out as the first of the four Grand Challenges in the UK government's Industrial Strategy, and Robotics and Autonomous Systems have been identified as key priority areas for tackling this Grand Challenge. Considerable resources are spent in industry on manipulating and evaluating flexible products, as emphasised by Unilever, and handling such objects is becoming a bottleneck to including their automated handling in larger automated processes. The goal of this project is to provide transformative solutions that help robots perform automated handling of flexible objects in the context of the manufacturing industry, which aligns well with the EPSRC "Productive nation" priority and the "Manufacturing the future" theme.
The project contributes to the EPSRC research areas of Robotics and AI (RAI) technologies and follows its strategic focus on the accelerated deployment of RAI technologies in next-generation manufacturing: the developed handling system will be implemented in autonomous material assessment and handling, which will be used by Unilever and potentially other companies. It will also address the "Challenge: Deliver safer, smarter engineering" of the Alan Turing Institute.
This project complements existing EPSRC projects "MAN^3", "RAIN", "NCNR", "FAIR-SPACE" and "ORCA" on robotic handling of materials, by proposing visuo-tactile coordination for handling flexible materials, and EPSRC projects "Tactile superresolution sensing" and "NeuPRINTSKIN", by extending tactile sensing technologies to robotic handling applications. The project also complements the EU projects "CloPeMa", "I-DRESS" and "CLOTHILDE" by combining vision and touch for clothes perception and manipulation.
Publications
Alexandridis K (2022) Inverse Image Frequency for Long-tailed Image Recognition.
Alexandridis KP (2023) Inverse Image Frequency for Long-Tailed Image Recognition. In IEEE Transactions on Image Processing.
Butterworth A (2023) Leveraging Multi-modal Sensing for Robotic Insertion Tasks in R&D Laboratories.
Cao G (2024) Multimodal zero-shot learning for tactile texture recognition. In Robotics and Autonomous Systems.
Related Projects
Project Reference | Relationship | Related To | Start | End | Award Value
---|---|---|---|---|---
EP/T033517/1 | | | 11/04/2021 | 16/12/2021 | £402,545
EP/T033517/2 | Transfer | EP/T033517/1 | 17/12/2021 | 10/10/2024 | £311,503
Description | Through the work funded by the ViTac award, we have reached the following key findings: 1. Camera-based tactile sensors have proven useful for capturing rich contact information from flexible materials, in particular detailed textures, contact locations and forces during sensor-object interactions. The GelTip sensor we have developed can reveal the seeds on the outside of strawberries, the textures of fabrics and the deformation of materials it is in contact with. The TouchRoller sensor and the GelFinger extend the GelTip to rapid assessment of large surfaces, via a rolling mechanism or a motorised camera within the sensor. 2. Simulation of camera-based tactile sensors has been shown to be effective for testing their performance before deployment in real-world experiments. The simulation models we have developed pioneered ways of simulating the different components of camera-based tactile sensors, from how light travels in the soft elastomer, to the contact dynamics between the sensor and in-contact objects, to different sensor morphologies. They enable experimentation with new designs for camera-based tactile sensors and their integration with other robot components in dexterous tasks. 3. Integrating tactile sensing and vision enables better perception of the states of flexible materials than either sensing modality alone. Our experiments show that vision acts as the predominant modality for perceiving the global state of a flexible material, while tactile sensing guides the robot hand to better handle the material in hand through dynamic contact information; a minimal fusion sketch is given after this entry. |
Exploitation Route | The outcomes of this funding could be taken forward and put to use in the following ways: 1. The sensor prototypes developed in the award, e.g., GelTip, TouchRoller and GelFinger, could be adapted to applications such as the assessment of flexible materials in the manufacturing sector, or to help robots handle challenging objects in warehouses and retail shops. 2. The simulation models developed in the award could be used to devise new tactile sensor designs and test them in dexterous tasks. 3. The algorithms developed in the award to integrate vision and tactile sensing could be leveraged to improve robot perception and control when handling challenging objects. |
Sectors | Digital/Communication/Information Technologies (including Software), Healthcare, Manufacturing including Industrial Biotechnology
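To illustrate the third finding above, the following is a minimal, hypothetical sketch of late fusion of visual and tactile features for estimating the state of a flexible material. The `FabricStateNet` name, the network sizes and the use of simple late fusion are assumptions made for illustration; they do not reproduce the models developed in the award.

```python
# Hypothetical late-fusion sketch: a global visual encoder and a local tactile
# encoder feed a shared head that predicts the material state. Shapes and names
# are illustrative only; they do not reproduce the award's actual models.
import torch
import torch.nn as nn

class FabricStateNet(nn.Module):
    def __init__(self, num_states: int = 5):
        super().__init__()
        # Vision branch: captures the global configuration of the fabric.
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Tactile branch: captures local, dynamic contact information.
        self.tactile_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: combines both modalities to classify the fabric state.
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                  nn.Linear(64, num_states))

    def forward(self, rgb, tactile):
        feats = torch.cat([self.vision_enc(rgb), self.tactile_enc(tactile)], dim=1)
        return self.head(feats)

# Example usage with dummy batches of camera and tactile images.
model = FabricStateNet()
logits = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 5])
```

In this sketch the visual branch summarises the global configuration of the material while the tactile branch summarises local contact, mirroring the division of labour observed in the experiments.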
Description | Sensory evaluation involves a group of human testers evaluating new products via human senses such as vision and touch. It is a necessary part of both product development and quality control for many companies. However, sensory evaluation can be biased by factors such as tester fatigue, and intensive training has to be given to the testers to develop the requisite expertise for the job. Through this award, we have developed an autonomous handling system for estimating the properties of flexible materials in order to evaluate new products. The project has helped companies mitigate biases in the sensory evaluation of products, maintain their positions in an increasingly competitive market and contribute to their growth. The findings of the award have been further developed and are currently being exploited by Unilever to enhance the evaluation of flexible materials treated by their products. |
Sector | Manufacturing, including Industrial Biotechnology |
Impact Types | Economic |
Description | Collaboration with Southeast University, China |
Organisation | Southeast University China |
Country | China |
Sector | Academic/University |
PI Contribution | Tubular objects such as test tubes are common in chemistry and life sciences research laboratories, and robots that can handle them have the potential to accelerate experiments. Ideally, a robot would be trained to manipulate tubular objects in a simulator and then deployed in a real-world environment. However, it remains challenging for a robot to learn to handle tubular objects with a single sensing modality and to bridge the gap between simulation and reality. In this collaboration, together with our partners, we proposed a novel tactile-motor policy learning method to generalize tubular object manipulation skills from simulation to reality. In particular, we proposed a Sim-to-Real transferable in-hand pose estimation network that generalizes to unseen tubular objects. The network utilizes a novel adversarial domain adaptation network to narrow the pixel-level domain gap for tactile tasks by introducing an attention mechanism and a task-related constraint; a hedged sketch of the domain-adaptation idea is given after this entry. |
Collaborator Contribution | The in-hand pose estimation network is further implemented by our partners in a Reinforcement Learning-based policy learning framework for robotic insert-and-pullout manipulation tasks. The proposed method is applied to a human-robot collaborative tube placing scenario and a robotic pipetting scenario. The experimental results demonstrate the generalization capability of the learned tactile-motor policy toward tubular object manipulation in research laboratories. |
Impact | Yongqiang Zhao, Xingshuo Jing, Kun Qian, Daniel Fernandes Gomes, Shan Luo, Skill generalization of tubular object manipulation with tactile sensing and Sim2Real learning, Robotics and Autonomous Systems, Volume 160, 2023, 104321, ISSN 0921-8890, https://doi.org/10.1016/j.robot.2022.104321. |
Start Year | 2021 |
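The following is a minimal, hypothetical sketch of the adversarial domain-adaptation idea described in this collaboration: a shared tactile encoder feeds a pose regressor and, through a gradient-reversal layer, a sim/real domain discriminator. The class names, layer sizes and the specific use of gradient reversal are assumptions for illustration and do not reproduce the published network.

```python
# Hypothetical sketch of adversarial domain adaptation for Sim-to-Real tactile
# pose estimation. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class TactilePoseDA(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose_head = nn.Linear(32, 3)  # e.g. in-hand (x, y, theta), illustrative
        self.domain_head = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, tactile_img, lam=1.0):
        feat = self.encoder(tactile_img)
        pose = self.pose_head(feat)
        # Reversed gradients push the encoder towards sim/real-invariant features.
        domain = self.domain_head(GradReverse.apply(feat, lam))
        return pose, domain

model = TactilePoseDA()
pose, domain = model(torch.randn(4, 3, 128, 128))
print(pose.shape, domain.shape)  # torch.Size([4, 3]) torch.Size([4, 2])
```

During training, a pose loss on simulated data and a domain-classification loss on mixed sim/real data would be combined; the reversal encourages features that transfer to real tactile images.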
Description | Collaboration with Tsinghua University, China |
Organisation | Tsinghua University China |
Country | China |
Sector | Academic/University |
PI Contribution | Simulation is widely applied in robotics research to save time and resources. Several works have simulated optical tactile sensors, using either a smoothing method or the Finite Element Method (FEM). However, the former does not consider elastomer deformation physics, whereas the latter requires massive computational resources such as a computer cluster. In this work, we propose a pluggable and computationally cheap simulator for optical tactile sensors, named Tacchi, built with the Taichi programming language. It reconstructs elastomer deformation using particles, which allows deformed elastomer surfaces to be rendered into tactile images and reveals contact information without incurring high computational costs. Tacchi facilitates creating realistic tactile images in simulation, e.g., ones that capture wear-and-tear defects on object surfaces; a rough illustration of the particle-based idea is given after this entry. |
Collaborator Contribution | Our collaborators integrated the proposed Tacchi with robotics simulators for a robot system simulation. Experiment results showed that Tacchi can produce images with better similarity to real images and achieved higher Sim2Real accuracy compared to the existing methods. Moreover, it can be connected with MuJoCo and Gazebo with only the requirement of 1G memory space in GPU compared to a computer cluster applied for FEM. With Tacchi, physical robot simulation with optical tactile sensors becomes possible. |
Impact | Chen, Z., Zhang, S., Luo, S., Sun, F. and Fang, B., 2023. Tacchi: A pluggable and low computational cost elastomer deformation simulator for optical tactile sensors. IEEE Robotics and Automation Letters, 8(3), pp.1239-1246. |
Start Year | 2022 |
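As a rough illustration of the particle/height-map idea behind simulators like Tacchi, the following hypothetical sketch presses a rigid sphere into a grid of elastomer surface points and shades the resulting depth field into a greyscale tactile image. The grid resolution, spherical indenter and Lambertian-style shading are assumptions for illustration, not Tacchi's actual pipeline.

```python
# Hypothetical illustration of height-map based tactile rendering: a rigid
# sphere indents a sampled elastomer surface and the depth field is shaded
# into a simple tactile image. Not Tacchi's actual pipeline.
import numpy as np

H = W = 128                      # elastomer surface sampled as a grid of points
xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))

def indent_sphere(cx, cy, radius, depth):
    """Depth each surface point is pushed down by a spherical indenter."""
    r2 = (xs - cx) ** 2 + (ys - cy) ** 2
    cap = np.sqrt(np.clip(radius ** 2 - r2, 0.0, None))   # spherical cap height
    return np.clip(depth - (radius - cap), 0.0, None) * (r2 < radius ** 2)

def render_tactile(depth_map):
    """Shade the deformed surface with a simple Lambertian-style model."""
    gy, gx = np.gradient(depth_map)
    normal = np.stack([-gx, -gy, np.ones_like(depth_map)], axis=-1)
    normal /= np.linalg.norm(normal, axis=-1, keepdims=True)
    light = np.array([0.3, 0.3, 1.0]) / np.linalg.norm([0.3, 0.3, 1.0])
    return np.clip(normal @ light, 0.0, 1.0)               # greyscale tactile image

depth = indent_sphere(cx=0.1, cy=-0.2, radius=0.4, depth=0.05)
img = render_tactile(depth)
print(img.shape, float(img.min()), float(img.max()))
```

A particle-based simulator replaces the analytic indentation above with physically simulated particle displacements, but the rendering step from a deformed surface to a tactile image follows the same spirit.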
Description | Collaboration with Unilever R&D Port Sunlight |
Organisation | Unilever |
Department | Unilever UK R&D Centre Port Sunlight |
Country | United Kingdom |
Sector | Private |
PI Contribution | Our team has worked with researchers at Unilever Port Sunlight to apply our expertise in robot perception to assess fabric and hair care products with an artificial sense of touch. In this collaboration, we developed a bespoke vision-based tactile sensor that can slide over hair and fabric samples, as a human finger does when assessing their feel, and collect data to analyse the mechanical properties of these objects and align them with human touch-feel attributes; a hypothetical sketch of such an analysis is given after this entry. |
Collaborator Contribution | In this collaboration, our partners at Unilever provided their expertise and experience in assessing fabric and hair samples. When assessing new formulas of their fabric and hair care products, Unilever recruit a Human Sensory Panel to evaluate the properties of fabric or hair samples treated with the new formulas and score their touch feel. Our partners provided their data and experimental protocols for assessing the fabric and hair samples, as well as their expertise in analysing the data from these assessments. |
Impact | - Sensor prototypes used for assessment of fabrics and hair samples used at Unilever Port Sunlight - Further funding from Unilever. - Datasets collected from the experiments in assessing fabrics and hair samples. |
Start Year | 2021 |
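As an illustration of how tactile data might be aligned with human touch-feel attributes, the following hypothetical sketch regresses panel scores from simple texture statistics of tactile images. The features, the ridge regressor and the synthetic data are assumptions made for illustration only; they do not describe the analysis actually carried out with Unilever.

```python
# Hypothetical sketch: regress human panel touch-feel scores from simple
# statistics of tactile images. Features, model and data are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def tactile_features(img):
    """Crude texture statistics of one greyscale tactile image."""
    gy, gx = np.gradient(img)
    return np.array([img.mean(), img.std(), np.abs(gx).mean(), np.abs(gy).mean()])

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))            # stand-in for recorded tactile frames
scores = rng.random(40) * 10                 # stand-in for panel softness scores

X = np.stack([tactile_features(im) for im in images])
model = Ridge(alpha=1.0)
print(cross_val_score(model, X, scores, cv=5).mean())  # alignment with panel scores
```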
Description | Collaboration with University of Leeds, UK |
Organisation | University of Leeds |
Country | United Kingdom |
Sector | Academic/University |
PI Contribution | To assist robots in teleoperation tasks, haptic rendering, which gives human operators access to a virtual sense of touch, has been developed in recent years. Most previous haptic rendering methods rely heavily on data collected by tactile sensors. However, tactile data is not widely available for robots due to their limited reachable space and the restrictions of tactile sensors. To eliminate the need for tactile data, in this collaboration we proposed a novel method named Vis2Hap to generate haptic rendering from visual inputs that can be obtained from a distance without physical interaction. We take the surface texture of objects as the key cue to be conveyed to the human operator. To this end, a generative model is designed to simulate the roughness and slipperiness of the object's surface. To embed haptic cues in Vis2Hap, we use height maps from tactile sensors and spectrograms from friction coefficients as the intermediate outputs of the generative model. Once Vis2Hap is trained, it can be used to generate height maps and spectrograms of new surface textures, from which a friction image can be obtained and displayed on a haptic display. A user study demonstrated that Vis2Hap enables users to experience a realistic haptic feeling similar to that of physical objects. The proposed vision-based haptic rendering has the potential to enhance human operators' perception of the remote environment and facilitate robotic manipulation; a minimal sketch of the cross-modal generation idea is given after this entry. |
Collaborator Contribution | Our collaborator at the University of Leeds Prof. Ningtao Mao is an expert in textile engineering, who provided us their expertise in designing the experiments with the haptic device for rendering textiles. |
Impact | Cao, G., Jiang, J., Mao, N., Bollegala, D., Li, M. and Luo, S., 2023, May. Vis2hap: Vision-based haptic rendering by cross-modal generation. In 2023 IEEE International Conference on Robotics and Automation (ICRA) (pp. 12443-12449). |
Start Year | 2022 |
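The following is a minimal, hypothetical sketch of the cross-modal generation idea: a visual encoder with two decoder heads, one producing a tactile height map and one producing a friction spectrogram. The architecture, layer sizes and output resolutions are assumptions made for illustration and do not reproduce the published Vis2Hap model.

```python
# Hypothetical sketch of cross-modal haptic rendering: one visual encoder with
# two heads, producing a tactile height map and a friction spectrogram.
# Sizes and names are illustrative; they do not reproduce Vis2Hap itself.
import torch
import torch.nn as nn

class Vis2HapSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Head 1: tactile height map at the input resolution.
        self.height_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Head 2: friction "spectrogram" (frequency bins x time frames).
        self.spec_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64 * 32),
        )

    def forward(self, rgb):
        z = self.encoder(rgb)
        return self.height_head(z), self.spec_head(z).view(-1, 64, 32)

model = Vis2HapSketch()
height, spec = model(torch.randn(2, 3, 128, 128))
print(height.shape, spec.shape)  # torch.Size([2, 1, 128, 128]) torch.Size([2, 64, 32])
```

In the published work these intermediate outputs are further turned into a friction image shown on a haptic display; this sketch only illustrates the two-headed generation step.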
Description | 2021 UK-China Symposium on Advanced Manufacturing |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | Around 100 researchers attended the UK-China Symposium on Advanced Manufacturing online. The symposium featured talks from several fields of robotics and manufacturing, and I gave the keynote talk at the event. |
Year(s) Of Engagement Activity | 2021 |
Description | Invited keynote talk at the French research group in Robotics (GdR) open days |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to give a keynote talk at the French research group in Robotics (GdR) open days. Over 100 researchers attended the open days; the talk sparked questions and discussion afterwards, and attendees reported gaining fresh perspectives on the research topics covered at the event. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.gdr-robotique.org/ |
Description | Invited talk at the Centre for Vision, Speech and Signal Processing (CVSSP), Surrey University |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Professional Practitioners |
Results and Impact | I gave an invited talk at the Centre for Vision, Speech and Signal Processing (CVSSP), Surrey University. |
Year(s) Of Engagement Activity | 2021 |
Description | Invited talk at the Sino-EU Conference on Intelligent Robots and Automation, 2021 |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I gave an invited talk at the Sino-EU Conference on Intelligent Robots and Automation, 2021. |
Year(s) Of Engagement Activity | 2021 |
Description | Invited talk at the University of Birmingham |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Professional Practitioners |
Results and Impact | I gave an invited talk at the Intelligent Robotics lab at the University of Birmingham. |
Year(s) Of Engagement Activity | 2021 |
Description | Participation in an activity, workshop or similar - EPSRC Circular economy and ICT engagement and networking workshop |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Professional Practitioners |
Results and Impact | This event brought together the information and communication technologies (ICT) and circular economy research communities. It aimed to foster relationships that encourage greater consideration of circularity and resource efficiency in ICT research and research outcomes; to increase awareness and further develop understanding of the role of ICT research in achieving a circular economy; to identify community highlights and future priorities to feed into related EPSRC and UK Research and Innovation (UKRI) strategy development; and to understand the barriers to delivering high-impact, interdisciplinary research in this area. |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.ukri.org/events/circular-economy-and-ict-engagement-and-networking-workshop/ |
Description | Participation in an activity, workshop or similar - Invited talk at the 4th UK Manipulation Workshop, 2023 |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Postgraduate students |
Results and Impact | I gave a talk on our research in visuo-tactile perception in this ViTac award at the 4th UK Manipulation Workshop |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.robot-manipulation.uk/ |
Description | Participation in an activity, workshop or similar - Invited talk at the 5th UK Manipulation Workshop, 2024 |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Postgraduate students |
Results and Impact | I gave a talk on our research in visuo-tactile perception in this ViTac award at the 5th UK Manipulation Workshop |
Year(s) Of Engagement Activity | 2024 |
URL | https://www.robot-manipulation.uk/ |
Description | Participation in an activity, workshop or similar - Invited talk at the Embodied Intelligence Conference, 2024 |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I gave a talk on our research in visuo-tactile perception in this ViTac award at the 2024 Embodied Intelligence Conference |
Year(s) Of Engagement Activity | 2024 |
URL | https://embodied-intelligence.org/ |
Description | Participation in an activity, workshop or similar - Invited talk at the RSS "Interdisciplinary Exploration of Generalizable Manipulation Policy Learning: Paradigms and Debates" workshop |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I gave a talk on our research in visuo-tactile perception in this ViTac award at the RSS "Interdisciplinary Exploration of Generalizable Manipulation Policy Learning: Paradigms and Debates" workshop |
Year(s) Of Engagement Activity | 2023 |
URL | https://ai-workshops.github.io/interdisciplinary-exploration-of-gmpl/ |
Description | Participation in an activity, workshop or similar - UK-RAS meeting |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Professional Practitioners |
Results and Impact | I participated in the UK-RAS meeting in Leeds in 2022. |
Year(s) Of Engagement Activity | 2022 |
Description | Participation in an activity, workshop or similar - ViTac 2023: Blending Virtual and Real Visuo-Tactile Perception |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I organise the "ViTac 2023: Blending Virtual and Real Visuo-Tactile Perception" workshop at the International Conference on Robotics and Automation, London, 2023. I also give a talk at the workshop on our research done in the ViTac award. |
Year(s) Of Engagement Activity | 2023 |
URL | https://shanluo.github.io/ViTacWorkshops/ |
Description | Participation in an activity, workshop or similar - ViTac 2024: Towards Robot Embodiment with Visuo-Tactile Intelligence |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I organise the "ViTac 2024: Towards Robot Embodiment with Visuo-Tactile Intelligence" workshop at the International Conference on Robotics and Automation, Tokohama, Japan, 2024. I also give a talk at the workshop on our research done in the ViTac award. |
Year(s) Of Engagement Activity | 2024 |
URL | https://shanluo.github.io/ViTacWorkshops/ |
Description | ViTac 2021: Trends and Challenges in Visuo-Tactile Perception |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | In this one-day workshop we brought together experts from a diverse range of disciplines, encompassing engineers, computer scientists, cognitive scientists and sensor developers, to discuss topics relating to the fusion of vision and touch sensing, which is highly relevant to the ViTac project. I was the lead organiser of the workshop. |
Year(s) Of Engagement Activity | 2021 |
URL | http://wordpress.csc.liv.ac.uk/smartlab/icra-2021-vitac-workshop/ |