ViTac: Visual-Tactile Synergy for Handling Flexible Materials
Lead Research Organisation:
University of Liverpool
Department Name: Computer Science
Abstract
Handling flexible materials is common in industrial, domestic and retail applications, e.g., evaluating new fabric products in the fashion industry, sorting clothes at home and replenishing shelves in a clothing store. Such tasks have been highly dependent on human labour and remain challenging for autonomous systems due to the complex dynamics of flexible materials. This proposal aims to develop a new visuo-tactile integration mechanism for estimating the dynamic states and properties of flexible materials while they are being manipulated by robot hands. This technique has the potential to revolutionise autonomous systems for handling flexible materials, allowing their automated handling to be included in larger automated production processes and process management systems. While the initial system to be developed in this work targets textiles, the same technology could be applied to other flexible materials, including fragile products in the food industry, flexible objects in manufacturing and hazardous materials in healthcare.
Planned Impact
"AI and the data-driven economy" has been set out as the first of the four Grand Challenges in the UK government's Industrial Strategy. Robotics and Autonomous Systems have been identified as key priority areas for tackling this Grand Challenge. Considerable resources are spent in industry on manipulating and evaluating flexible products, as emphasised by Unilever, and handling such objects is becoming a bottleneck to including their automated handling in larger automated processes. The goal of this project is to provide transformative solutions that help robots perform automated handling of flexible objects in the context of the manufacturing industry, which aligns well with the EPSRC "Productive Nation" priority and the "Manufacturing the Future" theme.
The project contributes to the EPSRC research areas of Robotics and AI (RAI) technologies and follows its strategic focus on the accelerated deployment of RAI technologies in next-generation manufacturing: the developed handling system will be implemented in autonomous material assessment and handling, which will be used by Unilever and potentially other companies. It will also address the "Challenge: Deliver safer, smarter engineering" of the Alan Turing Institute.
This project complements the existing EPSRC projects "MAN^3", "RAIN", "NCNR", "FAIR-SPACE" and "ORCA" on robotic handling of materials by proposing visuo-tactile coordination for handling flexible materials, and the EPSRC projects "Tactile superresolution sensing" and "NeuPRINTSKIN" by extending tactile sensing technologies to robotic handling applications. The project also complements the EU projects "CloPeMa", "I-DRESS" and "CLOTHILDE" by combining vision and touch for clothes perception and manipulation.
Publications



Chen Z
(2023)
Tacchi: A Pluggable and Low Computational Cost Elastomer Deformation Simulator for Optical Tactile Sensors
in IEEE Robotics and Automation Letters

Gomes D
(2021)
Generation of GelSight Tactile Images for Sim2Real Learning
in IEEE Robotics and Automation Letters

Jiang J
(2022)
A4T: Hierarchical Affordance Detection for Transparent Objects Depth Reconstruction and Manipulation
in IEEE Robotics and Automation Letters

Jiang J
(2023)
Where Shall I Touch? Vision-Guided Tactile Poking for Transparent Object Grasping
in IEEE/ASME Transactions on Mechatronics

Li M
(2023)
A CNN-LSTM model for six human ankle movements classification on different loads
in Frontiers in Human Neuroscience

Li M
(2022)
Facial Expressions-Controlled Flight Game With Haptic Feedback for Stroke Rehabilitation: A Proof-of-Concept Study
in IEEE Robotics and Automation Letters
Related Projects
Project Reference | Relationship | Related To | Start | End | Award Value |
---|---|---|---|---|---|
EP/T033517/1 | | | 11/04/2021 | 16/12/2021 | £402,545 |
EP/T033517/2 | Transfer | EP/T033517/1 | 17/12/2021 | 10/10/2024 | £311,503 |
Description | In this award, we have developed a set of sensor prototypes and algorithms that enable robots to handle flexible materials, a common task in industrial, domestic and retail applications. First, we have developed several vision-based tactile sensor prototypes named GelTip sensors. The GelTip uses a camera to capture the deformation of the soft membrane above it. Through this design, the GelTip can capture detailed information about the flexible materials in contact with the sensor, such as textures, force distribution and geometry. We have investigated different form factors of the sensor to make it suitable for handling flexible materials. The sensor has also been integrated with the robot's vision system, giving the robot a visuo-tactile sensing system to better perceive the state of the flexible materials. Furthermore, we have developed a simulation model for the GelTip sensors so that robot agents can be trained in simulation before being deployed in real experiments. This accelerates the training of robot agents and reduces the risk of damaging the tactile sensors through excessive use in experiments. In addition, we have developed a set of algorithms for robot perception of flexible materials and for better planning of manipulation actions when handling them. We have created a benchmark to compare the roles of vision, proprioceptive sensing and tactile sensing in these tasks, and designed Reinforcement Learning agents that learn from these data to manipulate flexible objects (an illustrative sketch of such a visuo-tactile policy follows this record). Interestingly, we have found that vision plays a significant role in perceiving the state of the flexible object, while tactile sensing at the robot gripper provides better cues for fine manipulation of these objects. |
Exploitation Route | The outcomes of this funding will be of interest to researchers and practitioners who work with flexible materials, sensor development, autonomous systems and artificial intelligence algorithms. The sensor prototypes and algorithms developed in this award can be used in various applications, e.g., using robots to evaluate new fabric products in the fashion industry, sort clothes at home and replenish shelves in a clothing store. |
Sectors | Agriculture Food and Drink Construction Digital/Communication/Information Technologies (including Software) Electronics Healthcare Manufacturing including Industrial Biotechnology Pharmaceuticals and Medical Biotechnology Retail
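As an illustration of the kind of visuo-tactile policy compared in the benchmark above, the following is a minimal PyTorch sketch of a policy network that fuses a camera image, a GelTip-style tactile image and proprioception before predicting an action. The network sizes, class names and late-fusion scheme are assumptions made for illustration, not the project's actual implementation.

```python
# Illustrative sketch only: a minimal visuo-tactile policy network of the kind
# used to compare vision, proprioception and touch for flexible-object handling.
# All sizes and the fusion scheme are assumptions, not the project's own code.
import torch
import torch.nn as nn

class VisuoTactilePolicy(nn.Module):
    def __init__(self, proprio_dim=7, action_dim=4):
        super().__init__()
        # Separate encoders for the external camera and the tactile image.
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.tactile_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Late fusion of vision, touch and proprioception before the action head.
        self.head = nn.Sequential(
            nn.Linear(32 + 32 + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim))

    def forward(self, rgb, tactile, proprio):
        feats = torch.cat(
            [self.vision_enc(rgb), self.tactile_enc(tactile), proprio], dim=-1)
        return self.head(feats)

# Example usage with dummy observations.
policy = VisuoTactilePolicy()
action = policy(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64), torch.rand(1, 7))
```

Ablating the vision or tactile branch of such a network is one simple way to compare the contribution of each modality, in the spirit of the benchmark described in the key findings above.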
Description | Sensory evaluation involves a group of human testers evaluating new products using human senses such as vision and touch. It is a necessary part of both product development and quality control for many companies. However, sensory evaluation may be biased by factors such as the fatigue of the human testers. In addition, intensive training has to be given to the testers to develop the requisite expertise for the job. Through this award, we have developed an autonomous handling system for estimating the properties of flexible materials in order to evaluate new products. This project has helped companies to mitigate the biases in sensory evaluation of products, to hold their positions amid intensified market competition and to contribute to their growth. |
First Year Of Impact | 2024 |
Sector | Manufacturing, including Industrial Biotechnology
Impact Types | Economic |
Description | Collaboration with Southeast University, China |
Organisation | Southeast University China |
Country | China |
Sector | Academic/University |
PI Contribution | Tubular objects such as test tubes are common in chemistry and life-science research laboratories, and robots that can handle them have the potential to accelerate experiments. Ideally, a robot would be trained to manipulate tubular objects in a simulator and then deployed in a real-world environment. However, it is still challenging for a robot to learn to handle tubular objects with a single sensing modality and to bridge the gap between simulation and reality. In this collaboration, together with our partners, we proposed a novel tactile-motor policy learning method to generalise tubular object manipulation skills from simulation to reality. In particular, we proposed a Sim-to-Real transferable in-hand pose estimation network that generalises to unseen tubular objects. The network uses a novel adversarial domain adaptation network that narrows the pixel-level domain gap for tactile tasks by introducing an attention mechanism and a task-related constraint (an illustrative sketch of this style of adversarial adaptation follows this record). |
Collaborator Contribution | The in-hand pose estimation network was further integrated by our partners into a Reinforcement Learning-based policy learning framework for robotic insert-and-pull-out manipulation tasks. The proposed method was applied to a human-robot collaborative tube-placing scenario and a robotic pipetting scenario. The experimental results demonstrate the generalisation capability of the learned tactile-motor policy for tubular object manipulation in research laboratories. |
Impact | Yongqiang Zhao, Xingshuo Jing, Kun Qian, Daniel Fernandes Gomes, Shan Luo, Skill generalization of tubular object manipulation with tactile sensing and Sim2Real learning, Robotics and Autonomous Systems, Volume 160, 2023, 104321, ISSN 0921-8890, https://doi.org/10.1016/j.robot.2022.104321. |
Start Year | 2021 |
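The following is a minimal sketch of the general technique referred to above: adversarial domain adaptation with a gradient-reversal layer, so that the tactile features used for in-hand pose estimation become indistinguishable between simulated and real images. The class names, dimensions and heads are illustrative assumptions, not the published network, which additionally uses an attention mechanism and a task-related constraint.

```python
# Illustrative sketch only: adversarial domain adaptation via gradient reversal
# for Sim2Real tactile pose estimation. Names, sizes and heads are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        # Reverse (and scale) gradients so the feature extractor learns
        # domain-invariant tactile features.
        return -ctx.lam * grad, None

class Sim2RealPoseNet(nn.Module):
    def __init__(self, pose_dim=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pose_head = nn.Linear(32, pose_dim)   # in-hand pose regression
        self.domain_head = nn.Linear(32, 2)        # sim-vs-real classifier

    def forward(self, tactile_img, lam=1.0):
        f = self.features(tactile_img)
        return self.pose_head(f), self.domain_head(GradReverse.apply(f, lam))

# Example forward pass on a dummy tactile image.
net = Sim2RealPoseNet()
pose, domain_logits = net(torch.rand(1, 3, 64, 64))
```

Training would minimise the pose loss on simulated (labelled) data while the domain classifier, fed through the reversed gradients, pushes the features of simulated and real tactile images together.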
Description | Collaboration with Tsinghua University, China |
Organisation | Tsinghua University China |
Country | China |
Sector | Academic/University |
PI Contribution | Simulation is widely applied in robotics research to save time and resources. Several previous works have simulated optical tactile sensors using either a smoothing method or the Finite Element Method (FEM). However, the former does not consider elastomer deformation physics, whereas the latter requires massive computational resources such as a computer cluster. In this work, we proposed Tacchi, a pluggable and low-computational-cost simulator for optical tactile sensors built on the Taichi programming language. It reconstructs elastomer deformation using particles, which allows deformed elastomer surfaces to be rendered into tactile images and reveals contact information without incurring high computational costs (an illustrative particle-deformation sketch follows this record). Tacchi facilitates creating realistic tactile images in simulation, e.g., ones that capture wear-and-tear defects on object surfaces. |
Collaborator Contribution | Our collaborators integrated Tacchi with robotics simulators for whole robot-system simulation. Experimental results showed that Tacchi produces images with better similarity to real images and achieves higher Sim2Real accuracy than existing methods. Moreover, it can be connected to MuJoCo and Gazebo while requiring only 1 GB of GPU memory, in contrast to the computer cluster required for FEM. With Tacchi, physical robot simulation with optical tactile sensors becomes possible. |
Impact | Chen, Z., Zhang, S., Luo, S., Sun, F. and Fang, B., 2023. Tacchi: A pluggable and low computational cost elastomer deformation simulator for optical tactile sensors. IEEE Robotics and Automation Letters, 8(3), pp.1239-1246. |
Start Year | 2022 |
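To make the particle-based idea concrete, the following is a minimal sketch in the Taichi language of an elastomer surface represented as a grid of particles pressed by a spherical indenter; the resulting depth map could then be shaded into a synthetic tactile image. The grid size, contact model and indenter are illustrative assumptions; Tacchi itself is considerably more sophisticated.

```python
# Illustrative sketch only: a particle-grid elastomer surface pressed by a
# spherical indenter, in the spirit of particle-based tactile simulation.
import taichi as ti

ti.init(arch=ti.cpu)

n = 128                                          # particles per side of the surface
height = ti.field(dtype=ti.f32, shape=(n, n))    # surface depth map (0 = undeformed)

@ti.kernel
def press_sphere(cx: ti.f32, cy: ti.f32, r: ti.f32, depth: ti.f32):
    for i, j in height:
        # Particle position on the sensor surface, normalised to [0, 1] x [0, 1].
        x = i / (n - 1.0)
        y = j / (n - 1.0)
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        if d2 < r * r:
            # Push particles down to the indenter surface (spherical cap),
            # keeping whichever is deeper: the current surface or the sphere.
            height[i, j] = ti.min(height[i, j],
                                  (r - depth) - ti.sqrt(r * r - d2))

press_sphere(0.5, 0.5, 0.2, 0.05)
# height.to_numpy() now holds a deformed surface that could be shaded into a
# first-approximation synthetic tactile image.
```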
Description | Collaboration with Unilever R&D Port Sunlight |
Organisation | Unilever |
Department | Unilever UK R&D Centre Port Sunlight |
Country | United Kingdom |
Sector | Private |
PI Contribution | Our team have worked with researchers at Unilever Port Sunlight to apply our expertise in robot perception to assessing fabric and hair care products with an artificial sense of touch. In this collaboration, our team developed a bespoke vision-based tactile sensor that can slide over hair and fabric samples, much as a human finger does when assessing how these objects feel, and collect data to analyse their mechanical properties and align them with human touch-feel attributes. |
Collaborator Contribution | In this collaboration, our partners at Unilever provided their expertise and experience in assessing fabric and hair samples. When assessing new formulas for their fabric and hair care products, Unilever recruit a Human Sensory Panel to evaluate the properties of fabric or hair samples treated with the new formulas and to score their touch feel. Our partners provided their data and experimental protocols for assessing the fabric and hair samples, as well as their expertise in analysing the data from these assessments. |
Impact | - Sensor prototypes used for the assessment of fabric and hair samples at Unilever Port Sunlight - Further funding from Unilever - Datasets collected from the experiments in assessing fabric and hair samples |
Start Year | 2021 |
Description | Collaboration with University of Leeds, UK |
Organisation | University of Leeds |
Country | United Kingdom |
Sector | Academic/University |
PI Contribution | To assist robots in teleoperation tasks, haptic rendering, which allows human operators to experience a virtual sense of touch, has been developed in recent years. Most previous haptic rendering methods rely strongly on data collected by tactile sensors. However, tactile data is not widely available for robots due to their limited reachable space and the restrictions of tactile sensors. To eliminate the need for tactile data, in this work we proposed a novel method, named Vis2Hap, to generate haptic rendering from visual inputs that can be obtained from a distance without physical interaction. We take the surface texture of objects as the key cue to be conveyed to the human operator. To this end, a generative model is designed to simulate the roughness and slipperiness of the object's surface. To embed haptic cues in Vis2Hap, we use height maps from tactile sensors and spectrograms from friction coefficients as the intermediate outputs of the generative model (an illustrative sketch of this cross-modal generator follows this record). Once Vis2Hap is trained, it can be used to generate height maps and spectrograms of new surface textures, from which a friction image can be obtained and displayed on a haptic display. A user study demonstrated that Vis2Hap enables users to experience a realistic haptic feeling similar to that of physical objects. The proposed vision-based haptic rendering has the potential to enhance human operators' perception of the remote environment and facilitate robotic manipulation. |
Collaborator Contribution | Our collaborator at the University of Leeds Prof. Ningtao Mao is an expert in textile engineering, who provided us their expertise in designing the experiments with the haptic device for rendering textiles. |
Impact | Cao, G., Jiang, J., Mao, N., Bollegala, D., Li, M. and Luo, S., 2023, May. Vis2hap: Vision-based haptic rendering by cross-modal generation. In 2023 IEEE International Conference on Robotics and Automation (ICRA) (pp. 12443-12449). |
Start Year | 2022 |
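The following is a minimal sketch of a cross-modal generator of the kind described above: a shared visual encoder with two decoder heads producing a tactile height map and a friction spectrogram as intermediate outputs. Layer sizes and names are illustrative assumptions, not the published Vis2Hap architecture, which is trained as a full generative model with appropriate losses.

```python
# Illustrative sketch only: a cross-modal generator mapping a visual texture
# image to a tactile height map and a friction spectrogram. All names and
# sizes are assumptions, not the published Vis2Hap network.
import torch
import torch.nn as nn

class Vis2HapSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder over the visual surface-texture image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())

        def decoder():
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

        self.height_head = decoder()     # tactile-sensor height map
        self.spectro_head = decoder()    # friction-coefficient spectrogram

    def forward(self, rgb_texture):
        z = self.encoder(rgb_texture)
        return self.height_head(z), self.spectro_head(z)

# Example usage with a dummy texture image.
model = Vis2HapSketch()
height_map, spectrogram = model(torch.rand(1, 3, 128, 128))
```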
Description | 2021 UK-China Symposium on Advanced Manufacturing |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | Around 100 researchers attended the UK-China Symposium on Advanced Manufacturing online. The programme included talks from different fields of robotics and manufacturing, and I gave the keynote talk at the event.
Year(s) Of Engagement Activity | 2021 |
Description | Invited keynote talk at the French research group in Robotics (GdR) open days |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to give a keynote talk at the French research group in Robotics (GdR) open days. Over 100 researchers attended the open days; the talk sparked questions and discussion afterwards, and attendees reported gaining fresh perspectives on the research topics covered in the event.
Year(s) Of Engagement Activity | 2022 |
URL | https://www.gdr-robotique.org/ |
Description | Invited talk at the Centre for Vision, Speech and Signal Processing (CVSSP), Surrey University |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Professional Practitioners |
Results and Impact | I gave an invited talk at the Centre for Vision, Speech and Signal Processing (CVSSP), Surrey University.
Year(s) Of Engagement Activity | 2021 |
Description | Invited talk at the Sino-EU Conference on Intelligent Robots and Automation, 2021 |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I gave an invited talk at the Sino-EU Conference on Intelligent Robots and Automation, 2021. |
Year(s) Of Engagement Activity | 2021 |
Description | Invited talk at the University of Birmingham |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Professional Practitioners |
Results and Impact | I gave an invited talk at the Intelligent Robotics lab at the University of Birmingham. |
Year(s) Of Engagement Activity | 2021 |
Description | Participation in an activity, workshop or similar - EPSRC Circular economy and ICT engagement and networking workshop |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Professional Practitioners |
Results and Impact | This event brought together the information and communication technologies (ICT) and circular economy research communities. It fostered relationships to encourage greater consideration of circularity and resource efficiency in ICT research and research outcomes, increased awareness and understanding of the role of ICT research in achieving a circular economy, identified community highlights and future priorities to feed into related EPSRC and UK Research and Innovation (UKRI) strategy development, and explored the barriers to delivering high-impact, interdisciplinary research in this area.
Year(s) Of Engagement Activity | 2023 |
URL | https://www.ukri.org/events/circular-economy-and-ict-engagement-and-networking-workshop/ |
Description | Participation in an activity, workshop or similar - Invited talk at the 4th UK Manipulation Workshop, 2023 |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Postgraduate students |
Results and Impact | I gave a talk on the visuo-tactile perception research from this ViTac award at the 4th UK Manipulation Workshop.
Year(s) Of Engagement Activity | 2023 |
URL | https://www.robot-manipulation.uk/ |
Description | Participation in an activity, workshop or similar - Invited talk at the Embodied Intelligence Conference, 2024 |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I gave a talk on the visuo-tactile perception research from this ViTac award at the 2024 Embodied Intelligence Conference.
Year(s) Of Engagement Activity | 2024 |
URL | https://embodied-intelligence.org/ |
Description | Participation in an activity, workshop or similar - Invited talk at the RSS "Interdisciplinary Exploration of Generalizable Manipulation Policy Learning: Paradigms and Debates" workshop |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I gave a talk on the visuo-tactile perception research from this ViTac award at the RSS "Interdisciplinary Exploration of Generalizable Manipulation Policy Learning: Paradigms and Debates" workshop.
Year(s) Of Engagement Activity | 2023 |
URL | https://ai-workshops.github.io/interdisciplinary-exploration-of-gmpl/ |
Description | Participation in an activity, workshop or similar - UK-RAS meeting |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Professional Practitioners |
Results and Impact | I participated in the UK-RAS meeting in Leeds in 2022.
Year(s) Of Engagement Activity | 2022 |
Description | Participation in an activity, workshop or similar - ViTac 2023: Blending Virtual and Real Visuo-Tactile Perception |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I organise the "ViTac 2023: Blending Virtual and Real Visuo-Tactile Perception" workshop at the International Conference on Robotics and Automation, London, 2023. I also give a talk at the workshop on our research done in the ViTac award. |
Year(s) Of Engagement Activity | 2023 |
URL | https://shanluo.github.io/ViTacWorkshops/ |
Description | Participation in an activity, workshop or similar - ViTac 2024: Towards Robot Embodiment with Visuo-Tactile Intelligence |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I organise the "ViTac 2024: Towards Robot Embodiment with Visuo-Tactile Intelligence" workshop at the International Conference on Robotics and Automation, Tokohama, Japan, 2024. I also give a talk at the workshop on our research done in the ViTac award. |
Year(s) Of Engagement Activity | 2024 |
URL | https://shanluo.github.io/ViTacWorkshops/ |
Description | ViTac 2021: Trends and Challenges in Visuo-Tactile Perception |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | In this one-day workshop we brought together experts from a diverse range of disciplines, encompassing engineers, computer scientists, cognitive scientists and sensor developers, to discuss topics relating to the fusion of vision and touch sensing, which is highly relevant to the ViTac project. I was the lead organiser of the workshop.
Year(s) Of Engagement Activity | 2021 |
URL | http://wordpress.csc.liv.ac.uk/smartlab/icra-2021-vitac-workshop/ |