Deep Discoveries

Lead Research Organisation: National Archives
Department Name: Collection Care

Abstract

Situated within a network of research initiatives under the umbrella theme 'Towards a National Collection: Opening UK Heritage to the World', the Deep Discoveries project aims to contribute to the creation of a unified national heritage collection by creating a transformative image-searching platform. Our goal is to design a prototype app enabling cross-collection image linking by harnessing the ability of computer vision and deep learning methods to identify and recognise specific patterns without the need for preliminary integrated descriptive metadata. The project aims to create a radical shift in content discovery within and between our nation's digitised image repositories, allowing users to dissolve established physical and virtual barriers between these collections, opening cross-disciplinary modes of research and engagement, and generating new and unforeseen connections that lead to user-generated, disruptive, and (re)defined notions of 'national' heritage.
By employing the catalytic potential of ever more socially integrated artificial intelligence (AI) technologies for the benefit of opening up our national heritage collections and radically diversifying our visitor base, the project will ensure that these tools are directed toward the enhancement of the heritage economy and the wider social good. Crucially, the nature of deep learning architectures future-proofs the search platform, as it will be able to evolve and improve as the underpinning dataset grows. If successful, the creation of an image search platform able to continuously integrate new digital image repositories as they are generated by GLAM-sector organisations will have enormous benefits in making collections networked and openly discoverable across our virtual heritage landscape. Such an advancement will demonstrate the UK's commitment to cutting-edge technologies and shift the view of the museum/archive from a historical repository to a space of dynamic and emergent practices, inviting diverse users to weave new narratives from our collections.
To achieve our ambitious goals, we have formed a cross-disciplinary network of diverse institutional partners: the core team comprises researchers from the University of Surrey's Centre for Vision, Speech and Signal Processing, the Collection Care and Research departments of The National Archives, the Royal Botanic Garden Edinburgh, and the V&A Research Institute. We are also joined by three Project Partners who will support the project by offering access to parts of their digitised image collections: Gainsborough Weaving Studio, the Sanderson Design Archive, and the Museum of Domestic Design and Architecture. We will work together through a series of workshops, some of which will be held jointly with other projects from the Towards a National Collection programme. By surveying the current landscape of digital image users and working with a variety of stakeholders (public, academic, institutional), we will iteratively design a visual search platform that truly opens up UK heritage to the world.

Planned Impact

As a Foundational Project within the Towards a National Collection programme, Deep Discoveries offers initial research into a fundamentally transformational and disruptive new framework for networking and discovering digitised image collections across the nation. The project aims to develop a visual search technology able to integrate a variety of image collections, structured or unstructured, breaking down geospatial boundaries and offering both institutional and public stakeholders a new pathway to accessing digitised and born-digital visual collections. Like all projects within the programme, Deep Discoveries will employ advances in technology combined with innovative methodologies for assessing user engagement and audience diversification to establish the UK as a leader in networking its heritage collections in the virtual realm. In doing so, the project, and the programme as a whole, will work to foster UK innovation in the cultural sector, economic and technological competitiveness, and the deployment of advanced digital methods for social and public services.
The project will benefit public end-users of the developed search platform by offering new ways of accessing and discovering collections. Curators will benefit from a technology allowing them to reach new audiences. Both of these outcomes have clear benefits for GLAM-sector organisations, helping to broaden current audiences and put digitised content to new and fruitful uses. The cross-disciplinary and diverse network at the core of the project will help to form strategic and lasting relationships between IROs and the higher education sector, which are key to ensuring the continued success of research programmes conducted in organisations like The National Archives, the Royal Botanic Garden Edinburgh, and the V&A. Furthermore, our project will build bridges with private-sector companies which also hold large digitised image collections. Academic researchers in both STEM and humanities fields will benefit from the opening up of new avenues for research, whilst education professionals and students will be able to create exciting connections within image collections.
A core contribution of the project will be a final report outlining the State of the Art in Computer Vision Searching for Heritage Collections, which will have a significant impact on shaping future research in this area. This report will be used to provide clear, evidence-based policy recommendations on next steps in the field. To ensure that all of the stakeholders mentioned benefit from our work, we will develop the search technology platform through a user-centric R&D programme with the aid of a UX consultant. We will share insights and assess the state and availability of currently digitised visual collections, invite diverse stakeholders to test the proof-of-concept technology and feed into its modifications, and share the outcomes of the workshops via the final report, enabling beneficiaries and their associates to act on the insights generated. We will also ensure dissemination of the project through open-access peer-reviewed publications, presentations at conferences, and project updates and blogs for public audiences.

Publications

Lora Angelova (2021) Deep Discoveries Final Report

 
Description Deep Discoveries was a Towards a National Collection Foundation Project exploring the application of computer vision (CV) and explainable artificial intelligence (XAI) methods for enhancing the ability of general audiences and specialist researchers to discover visual collections in new and/or more effective ways. The team developed and user-tested a CV-based search platform that allowed users to visually articulate their search task, understand how the CV algorithm found similarity between their input image and the returned image results, and to carry out a 'visual dialogue' with the AI to refine their search further.
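The core of this 'visual dialogue' is showing users where the model found similarity. As a purely illustrative sketch (the project itself adopted Grad-CAM, a more sophisticated gradient-based method), the snippet below scores each cell of a result image's spatial feature grid against a pooled query descriptor; the function name, shapes, and numpy-only setup are assumptions for illustration, not the project's code:

```python
import numpy as np

def similarity_heatmap(query_desc, result_grid):
    """Score each spatial cell of a result image's feature grid (H, W, C)
    against a query descriptor (C,). Higher values mark the regions most
    responsible for the match between the two images."""
    q = query_desc / (np.linalg.norm(query_desc) + 1e-8)
    g = result_grid / (np.linalg.norm(result_grid, axis=-1, keepdims=True) + 1e-8)
    return g @ q  # cosine similarity per cell, shape (H, W)
```

Upsampled to image resolution and overlaid as a colour map, such a grid produces the kind of heatmap a user can inspect to see why two images were judged similar.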
The UX research suggested that the CV search prototype should provide both discovery-driven and research-specific capabilities, and guide users to understand how these operate. The prototype should allow users to articulate their search criteria based on visual facets, though questions around user intent and the training of the search algorithm would require the team to investigate the meaning of terms such as 'motif' or 'style' for different user groups. We instead chose to untether the technology from broadly defined notions of visual facets and create a platform that allows users to visualise how the AI determined similarity between their query image and the returned images. Thereafter, users can select areas of interest on the result and query images in an iterative fashion in order to visually articulate their task and discover new content or hone in on specific images.
We investigated several models for image feature extraction to power the visual search: three network architectures trained for semantic classification on the ImageNet dataset, and three style-based models trained to discriminate fine-grained styles in data collected from behance.net. We found that features extracted from all of these models are suitable for our dataset. We adopted the Grad-CAM method to explain the visual search results, which enabled us to present users with heatmaps highlighting the image regions responsible for a match. Next, we introduced a patch-based retrieval approach for visual search. The key advantage of this approach is that users can give local feedback on a retrieved image (as a mask drawn using a brush-like tool) and the system can incorporate this feedback into the current search. This not only allows visual discovery within the image collection but also helps to disentangle the user's intention during visual search.
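The mask-feedback idea can be sketched minimally as follows, assuming per-image spatial feature grids such as a CNN backbone would produce. All names here are hypothetical illustrations, not the project's implementation: a user-drawn mask pools the feature grid into a region descriptor, which is then matched against precomputed gallery descriptors.

```python
import numpy as np

def masked_descriptor(feature_grid, mask):
    """Average-pool a spatial feature grid (H, W, C) over a binary mask
    (H, W), e.g. the region a user painted with a brush-like tool."""
    w = mask[..., None].astype(float)
    return (feature_grid * w).sum(axis=(0, 1)) / (w.sum() + 1e-8)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve(query_desc, gallery_descs, top_k=3):
    """Rank gallery descriptors by cosine similarity to the query descriptor."""
    scores = [(i, cosine(query_desc, d)) for i, d in enumerate(gallery_descs)]
    return sorted(scores, key=lambda s: -s[1])[:top_k]
```

Each round of the 'visual dialogue' would then re-pool the grid over the newly drawn mask and re-run the retrieval, so the ranking follows the region the user cares about rather than the whole image.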
Unmoderated testing of the prototype showed that close to 85% of respondents agreed that exploring using an image, rather than a keyword, would be useful when engaging with GLAM collections online, and all agreed that visual search would be a helpful tool. The majority of testers understood the XAI function of the prototype, including the ability to 'see' how the AI read similarity between the query and returned images (>85%), and the ability to modify their search criteria based on this information (>80%). The ability to select areas within the query and returned images was seen as 'user-friendly' and 'useful'. Users found the process of searching with multiple images less intuitive, though most agreed it would be 'useful' (>30%) or 'interesting' (>55%). Several users noted that the selection feature across several images would create opportunities to discover new collections, and new types of collections, as well as to make connections across different cultures and time periods.
Moderated testing of the prototype revealed that users wanted to understand the scale of the image collections they could search across, as well as the ability to aggregate, curate, and save their own 'collections' from different institutions in one place. Metadata and re-use permissions were key for all testers, as was the ability to navigate to the collection website from which the images originated so as to gain more context and information. We found that testers in these sessions struggled both to understand and to employ the XAI technology: all users found the heatmaps confounding though fascinating (what the AI finds similar is quite different from what our testers found similar between images). The ability to enter into a dialogue with the AI by modifying the returned images was also not picked up by the testers, though this may be attributed to the prototype design and the novelty of the technology. Most of the testers intuited both the selection tool that allowed them to refine the search and searching with several images simultaneously. The latter seemed natural to testers and allowed for creative deployment of the technology. Testers were interested in being able to refine by (1) negative filtering (e.g. 'show less like this result'); (2) mixed visual and semantic search, for example additional filtering based on media type, time period, location, collecting institution, or type of collection; and (3) semantically articulated, well-defined visual facets like 'colour'.
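The first of these requests, negative filtering, has a simple possible realisation as re-scoring: subtract a weighted similarity to a 'show less like this' example from each candidate's score. This is a hedged sketch of one way such a feature could work, not the project's design; the function name and the `alpha` weighting are illustrative assumptions.

```python
import numpy as np

def rerank_with_negative(query_desc, neg_desc, gallery_descs, alpha=0.5):
    """Re-score gallery items: similarity to the query minus a weighted
    penalty for similarity to a 'show less like this' negative example."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    scored = [(i, cos(query_desc, d) - alpha * cos(neg_desc, d))
              for i, d in enumerate(gallery_descs)]
    return sorted(scored, key=lambda s: -s[1])
```

Items resembling the negative example sink down the ranking while everything else keeps its relative order, which matches the 'show less like this result' behaviour testers described.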
Computer vision search was welcomed by all users as an exciting addition to their search and discovery tasks. Testers highlighted the potential of this technology for enabling search and discovery for users with certain disabilities like dyslexia, or for lowering access barriers for non-native English speakers and those users who lack technical language to describe their query. Users with prior experience of commercial computer vision search platforms highlighted that they (1) had more trust in the images and metadata returned in an institutionally-driven search platform, and (2) liked the lack of commercially-driven motivation in the returned results.
Exploitation Route The final project report outlines a number of recommendations for the use and deployment of computer vision search within the context of a national cross-collection search platform. Others can re-use the code for the backend and frontend of the computer vision search platform we developed. Further research can build on our findings on how users interact with 'explainable AI' methods for computer vision search, and on whether a dialogue between the user and the algorithm is a fruitful approach to creating a more task-specific search method. In addition, developments in computer vision search methods in which the algorithm is re-trained on the fly based on live feedback from users' input could help to work around existing biases in the image training sets used for computer vision searching.
Sectors Creative Economy,Digital/Communication/Information Technologies (including Software),Leisure Activities, including Sports, Recreation and Tourism,Culture, Heritage, Museums and Collections

URL https://www.nationalcollection.org.uk/sites/default/files/2021-10/Deep%20Discoveries%20Final%20Report%20.pdf
 
Description Our project aimed to interact with general audiences through a public survey and unmoderated testing of the computer vision search prototype. This work was carried out through a number of webinars, and through survey and testing invitations sent out via several mailing lists and publicised on social media. The work allowed members of the public who interacted with the prototype and the outputs to familiarise themselves with visual search (as many were not at all familiar with this technology), and to reflect on how it could be employed in their day-to-day lives and professional settings. The project also led to significant idea development for the project's computer vision team, who will continue to explore the possibility of developing a responsive computer vision algorithm, capable of being re-trained on the fly as users interact with the returned images from each search task.
First Year Of Impact 2021
Sector Digital/Communication/Information Technologies (including Software),Culture, Heritage, Museums and Collections
Impact Types Cultural

 
Title Survey Results - Search and Discovery of Visual Collections 
Description Results from an online survey that asked users of visual collections how they search for and discover this content, as well as how they use it. The survey was especially interested in users' experience with reverse image searching and computer vision search platforms. We had approximately 200 responses, and the data is presented in Appendix 2 of the final project report.
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact The dataset informed decision-making around the creation of a computer vision search platform, created as a test case for the Deep Discoveries project. The platform aimed to allow users to have a visual dialogue with the AI, in order to articulate their visual search task more effectively. 
URL https://www.nationalcollection.org.uk/sites/default/files/2021-10/Deep%20Discoveries%20Final%20Repor...
 
Description Gainsborough 
Organisation Gainsborough Silk Weaving Company Limited
Country United Kingdom 
Sector Private 
PI Contribution Deep Discoveries is carrying out research into the application of computer vision (CV) search tools for enhancing the ability of general audiences and specialist researchers to discover visual collections in new and/or more effective ways. Our starting point for the project was to explore this concept through the lens of a designer or visual collections specialist working in/with design archives. Including smaller design archives and private organisations like the Gainsborough Silk Weaving Company in our project, from image dataset creation to project meetings, workshops, surveys, and user testing, will have an impact on the organisation's strategy for making their collections more accessible, available, and searchable online. Our approach and research findings will help the Gainsborough Silk Weaving Company enhance customer access to, and engagement with, their collections online, extending their reach beyond their physical location.
Collaborator Contribution The Gainsborough Silk Weaving Company collections feature weaving designs, many of which have a floral or botanical theme, which was the category our visual search algorithm narrowly focused on at the outset of the project. Inclusion of the Gainsborough Silk Weaving Company's images in Deep Discoveries enhanced the training set used for the search platform, aided in scoping issues around intellectual property and image re-use, and aided in evaluating the current digitised image landscape. The involvement of smaller institutions such as the Gainsborough Silk Weaving Company supports the project's and programme's aim of creating a national collection which includes data from a range of sources. The Gainsborough Silk Weaving Company contributed 760 images to the project, participated in the first workshop, and plans to participate in user testing and the second workshop of the project. They have also attended our all-team project meetings.
Impact Engagement focused website, blog or social media channel - Website; A talk or presentation - Interim Presentation; A formal working group, expert panel or dialogue - Interview - Europeana AI in GLAM; A talk or presentation - Poster presentation; Engagement focused website, blog or social media channel - Blog post; Participation in an activity, workshop or similar - Survey; Participation in an activity, workshop or similar - Workshop 1; Image dataset for algorithm training (not published); Multi-disciplinary - Computer Vision, User Experience Research, Heritage organisations (Archive, Museum, Botanic Garden)
Start Year 2020
 
Description MoDA 
Organisation Middlesex University
Department Museum of Domestic Design and Architecture
Country United Kingdom 
Sector Private 
PI Contribution Deep Discoveries is carrying out research into the application of computer vision (CV) search tools for enhancing the ability of general audiences and specialist researchers to discover visual collections in new and/or more effective ways. Our starting point for the project was to explore this concept through the lens of a designer or visual collections specialist working in/with design archives. Including smaller design archives and organisations like MoDA in our project, from image dataset creation to project meetings, workshops, surveys, and user testing, will have an impact on the organisation's strategy for making their collections more accessible, available, and searchable online. Our approach and research findings will help MoDA enhance public access to, and engagement with, their collections online, extending their reach beyond their physical location.
Collaborator Contribution MoDA's collections feature designs for wallpapers and textiles, many of which have a floral or botanical theme, which was the category our visual search algorithm narrowly focused on at the outset of the project. Inclusion of MoDA's images in Deep Discoveries both enhanced the training set used for the search platform and aided in evaluating the current digitised image landscape. The involvement of smaller institutions such as MoDA supports the project's and programme's aim of creating a national collection which includes data from a range of sources. MoDA contributed 1170 images to the project with associated metadata, e.g. (i) a short description of image content ('Design for a textile of red, blue and yellow flowers'); (ii) materials/technique used ('Watercolour on detail paper'); and (iii) production date. MoDA participated in the first workshop and plans to participate in user testing and the second workshop of the project. They have also attended our all-team project meetings.
Impact Engagement focused website, blog or social media channel - Website; A talk or presentation - Interim Presentation; A formal working group, expert panel or dialogue - Interview - Europeana AI in GLAM; A talk or presentation - Poster presentation; Engagement focused website, blog or social media channel - Blog post; Participation in an activity, workshop or similar - Survey; Participation in an activity, workshop or similar - Workshop 1; Image dataset used for algorithm training (not published); Multi-disciplinary - Computer Vision, User Experience Research, Heritage organisations (Archive, Museum, Botanic Garden)
Start Year 2020
 
Description Northumbria University 
Organisation Northumbria University
Department Northumbria School of Design
Country United Kingdom 
Sector Academic/University 
PI Contribution We have formed a subcontracted partnership with two professors and a recent graduate of the School of Design at Northumbria University to deliver the user experience and interface design for the Deep Discoveries computer vision search platform. The group will receive a direct financial contribution of up to £9999 for their work on the project, and will participate in our project research methods, second workshop, scoping exercises, and any publications or other outputs associated with the partnership.
Collaborator Contribution Our partners are providing consultation and research around user experience design and user interface design for the computer vision search platform proposed within the project, creating user stories and epics, and wireframe designs based on our research and their own literature review and experience. They will also support the design delivery and testing, as well as second stage design or exploratory research following user testing if needed.
Impact A talk or presentation - Interim Presentation; Series of epics and user stories based on our research using the platform Miro (not public); Series of wireframe designs for a computer vision search platform using Marvelapp (not public); Multi-disciplinary - Design, Computer Vision, User Experience Research, Heritage organisation
Start Year 2021
 
Description Royal Botanic Gardens Edinburgh 
Organisation Royal Botanic Garden Edinburgh (RBGE)
Country United Kingdom 
Sector Charity/Non Profit 
PI Contribution Within our project, three heritage organisations (The National Archives, The Victoria and Albert Museum, and the Royal Botanic Garden Edinburgh) are carrying out research with The University of Surrey into the application of computer vision (CV) search tools for enhancing the ability of general audiences and specialist researchers to discover visual collections in new and/or more effective ways. Our team is working with the Co-I from RBGE, who is contributing to the project's deliverable of a report scoping visual collection availability and readiness for ingestion into a visual search platform. Through their participation in the project, RBGE can contribute to the direction of the research, and will gain relevant insight into making their collections more accessible, available, and searchable online.
Collaborator Contribution The RBGE Co-I will produce a report on the state of digitised visual collections and access to them in the context of creating a vision search platform. This research is hugely important for the final report of the project, in outlining next steps and pipelines for helping organisations create online visual collections that can be explored by visual similarity search. RBGE also supplied 7435 images from various collections via a spreadsheet containing URL links for downloading the images and associated metadata such as scientific names, photographer, collector name and number, catalogue number, and licence links. The images are primarily botanically themed, including leaf and flower sketches and photographs of leaves, mountains, bushes, and the flowers, seeds, and fruit of plants.
Impact Engagement focused website, blog or social media channel - Website; A talk or presentation - Interim Presentation; A formal working group, expert panel or dialogue - Interview - Europeana AI in GLAM; A talk or presentation - Poster presentation; Engagement focused website, blog or social media channel - Blog post; Participation in an activity, workshop or similar - Survey; Image set for algorithm training (not published); Multi-disciplinary - Computer Vision, User Experience Research, Heritage organisations (Archive, Museum, Botanic Garden), School of Design
Start Year 2020
 
Description The V&A Museum 
Organisation Victoria and Albert Museum
Department Research Department
Country United Kingdom 
Sector Charity/Non Profit 
PI Contribution Within our project, three heritage organisations (The National Archives, The Victoria and Albert Museum, and the Royal Botanic Garden Edinburgh) are carrying out research with The University of Surrey into the application of computer vision (CV) search tools for enhancing the ability of general audiences and specialist researchers to discover visual collections in new and/or more effective ways. Our team is working with a digital lead from the V&A, who is contributing to the project's UX design and research. Through their participation in the project, the V&A can contribute to the direction of the research, and will gain relevant insight into making their collections more accessible, available, and searchable online.
Collaborator Contribution The V&A Co-I and digital research lead are contributing research related to current user access to and engagement with visual search, and to the design of a new visual similarity search platform. The digital research lead has carried out extensive research and planning with the UX researcher and the software developer from The National Archives, alongside facilitating our first workshop and survey and generating reports related to this activity. They are also involved in decision-making and planning with our design collaborators at Northumbria University. The V&A has also supplied 426 images from their collections, primarily of floral patterns and designs on paper, most featuring a botanical theme.
Impact Engagement focused website, blog or social media channel - Website; A talk or presentation - Interim Presentation; A formal working group, expert panel or dialogue - Interview - Europeana AI in GLAM; A talk or presentation - Poster presentation; Engagement focused website, blog or social media channel - Blog post; Participation in an activity, workshop or similar - Survey; Participation in an activity, workshop or similar - Workshop 1; Image dataset for algorithm training (not published); UX research literature report (not published); UX problem statements (not published); Prototype ideas for visual search platform (not published); Multi-disciplinary - Computer Vision, User Experience Research, Heritage organisations (Archive, Museum, Botanic Garden), School of Design
Start Year 2020
 
Description University of Surrey 
Organisation University of Surrey
Department Centre for Vision, Speech and Signal Processing
Country United Kingdom 
Sector Academic/University 
PI Contribution Within our project, three heritage organisations (The National Archives, The Victoria and Albert Museum, and the Royal Botanic Garden Edinburgh) are carrying out research with The University of Surrey into the application of computer vision (CV) search tools for enhancing the ability of general audiences and specialist researchers to discover visual collections in new and/or more effective ways. The heritage partners are contributing images, digitised-image expertise, a humanities-based lens on the application of AI and computational research to heritage, and user experience and design research to create a search platform based on the computer vision search algorithms designed by the University of Surrey.
Collaborator Contribution The University of Surrey partners are experts in visual search and visual similarity search methods using deep neural networks. They are using the images contributed by our network, along with their visual and semantic-based algorithms to develop a novel way of searching across multi-media visual collections. They are also developing an annotation method and platform for the algorithm training.
Impact Engagement focused website, blog or social media channel - Website; A talk or presentation - Interim Presentation; A formal working group, expert panel or dialogue - Interview - Europeana AI in GLAM; A talk or presentation - Poster presentation; Engagement focused website, blog or social media channel - Blog post; Participation in an activity, workshop or similar - Survey; Participation in an activity, workshop or similar - Workshop 1; Image dataset for algorithm training (not published); Computer vision algorithms (not published); Image annotation platform (not public); explainable AI algorithms (not public); Multi-disciplinary - Computer Vision, User Experience Research, Heritage organisations (Archive, Museum, Botanic Garden), School of Design
Start Year 2020
 
Title Computer Vision Search Web Platform 
Description On this website, users can test the computer vision search platform developed during the Deep Discoveries project. The web app allows the user to either upload an image or use one of the pre-selected starter images to search a collection of mostly botanically themed images collated from partner organisations. The user can then see which aspects of the query image the algorithm found similar in the result images, and modify their search by selecting areas of interest on the result images. The backend and frontend code for the app are available on GitHub: https://github.com/tanc-ahrc/deep-discoveries-backend/ https://github.com/tanc-ahrc/deep-discoveries-frontend/
Type Of Technology Webtool/Application 
Year Produced 2021 
Impact The demo web app was used throughout the final stages of the project during moderated user testing and engagement and dissemination activities. The app allowed the team to communicate the outputs of the research, receive feedback from professional and public audiences, and provide recommendations in the final report. The information gathered from user testing with the web app informed the future research directions pursued by the project's computer vision team.
URL https://tanc-ahrc.github.io/deep-discoveries-frontend/
 
Description AURA presentation 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The project software engineer, with support from the PI, presented a talk (Towards Computer Vision Search and Discovery of our National Collection: Challenges and Prospects in Accessing Image Collections) and participated in a break-out room discussion at the Archives in the UK/Republic of Ireland and AI (AURA) Network's second workshop on 'AI and Archives: Current Challenges and Prospects of Digital and Born-digital archives'. The presentation sparked valuable discussion with a network of diverse professionals around the challenges of presenting digital visual archival collections.
Year(s) Of Engagement Activity 2021
URL https://www.aura-network.net/events/ai-and-archives-current-challenges-and-prospects-of-born-digital...
 
Description Blog post 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact A blog about the project hosted on The National Archives UK website, which led to requests for more information about the project from students and media.
Year(s) Of Engagement Activity 2020
URL https://blog.nationalarchives.gov.uk/deep-discoveries-exploring-a-new-way-of-discovering-and-connect...
 
Description Closing Webinar 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact A webinar detailing the research methods and outcomes of the Deep Discoveries project. Four talks were presented, one by each strand of the project: the PI, our team of computer vision experts, UX and UI researchers, and the interaction design specialists who developed a visual similarity search platform to test the potential of this technology for creating new avenues for audiences to discover and research cultural heritage collections online. A lively discussion took place during the presentations, followed by a long Q&A in which ideas were exchanged and new connections made.
Year(s) Of Engagement Activity 2021
URL https://www.youtube.com/playlist?list=PLRIxrpy54RHZbkN4GVqAaLelZh84J524b
 
Description Computer Vision and Heritage: Opportunities for Research and Engagement 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact A set of talks was organised to explore the opportunities for research and engagement afforded by computer vision when applied to heritage collections. Three talks were presented by researchers carrying out this kind of work, followed by a lively discussion that created opportunities for new collaborations and idea development.
Year(s) Of Engagement Activity 2021
URL https://www.youtube.com/playlist?list=PLRIxrpy54RHY3HduZtjUHitsIGl8IljFb
 
Description Discussion panel: Masterpiece International Art Fair Symposium 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact The Masterpiece International Art Fair themed its symposium on Museums, Research and Discovery. Its panel on Modes of Discovery focused on how sharing data between institutions and with the public can lead to types of discovery that might not otherwise be possible. My contribution to the discussion explored collaboration between collections, provenance, public participation in research, how technologies such as machine learning, computer vision and crowdsourcing platforms can generate new ways of understanding and interacting with collections, and how community-generated digital content can be linked to established collections. 70 people attended the online panel and the break-out sessions that followed. The organisers reported a high level of engagement.
Year(s) Of Engagement Activity 2022
URL https://us06web.zoom.us/meeting/register/tZUvduGhrD8iGNONAcct528BGY8dMI9ejcDO
 
Description HUMAN VISION/COMPUTER VISION: MAKING SENSE OF ART 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact A conference aimed at bridging the conversation between art historians and computational scientists around computer vision and art-historical research. The two-day event included a day for graduate and postgraduate students and a day of talks by experts in the field.
Year(s) Of Engagement Activity 2022
URL https://www.lboro.ac.uk/subjects/communication-media/news-events/events/2022/human-vision-computer-v...
 
Description Interim Presentation 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact A talk given as part of Towards a National Collection's presentation of interim findings from its Foundation Projects. The PI presented a half-hour summary of the project's progress, challenges, and findings, and engaged in a fruitful discussion with attendees about visual similarity search and users' needs and intentions in discovery-driven versus motivated searches. Attendees noted that the presentation was very thorough and clear and gave them a full picture of the project's aims and progress.
Year(s) Of Engagement Activity 2021
URL https://www.eventbrite.co.uk/e/foundation-projects-heritage-connector-deep-discoveries-tickets-13858...
 
Description Interview 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Postgraduate students
Results and Impact Our team's software developer and UX researcher was interviewed by an Aberystwyth University Archive Administration MA student about the different ways archival institutions are using emerging technologies. The information was used by the student for their dissertation.
Year(s) Of Engagement Activity 2020
 
Description Interview - Europeana AI in GLAM 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The PI of the project, with input from the Co-I and PDRA at the University of Surrey, was interviewed by a member of Europeana's AI in GLAMs special interest group. The interview was transcribed and used in the group's high-level interim report on the current AI in GLAMs landscape, which also drew on a survey that the Deep Discoveries PI contributed to.
Year(s) Of Engagement Activity 2020
URL https://pro.europeana.eu/files/Europeana_Professional/Europeana_Network/Europeana_Network_Task_Force...
 
Description Panel discussion - The Interface(s) of a Virtual National Collection 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact A presentation and panel discussion by the Deep Discoveries team alongside two other Towards a National Collection Foundation Projects, during the DARIAH Annual Event 2021, which focused on Interfaces. The talk was a reflection on how the creation of the interface for the visual search prototype allowed a multi-disciplinary team to work together despite very different approaches to research. We argued that the prototype acted as a boundary object in the context of the project.
Year(s) Of Engagement Activity 2021
URL https://dariah-2021.sciencesconf.org/354441
 
Description Poster presentation 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact A poster presentation about the project, presented virtually after The National Archives' 2020 Annual Digital Lecture, 'The death of anonymity in the age of identity'. A Twitter engagement session was held with poster presenters, though no questions were directed toward our poster.
Year(s) Of Engagement Activity 2020
URL https://www.youtube.com/watch?v=76ndyp_IWQw
 
Description Presentation to the Alan Turing Computer Vision Special Interest Group 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact A presentation of the computer vision aims and research methods developed by the Deep Discoveries project to an Alan Turing Institute special interest group studying the applications of computer vision in the humanities. The format of the talk allowed for a 30-45 minute discussion and for participants to test the computer vision search platform developed by the project. This created excellent opportunities to develop further ideas and to visualise how this method could be applied to other visual collections.
Year(s) Of Engagement Activity 2022
URL https://www.turing.ac.uk/research/interest-groups/computer-vision-digital-heritage
 
Description Survey 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact An anonymous online survey was used to collect information about the search and discovery behaviours of users of online visual collections, especially heritage collections, and their engagement with existing visual search platforms. The survey was hugely useful in demonstrating how creatives and the general public use visual search and what they find useful and not so useful, and it significantly informed our project's search algorithm design, user interface design, and final report.
Year(s) Of Engagement Activity 2021
URL https://www.smartsurvey.co.uk/s/DeepDiscoveries/
 
Description Unmoderated Testing of Computer Vision Search platform 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact A website was set up that presented a static version of the computer vision search platform developed by the project, in order to assess the usability of the system. The unmoderated testing was conducted over the course of one month, and the results helped inform the final recommendations presented in the project report.
Year(s) Of Engagement Activity 2021
URL https://app.maze.co/report/ks3fcjpdkqc3mqyh#intro
 
Description Walkthrough of the Deep Discoveries live Visual Search prototype 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact A video created to demonstrate the process of designing a visual search prototype, as well as a walkthrough of the final prototype design. The video provided a visual, easy-to-grasp explanation of the research process and of the collaboration between the different teams in the project (computer scientists, interaction designers, user researchers).
Year(s) Of Engagement Activity 2021
URL https://www.youtube.com/watch?v=gEuU_zf223g
 
Description Website 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact We have created a project website hosted on GitHub Pages, where we describe the aims of the project and its timelines, and post upcoming events and published blog posts. The website is a useful portal for people interested in learning more about our project.
Year(s) Of Engagement Activity 2020
URL https://tanc-ahrc.github.io/DeepDiscoveries/
 
Description Workshop 1 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact The purpose of the workshop was to understand current and potential users' goals and behaviours when working with visual collections, taking a discussion with collection experts working in design archives or similar as a starting point. We had 12 attendees from several organisations, who presented short introductions to their collections and then discussed the following questions in breakout groups:
Who uses graphic collections and who is excluded?
What are the real and perceived barriers to user access and discovery of these collections?
How would users benefit from visual search of a 'National Collection', and what are the barriers/limitations?
How might visual search help users overcome existing barriers and help reach new audiences?
The discussion and ideas were recorded using FunRetro, and all materials were then shared with participants and project partners (with permission from attendees). The workshop input was used to create a follow-up survey in the project, as well as to inform user experience research around search platform 'problem statements' and design.
Year(s) Of Engagement Activity 2020