Acquiring Complete and Editable Outdoor Models from Video and Images

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

Imagine being able to take a camera out of doors and use it to capture 3D models of the world around you. The landscape at large, including valleys and hills replete with trees, rivers, waterfalls, fields of grass, clouds; seasides with waves rolling onto shore here and crashing onto rocks over there; urban environments complete with incidentals such as lampposts, balconies, and the detritus of modern life. Imagine models that look and move like the real thing. Models that you can use to make up new scenes of your own, which you can control as you please, and render how you like. You can zoom in to see details, and out to get a wide impression.

This is an ambitious vision, and one that is well beyond current know-how. Our plan is to take a major step towards realising it. We will enable users to use video and images to capture large-scale scenes of selected types and populate them with models of trees, fountains, street furniture and the like, again carefully selecting the types of objects. We will provide software that recognises the sort of environment the camera is in, and the objects in that environment, so that moving 3D models can be created automatically.

This will prove very useful to our intended user group, the creative industries in the UK: film, games, and broadcast. Modelling outdoor scenes is expensive and time-consuming, and the industry recognises that video and images are excellent sources for making the models it uses. To help further, we will develop software that builds on the industry's current practice of acquiring survey shots of scenes, so that all data is used at many levels of detail. Finally, we will wrap all of our developments into a single system that shows that the acquisition, editing and control of complete outdoor environments is one step closer.

Publications

Aittala M (2015) Two-shot SVBRDF capture for stationary materials in ACM Transactions on Graphics

Bradbury G (2015) Guided Ecological Simulation for Artistic Editing of Plant Distributions in Natural Scenes in Journal of Computer Graphics Techniques (JCGT)

Elek O (2017) Scattering-aware texture reproduction for 3D printing in ACM Transactions on Graphics

Haines T (2016) My Text in Your Handwriting in ACM Transactions on Graphics


Hedman P (2018) Deep blending for free-viewpoint image-based rendering in ACM Transactions on Graphics

Hedman P (2017) Sequential Monte Carlo Instant Radiosity. in IEEE transactions on visualization and computer graphics

 
Description 1. We discovered general principles for creating natural-looking output that mimics a given style. We demonstrated this in one publication on the generation of landscapes with realistic-looking vegetation (which is very close to the original proposal), but also in more general terms, as evidenced by a recently accepted publication on the synthesis of different handwriting styles.

2. We discovered a novel approach to capture complex reflectance properties of a natural surface material in a practical setting -- with nothing more than a hand-held camera phone. This is a critical ingredient of OAK's overarching goal of practical capture of natural phenomena.

3. Similarly, we contributed improved methods for practical, hand-held geometry capture using inexpensive, off-the-shelf depth cameras.

4. We further developed a fundamentally new geometry-capture approach that works for highly specular surfaces, a regime where traditional scanning approaches fail, by extracting shape directly from observed surface reflections.

5. We improved our methodology for combining image analysis and machine learning, and evaluated it in an applied context by training a system to accurately count Drosophila eggs in microscope images.
Exploitation Route Many of the base technologies we developed and published could be used by others (subject to licensing requirements; for one method we are currently pursuing a patent application).

We hope that, in the upcoming year, these technologies will further combine to create overarching content creation solutions that are of more immediate value to our industrial partners from the special-effects and games industry.

In addition, we have started putting these technologies to use in cultural-heritage digitisation applications.
Sectors Construction, Creative Economy, Digital/Communication/Information Technologies (including Software), Education, Leisure Activities (including Sports, Recreation and Tourism), Culture, Heritage, Museums and Collections

 
Description (1) Part of the expertise developed via the research grant went into an Innovate UK project with the company Change of Paradigm, LLC (CoP), in 2016/17: within that project, I acted as a consultant on behalf of the UCL Media Institute, and we jointly developed novel hardware and algorithms to automatically capture the visual appearance of textiles. CoP are currently developing a business that builds upon that capability. (2) Another Innovate UK project that leverages expertise developed through this grant is DELIGHTA (project no. 103372), 2017/18, in which we jointly developed advanced capture and analysis capabilities for webcam-based facial user authentication for banking, remote examinations, and other applications (again through the UCL Media Institute). (3) In addition, there are multiple patent applications pending for IP generated within this grant, while we actively pursue monetisation options.
First Year Of Impact 2016
Sector Education, Financial Services, and Management Consultancy
Impact Types Cultural, Societal, Economic

 
Description EPSRC Impact Acceleration Award
Amount £30,000 (GBP)
Funding ID M.2.35 
Organisation University College London 
Department Innovation and Enterprise
Sector Academic/University
Country United Kingdom
Start 11/2016 
End 03/2017
 
Description H2020 Innovative Training Network DISTRO
Amount € 3,333,301 (EUR)
Funding ID 642841 
Organisation Marie Sklodowska-Curie Actions 
Sector Charity/Non Profit
Country Global
Start 01/2015 
End 12/2018
 
Description Impact Studentship, Change of Paradigm
Amount £42,000 (GBP)
Organisation Changing Paradigms, LLC 
Sector Private
Country United States
Start 09/2015 
End 09/2018
 
Title My Text in Your Handwriting 
Description This dataset has our code and annotated samples of people's handwriting, allowing new users to process these inputs and generate newly authored text in these writers' handwriting. 
Type Of Material Database/Collection of data 
Year Produced 2016 
Provided To Others? Yes  
Impact Just posted online last week. 
URL http://visual.cs.ucl.ac.uk/pubs/handwriting/
 
Title Specular Surfaces 3DV 2015 datasets 
Description This dataset contains photos and calibration for many camera views of each specular object. It also contains mask images, ground-truth 3D laser-scans, and environment photographs. 
Type Of Material Database/Collection of data 
Year Produced 2015 
Provided To Others? Yes  
Impact N/A 
URL http://visual.cs.ucl.ac.uk/pubs/shapefromreflections/
 
Description Appearance Capture with Aalto and NVIDIA 
Organisation Aalto University
Department Department of Computer Science
Country Finland 
Sector Academic/University 
PI Contribution In joint research on practical appearance capture, I contributed my general background in the field: within our team, I was the senior domain expert on appearance capture. Through weekly project meetings with Prof. Lehtinen (Aalto University, NVIDIA Research) and his graduate student Miika Aittala, we jointly pushed forward Mr. Aittala's research agenda, leading to publications at the top venue in our field. Later in the collaboration, there were additional synergies with my EPSRC-funded research on landscape appearance acquisition.
Collaborator Contribution Prof. Lehtinen, who has ample experience in industrial computer graphics research, specialising in rendering and natural light and reflectance phenomena, contributed both through his significant technical and scientific background, but also through the direct supervision of his graduate student Miika Aittala, who himself worked on material appearance capture for computer graphics applications.
Impact The primary outputs were two publications at ACM SIGGRAPH, which already are considered milestone contributions by many: Miika Aittala, Tim Weyrich, and Jaakko Lehtinen. Practical SVBRDF capture in the frequency domain. ACM Trans. on Graphics (Proc. SIGGRAPH), 32(4):110:1-110:12, 2013. Miika Aittala, Tim Weyrich, and Jaakko Lehtinen. Two-shot SVBRDF capture for stationary materials. ACM Trans. Graph. (Proc. SIGGRAPH), 34(4):110:1-110:13, July 2015. In addition, our team is currently preparing a patent application (preliminary application already filed) on the two-shot SVBRDF capture approach. We anticipate economic impact soon.
Start Year 2010
 
Description Appearance Capture with Aalto and NVIDIA 
Organisation NVIDIA
Country Global 
Sector Private 
PI Contribution In joint research on practical appearance capture, I contributed my general background in the field: within our team, I was the senior domain expert on appearance capture. Through weekly project meetings with Prof. Lehtinen (Aalto University, NVIDIA Research) and his graduate student Miika Aittala, we jointly pushed forward Mr. Aittala's research agenda, leading to publications at the top venue in our field. Later in the collaboration, there were additional synergies with my EPSRC-funded research on landscape appearance acquisition.
Collaborator Contribution Prof. Lehtinen, who has ample experience in industrial computer graphics research, specialising in rendering and natural light and reflectance phenomena, contributed both through his significant technical and scientific background, but also through the direct supervision of his graduate student Miika Aittala, who himself worked on material appearance capture for computer graphics applications.
Impact The primary outputs were two publications at ACM SIGGRAPH, which already are considered milestone contributions by many: Miika Aittala, Tim Weyrich, and Jaakko Lehtinen. Practical SVBRDF capture in the frequency domain. ACM Trans. on Graphics (Proc. SIGGRAPH), 32(4):110:1-110:12, 2013. Miika Aittala, Tim Weyrich, and Jaakko Lehtinen. Two-shot SVBRDF capture for stationary materials. ACM Trans. Graph. (Proc. SIGGRAPH), 34(4):110:1-110:13, July 2015. In addition, our team is currently preparing a patent application (preliminary application already filed) on the two-shot SVBRDF capture approach. We anticipate economic impact soon.
Start Year 2010
 
Description Collaboration with UCL Institute of Healthy Ageing 
Organisation University College London
Department Institute of Healthy Ageing
Country United Kingdom 
Sector Academic/University 
PI Contribution The collaboration with Dr Matt Piper led to a proposal that was granted by the Crucible Centre, which funded a 4-year PhD studentship. The student works in both our labs (UCL Computer Science and UCL Institute of Healthy Ageing), learning and innovating on machine vision and lifespan analysis of fruit flies. This collaboration has led to several important outcomes, published and (for now) unpublished, in which computer vision is applied to real and massive visual datasets. For example, we have analysed the longest such video known to us, recorded by our student, which captured the whole life of a fruit fly over three months.
Collaborator Contribution The biology-focused partners handled all the training of the student to handle and breed flies, all lab-specific skills, and provided the users for user-testing of algorithms developed in the project.
Impact One paper has been published in PLOS ONE, and another is in preparation for submission to Nature Methods.
Start Year 2012
 
Description Consulting for Change of Paradigm 
Organisation Change of Paradigm
Sector Private 
PI Contribution Consulting on creation of a hardware system, and software development, with the aim of capturing visual appearance parameters of textile samples.
Collaborator Contribution After I consulted for the company, they agreed to (partially) sponsor a PhD studentship under my supervision.
Impact Gilles Rainer, Wenzel Jakob, Abhijeet Ghosh, and Tim Weyrich. Neural BTF compression and interpolation. Computer Graphics Forum (Proc. Eurographics), 38(2):1-10, May 2019. Gilles Rainer, Kevin Rathbone, and Tim Weyrich. High-resolution BTF capture for delicate materials. In Conference on Visual Media Production (CVMP) Posters, December 2019. Commercialisation of the technology still pending.
Start Year 2015
 
Description Consulting for iProov 
Organisation iProov Ltd.
Country United Kingdom 
Sector Private 
PI Contribution Consulting on reflectance characterisation of human skin.
Collaborator Contribution N/A
Impact Successful conclusion of the Innovate UK projects DELIGHTA and DELAMBDA. Outputs are confidential, though.
Start Year 2017
 
Description Depth-Camera-Based Scene Reconstruction with University of Siegen and MS Research 
Organisation University of Siegen
Country Germany 
Sector Academic/University 
PI Contribution My key contribution was as domain expert in the area of point-based computer graphics, with secondary contributions through my general experience in computational photography and 3D reconstruction. No other team member from my group at UCL contributed. The (still ongoing) collaboration has great synergies with my EPSRC-funded work on 3D reconstruction of landscapes.
Collaborator Contribution University of Siegen contributed through the work of graduate student Damien Lefloch, under the supervision of Prof. Andreas Kolb, and more recently through additional support from their graduate students Hamed Sarbolandi and Markus Kluge.
Impact So far, one successful publication, and one journal submission currently undergoing minor revisions: [accepted] Damien Lefloch, Tim Weyrich, and Andreas Kolb. Anisotropic point-based fusion. In Proceedings of International Conference on Information Fusion (FUSION), pages 1-9. ISIF, July 2015. [undergoing minor revisions] Damien Lefloch, Hamed Sarbolandi, Tim Weyrich, Andreas Kolb. Comprehensive Use of Curvature For Robust And Accurate Online Surface Reconstruction. Conditionally accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
Start Year 2013
 
Description OAK Academic Partners at Bath 
Organisation University of Bath
Department Department of Computer Science
Country United Kingdom 
Sector Academic/University 
PI Contribution We collaborate on research in both directions. This includes UCL staff visiting Bath.
Collaborator Contribution We collaborate on research in both directions. This includes Bath staff visiting UCL.
Impact We have held several meetings, including with the IAP. We have identified further leads, such as a collaboration with the British Library.
Start Year 2013
 
Description OAK Industrial Advisory Panel (BBC, Gobo, DNeg, Foundry) 
Organisation British Broadcasting Corporation (BBC)
Department BBC Research & Development
Country United Kingdom 
Sector Public 
PI Contribution We provide core research capability and academic impact.
Collaborator Contribution Partners provide problem directions and an avenue for industrial impact.
Impact We are actively engaged with partners to transfer our research IP into their hands. Both UCL's PI and Co-I have practical experience with such transfer, through UCL Business and UCL Media Institute.
Start Year 2013
 
Description OAK Industrial Advisory Panel (BBC, Gobo, DNeg, Foundry) 
Organisation Double Negative
Country United Kingdom 
Sector Private 
PI Contribution We provide core research capability and academic impact.
Collaborator Contribution Partners provide problem directions and an avenue for industrial impact.
Impact We are actively engaged with partners to transfer our research IP into their hands. Both UCL's PI and Co-I have practical experience with such transfer, through UCL Business and UCL Media Institute.
Start Year 2013
 
Description OAK Industrial Advisory Panel (BBC, Gobo, DNeg, Foundry) 
Organisation Gobo Games
Country United States 
Sector Private 
PI Contribution We provide core research capability and academic impact.
Collaborator Contribution Partners provide problem directions and an avenue for industrial impact.
Impact We are actively engaged with partners to transfer our research IP into their hands. Both UCL's PI and Co-I have practical experience with such transfer, through UCL Business and UCL Media Institute.
Start Year 2013
 
Description OAK Industrial Advisory Panel (BBC, Gobo, DNeg, Foundry) 
Organisation The Foundry Visionmongers Ltd
Country United Kingdom 
Sector Private 
PI Contribution We provide core research capability and academic impact.
Collaborator Contribution Partners provide problem directions and an avenue for industrial impact.
Impact We are actively engaged with partners to transfer our research IP into their hands. Both UCL's PI and Co-I have practical experience with such transfer, through UCL Business and UCL Media Institute.
Start Year 2013
 
Title Two-Shot SVBRDF Capture Software 
Description Research prototype software. 
IP Reference  
Protection Copyrighted (e.g. software)
Year Protection Granted 2015
Licensed Yes
Impact NN
 
Title VIRTUAL SCENE GENERATION BASED ON IMAGERY 
Description BACKGROUND A virtual world is a simulated environment in which users may interact with virtual objects and locations of the virtual world. Each user may control a respective avatar through which the user may interact with other users' avatars in the virtual world. An avatar generally provides a graphical representation of an individual within the virtual world environment. Avatars are usually presented to other users as two or three-dimensional graphical representations that resemble a human individual. Frequently, virtual worlds allow multiple users to enter the virtual environment and interact with one another. Virtual worlds are said to provide an immersive environment, as they typically appear similar to the real world and objects tend to follow rules related to gravity, topography, locomotion, physics and kinematics. Of course, virtual worlds can suspend or alter these rules as well as provide other imaginative or fanciful environments. Users typically communicate with one another through their avatars using text messages sent between avatars, real-time voice communication, gestures displayed by avatars, symbols visible in the virtual world, and the like. Some virtual worlds are described as being persistent. A persistent world provides an immersive environment (e.g., a fantasy setting used as a setting for a role-playing game, or a virtual world complete with land, buildings, towns, and economies) that is generally always available and where events continue to occur regardless of the presence of a given avatar. Thus, unlike more conventional online games or multi-user environments, the virtual world continues to exist and plots and events continue to unfold as users enter (and exit) the virtual world. Virtual environments are presented as images on a display screen and some virtual environments may allow users to record events that occur within the virtual environment. 
SUMMARY Embodiments presented in this disclosure provide a computer-implemented method of scene generation. The method includes receiving an image depicting a scene and annotated by a sparse set of labels. The method also includes generating, based on the sparse set of labels, a dense set of labels annotating the image and a density map associated with the image. The method also includes generating a virtual scene based on the dense set of labels and the density map, where the virtual scene is output. Other embodiments presented in this disclosure provide a computer-readable medium for scene generation and containing a program which, when executed, performs an operation that includes receiving an image depicting a scene and annotated by a sparse set of labels. The operation also includes generating, based on the sparse set of labels, a dense set of labels annotating the image and a density map associated with the image. The operation also includes generating a virtual scene based on the dense set of labels and the density map, where the virtual scene is output. Still other embodiments presented in this disclosure provide a system for scene generation. The system includes one or more computer processors and a memory containing a program which, when executed by the one or more computer processors, is configured to perform an operation that includes receiving an image depicting a scene and annotated by a sparse set of labels. The operation also includes generating, based on the sparse set of labels, a dense set of labels annotating the image and a density map associated with the image. The operation also includes generating a virtual scene based on the dense set of labels and the density map, where the virtual scene is output. 
IP Reference US9262853 
Protection Patent granted
Year Protection Granted 2016
Licensed No
Impact None so far.
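The patent summary above describes a pipeline from a sparsely labelled image to a synthesised scene. The sketch below is a hypothetical, heavily simplified stand-in for that pipeline, not the patented method: nearest-neighbour label propagation plays the role of the dense-labelling step, a box-counted density map plays the role of the density-map step, and object placements are sampled from it. All function names are illustrative.

```python
import numpy as np

def densify_labels(shape, sparse):
    """Assign every pixel the label of its nearest sparsely labelled pixel."""
    coords = np.array(list(sparse.keys()))   # sparse: {(row, col): label}
    labels = np.array(list(sparse.values()))
    rr, cc = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    pix = np.stack([rr.ravel(), cc.ravel()], axis=1)
    d = ((pix[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    return labels[d.argmin(axis=1)].reshape(shape)

def density_map(dense, label, cell=4):
    """Fraction of pixels carrying `label` within each cell-by-cell block."""
    h, w = dense.shape
    m = (dense == label)[: h - h % cell, : w - w % cell]
    blocks = m.reshape(m.shape[0] // cell, cell, m.shape[1] // cell, cell)
    return blocks.mean(axis=(1, 3))

def place_objects(dmap, n, rng):
    """Sample n object positions (in cell coordinates) from the density map."""
    p = dmap.ravel() / dmap.sum()
    idx = rng.choice(p.size, size=n, p=p)
    return np.column_stack(np.unravel_index(idx, dmap.shape))

dense = densify_labels((32, 32), {(4, 4): 1, (28, 28): 2})
trees = place_objects(density_map(dense, 1), n=10, rng=np.random.default_rng(0))
```

Sampling placements from the density map, rather than placing objects at labelled pixels, is what lets a handful of annotations populate an entire scene.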
 
Title Code and framework for Roto++: Accelerating Professional Rotoscoping using Shape Manifolds 
Description This software implements our rotoscoping system, and makes it easy for users to add their own shape-analysis and user-interaction functions for professional-grade masking of video clips. The software is a working implementation of our SIGGRAPH 2016 paper, "Roto++: Accelerating Professional Rotoscoping using Shape Manifolds". 
Type Of Technology Software 
Year Produced 2016 
Impact The software has been downloaded almost 2000 times, and the associated video has received over 2000 hits on YouTube. The partner company, The Foundry, is continuing the collaboration, and will be developing a product branch based on this technology. 
URL http://visual.cs.ucl.ac.uk/pubs/rotopp/
 
Title Code for HarmonicNetworks 
Description Code implementing our HarmonicNetworks, which gives a neural network the ability to see rotated images as if they were not rotated, or to measure the amount of rotation present. 
Type Of Technology Software 
Year Produced 2017 
Open Source License? Yes  
Impact Resulted in a publication at the highly regarded IEEE CVPR conference (the top-ranked publication venue in Computer Science, according to Google Scholar): Harmonic Networks: Deep Translation and Rotation Equivariance, led by Daniel Worrall 
URL https://github.com/deworrall92/harmonicConvolutions
 
Title Code for Help, It Looks Confusing: GUI Task Automation Through Demonstration and Follow-up Questions 
Description Code implementing our IUI 2017 paper (received Honorable Mention award), "Help, It Looks Confusing: GUI Task Automation Through Demonstration and Follow-up Questions", led by Thanapong Intharah. 
Type Of Technology Software 
Year Produced 2017 
Open Source License? Yes  
Impact Resulted in a full paper published at IUI 2017, which received an award, and was a featured demo at the conference: Help, It Looks Confusing: GUI Task Automation Through Demonstration and Follow-up Questions. 
URL http://visual.cs.ucl.ac.uk/pubs/HILC/
 
Title Code for Responsive Action-based Video Synthesis 
Description This software implements the prototype described in our CHI 2017 paper. It allows casual users to ingest a video and turn various elements in the video into loopable clips that can be triggered by the keyboard or through other interfaces. 
Type Of Technology Software 
Year Produced 2017 
Impact The software led to our CHI 2017 paper, led by Corneliu Ilisescu, "Responsive Action-based Video Synthesis". 
URL http://visual.cs.ucl.ac.uk/pubs/actionVideo/
 
Title Code for Unsupervised Monocular Depth Estimation with Left-Right Consistency 
Description This software and the associated scripts allow reproduction of our new system, which converts a colour image into a depth image. The software also makes it easy to re-train the statistical model using other binocular stereo pairs of images. All of this software underpins our CVPR 2017 paper. 
Type Of Technology Software 
Year Produced 2017 
Impact The algorithms behind this software have now been filed in a patent application, and we are seeking commercialization opportunities. 
URL http://visual.cs.ucl.ac.uk/pubs/monoDepth/
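The left-right consistency idea named in the title can be illustrated with a toy numpy sketch. This illustrates the principle only, not the released software (which is a trained convolutional network), and the function names are ours: a predicted left-view disparity map should both reconstruct the left image by warping the right image, and agree with the right-view disparity map warped into the left view.

```python
import numpy as np

def warp(image, disparity):
    """Resample `image`, shifting each pixel horizontally by its disparity."""
    h, w = image.shape
    cols = np.arange(w)[None, :] - disparity
    cols = np.clip(np.round(cols).astype(int), 0, w - 1)
    rows = np.repeat(np.arange(h)[:, None], w, axis=1)
    return image[rows, cols]

def photometric_loss(left, right, disp_left):
    """L1 error between the left image and the right image warped by the
    predicted left-view disparity."""
    return np.abs(left - warp(right, disp_left)).mean()

def lr_consistency_loss(disp_left, disp_right):
    """Penalise disagreement between the left disparity map and the right
    disparity map projected into the left view."""
    return np.abs(disp_left - warp(disp_right, disp_left)).mean()
```

A disparity estimate that is correct in both views drives both terms to zero, which is what allows training from stereo pairs without ground-truth depth.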
 
Title Handwriting Annotation and Synthesis Software 
Description The software has two parts: 1) Interactive software that helps a user annotate a scanned image of handwritten text, extracting and recognizing the characters (or glyphs) to make them re-usable by the synthesis algorithm. 2) Synthesis software that takes a learned model of a specific user's handwriting, and also expects some typed text that the user wishes to render in the handwritten style. The system produces images that can be printed on paper and look convincingly like handwriting. 
Type Of Technology Software 
Year Produced 2016 
Impact N/A: This software has just been released online, and the public announcement is pending. 
URL http://visual.cs.ucl.ac.uk/pubs/handwriting/
 
Title Mean Shift Library; 02/2016 
Description Dr Tom SF Haines developed this Mean Shift library. Mean shift is usually associated, in computer vision at least, with the segmentation of an image. Whilst this library supports that scenario, it is far more general. Mean shift is a gradient ascent method for finding the modes of a kernel density estimate, so this library is as much a kernel density estimation library as it is a mode finder. It includes the usual kernel bandwidth estimation methods and also supports subspace constrained mean shift, which finds edges/manifolds in noisy data. Support goes far beyond the typical Gaussian and Uniform kernels: it has ten kernel types, as well as the ability to combine them, with different kernels on different parts of a feature vector. The kernels include directional distributions, so it supports density estimation over the position and orientation of an object, for instance. It also supports the multiplication of density estimates, which allows you to perform non-parametric belief propagation using mean shift objects as the messages between random variables. It also has methods to approximate values such as the entropy of a density estimate, and the KL divergence between two estimates. 
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact At this point, we have no reliable information on the user base of this library. 
URL http://reality.cs.ucl.ac.uk/projects/haines/haines16meanshift.html
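The mode-finding core described above can be sketched in a few lines: starting from a query point, repeatedly move it to the kernel-weighted mean of the data until it converges on a mode of the density estimate. This is a minimal illustration with a single Gaussian kernel, not the library's API (which, as noted, is far more general).

```python
import numpy as np

def mean_shift(data, start, bandwidth=1.0, iters=100, tol=1e-6):
    """Gradient ascent on a Gaussian kernel density estimate: repeatedly move
    the query point to the kernel-weighted mean of the data points."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        w = np.exp(-((data - x) ** 2).sum(axis=1) / (2.0 * bandwidth ** 2))
        new_x = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(new_x - x) < tol:
            break
        x = new_x
    return x

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # cluster near the origin
                  rng.normal(5.0, 0.3, (50, 2))])  # cluster near (5, 5)
mode = mean_shift(data, start=[4.0, 4.0])  # converges to the mode near (5, 5)
```

Running the same iteration from every data point and merging the converged positions yields the clustering/segmentation use the description mentions.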
 
Title Quantifly software 
Description This software allows a user first to train a statistical model of appearance, and then to count the number of instances of that object in an image. It was designed for, and most extensively tested on, counting hundreds of fly eggs in microscope images. 
Type Of Technology Software 
Year Produced 2015 
Impact The UCL Genetics lab on Healthy Ageing is using this software on a daily basis to measure the number of fly eggs in their vials. The number of eggs is an indicator of fly health. 
URL http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127659
 
Title Software and data for Scalable Inside-Out Image-Based Rendering 
Description Software and data associated with our Siggraph Asia 2016 paper "Scalable Inside-Out Image-Based Rendering", led by Peter Hedman. 
Type Of Technology Software 
Year Produced 2016 
Impact The software and algorithms will likely be folded into a startup, based at INRIA Sophia Antipolis, with collaboration by the different academic and industry partners. 
URL http://visual.cs.ucl.ac.uk/pubs/insideout/
 
Description 3Dami 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact 45 students formed teams of 9 to make short 3D animated films in two seven-day workshops, operating at UCL and the University of Bath; over 200 students applied to attend, and the premières had large audiences. Questionnaires indicated that the students learnt more than in a similar period in a classroom, got to do an activity far outside those normally made available by schools (teamwork to create a large, technically complex project), and were interested in doing more. After the event, three separate teams formed and made further films over the internet, of their own volition, which they never would have done before.
Year(s) Of Engagement Activity 2015
URL http://3dami.org
 
Description BBC Interview for Handwriting Synthesis project 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Rory Cellan-Jones, BBC Technology correspondent, came and interviewed me and postdoc Dr Tom Haines to showcase our research. The project allows us to scan someone's handwriting sample, and then to create new text in that person's handwriting. The report and BBC video from this interview were featured as the main Technology story on the BBC website. Our YouTube video has been viewed over 52,000 times.
Year(s) Of Engagement Activity 2016
URL http://www.bbc.co.uk/news/technology-37046477
 
Description Invited Speaker HiVisComp, 4/2/2016 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I spoke at HiVisComp, the annual national gathering of the Czech visual computing community, on Computational Analysis in Cultural Heritage Applications. In addition to the Czech researchers, there were selected international attendees. Overall, I was impressed by the quality of the programme and learnt that the Czech Republic has a very healthy computer graphics and computer vision research community; I now have plans to engage further with them.
Year(s) Of Engagement Activity 2016
 
Description Invited Talk at Imperial College, 28/11/2014 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact I presented on Computational Analysis in Cultural Heritage Applications before graduate students and professors of Imperial College, who took great interest in my vision of how to combine research on 3D photography with impact in the cultural-heritage domain. The presentation was followed by in-depth discussions with Dr Abhijeet Ghosh, who afterwards submitted a successful EPSRC fellowship application, in which he plans to collaborate with me on appearance acquisition in the heritage sector.
Year(s) Of Engagement Activity 2014
 
Description Invited Talk at University of Bonn, 8/12/2014 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I presented on Computational Analysis in Cultural Heritage Applications at the seminar of the computer graphics group at Bonn University. That group is an international leader in the field of 3D photography and appearance acquisition. My presentation on how to apply bespoke 3D photography solutions to the heritage domain was very well received; the audience showed great interest in extending their own methods to heritage problems.
Year(s) Of Engagement Activity 2014
 
Description Invited Talk at University of Siegen, 9/12/2014 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I presented on Computational Analysis in Cultural Heritage Applications at the seminar of the computer graphics group at Siegen University. That group is an international leader in the field of 3D sensing, particularly time-of-flight photography. My presentation on how to apply bespoke 3D photography solutions to the heritage domain was very well received; the audience showed great interest in extending their own methods to heritage problems.
Year(s) Of Engagement Activity 2014
 
Description Invited talk at Friedrich-Alexander University Erlangen-Nürnberg, 14/4/2016 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Invited talk at post-graduate seminar of that university. The talk was on "Problem-Aware Digitisation of Cultural Heritage Artefacts" and showcased a number of EPSRC-funded innovations in digitisation.
Year(s) Of Engagement Activity 2016
 
Description Invited talk at GRIS Kolloquium, Technical University of Darmstadt, Germany, 21/4/2016 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Invited talk at post-graduate seminar of that university. The talk was on "Application-Driven Appearance Digitisation" and showcased a number of EPSRC-funded innovations in digitisation and fabrication.
Year(s) Of Engagement Activity 2016
URL http://www.gris.tu-darmstadt.de/home/news/archive/index.en.htm
 
Description Keynote at the 10th ACM SIGGRAPH Finnish Chapter Conference Syysgraph, 11/11/2016 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The keynote speech was extremely well received (I have rarely had such a captivated and excited audience). The largest part of the audience were practitioners from the 3D games and special-effects industries. Regardless, the cultural-heritage topic was received as anything but off-topic, and many fruitful discussions, as well as potential knowledge-transfer opportunities, arose in the wake of the speech.
Year(s) Of Engagement Activity 2016
URL https://www.facebook.com/events/236500343433508/
 
Description SketchX Workshop (BCS,London) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Attended a technical meeting with approximately 50 attendees. Presented a poster on prior work, acknowledging the OAK grant for funding attendance, aiming to motivate conversation and collaboration on sketch interfaces for the landscape-generation project.
Year(s) Of Engagement Activity 2016
URL https://www.eventbrite.co.uk/e/bmva-technical-meeting-sketchx-human-sketch-analysis-and-its-applicat...