Terrain modelling

Lead Research Organisation: University of Cambridge
Department Name: Computer Science and Technology

Abstract

The principal goal of the proposed research project is to construct high-resolution, accurate elevation maps and evaluate their use cases. To achieve this aim, the project combines three major research fields: the coarse-resolution Shuttle Radar Topography Mission (SRTM) database published by NASA, procedural terrain synthesis, and cutting-edge computer vision algorithms for 3D modelling. To provide a cheap alternative to expensive aerial photo-based systems, the project uses geo-tagged ground-level images from the internet that are freely available for non-commercial purposes, including public-domain collections, social media sites and Google Street View.

Realistic Digital Elevation Maps (DEMs) and similar models have been used successfully across a variety of fields including geography, geology, space research and archaeology. This demand for accurate and realistic models is, however, expensive to meet with current technology. Geographical Information Systems (GIS) often still favour LiDAR, a surveying technology requiring an aeroplane equipped with a laser emitter and corresponding detectors to scan the target area. Thanks to the high sampling rate, the accuracy and precision of these systems are excellent.

The implicit question that this project attempts to answer is whether we can recreate a similarly useful and meaningful elevation model from cheaper, low-resolution terrain maps by combining them with cutting-edge computer vision algorithms, ever-increasing computing power, statistical models developed by the games industry, and the public-domain photographs readily available on the internet.

As an outcome of this project, multiple terrain synthesis and modelling algorithms, including fractal Brownian landscapes, photogrammetry and example-based systems, will be considered and combined with online datasets (available for research purposes), and their relative value will be evaluated against a number of use cases. Besides measuring the numerical accuracy of the resulting models against high-resolution LiDAR data, we also want to establish a definition of realism and aesthetic value, as well as determine how the constructed elevation maps interact with DEM-based algorithms such as photograph-based location estimation and model-based photograph enhancement.
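As a rough illustration of one of the synthesis techniques mentioned above, the sketch below generates a fractal Brownian heightmap by spectral synthesis (a 1/f^beta power spectrum with random phases). This is only an illustrative example, not the project's pipeline: the grid size, spectral exponent and seed are arbitrary choices, and the function name fbm_heightmap is hypothetical.

    # Illustrative sketch only: fractal Brownian terrain via spectral synthesis.
    # Grid size, spectral exponent `beta` and the random seed are arbitrary
    # example values, not parameters used by the project.
    import numpy as np

    def fbm_heightmap(n=257, beta=2.2, seed=0):
        """Generate an n x n fractional-Brownian-like heightmap.

        beta controls roughness: the power spectrum falls off as 1/f^beta,
        so a larger beta gives smoother terrain.
        """
        rng = np.random.default_rng(seed)
        # Frequency magnitude for every FFT bin (cycles per sample).
        fx = np.fft.fftfreq(n)
        fy = np.fft.fftfreq(n)
        f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
        f[0, 0] = 1.0                      # avoid division by zero at DC
        amplitude = f ** (-beta / 2.0)     # 1/f^(beta/2) amplitude spectrum
        amplitude[0, 0] = 0.0              # zero-mean terrain
        phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
        spectrum = amplitude * np.exp(1j * phase)
        height = np.fft.ifft2(spectrum).real
        # Normalise to [0, 1] so the result can be saved as a greyscale DEM.
        height -= height.min()
        height /= height.max()
        return height

    if __name__ == "__main__":
        dem = fbm_heightmap()
        print(dem.shape, float(dem.min()), float(dem.max()))

In practice the purely random phases would be constrained by real data such as the SRTM database; evaluating that kind of combination is exactly what the project proposes.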

Publications


Studentship Projects

Project Reference | Relationship | Related To | Start | End | Student Name
EP/N509620/1 | | | 01/10/2016 | 30/09/2022 |
1778303 | Studentship | EP/N509620/1 | 01/10/2016 | 31/03/2020 | Gyorgy Denes
 
Description After a thorough literature review we found that the modelling phase has already been investigated extensively by other authors. Hence we focused on model rendering, particularly in immersive environments (VR).
We developed a technique (Temporal Resolution Multiplexing) for reducing the cost of rendering such models in VR by exploiting knowledge of human perception (an initial extension of the project), specifically the limitations of the eye's spatio-temporal resolution; a conceptual sketch of the idea follows this description.
We also developed visual models for predicting the bit-depth required to render such models.
In terms of aesthetic appearance, we are conducting psychophysical experiments to understand how natural images should be tone-mapped for novel (HDR) displays. Findings so far suggest that human observers are surprisingly good at matching physical luminance values, but less consistent in terms of contrast. Personal preference is yet to be modelled.
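The sketch below, referenced in the description above, illustrates the general idea of trading spatial for temporal resolution: one frame of each pair is produced at reduced resolution, and the other is compensated so that the temporal average integrated by the eye at high refresh rates is preserved. It is a conceptual sketch only, with an arbitrary downsampling factor and hypothetical helper names (downsample_upsample, multiplex_pair); it does not reproduce the patented Temporal Resolution Multiplexing algorithm.

    # Conceptual sketch only: trading spatial for temporal resolution.
    # Assumes the display refreshes fast enough (e.g. 90-120 Hz) that the
    # eye integrates consecutive frame pairs; the factor and frame sizes
    # are arbitrary example values.
    import numpy as np

    def downsample_upsample(frame, factor=4):
        """Cheap low-resolution proxy: box-downsample, then nearest-neighbour upsample."""
        h, w = frame.shape
        low = frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
        return np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)

    def multiplex_pair(frame_a, frame_b, factor=4):
        """Produce one frame of the pair cheaply and compensate the other so
        that the temporal average of the pair matches the intended content.
        Clamping to the displayable range introduces a small error for very
        bright content."""
        cheap = downsample_upsample(frame_b, factor)       # low-cost frame
        target_avg = 0.5 * (frame_a + frame_b)             # what the eye should integrate
        compensated = np.clip(2.0 * target_avg - cheap, 0.0, 1.0)
        return compensated, cheap

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        a = rng.random((64, 64))
        b = rng.random((64, 64))
        full, cheap = multiplex_pair(a, b)
        err = np.abs(0.5 * (full + cheap) - 0.5 * (a + b)).max()
        print("max deviation of the integrated pair:", float(err))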
Exploitation Route The Computer Graphics community can use these findings to improve rendering algorithms in VR, and hence improve the quality or reduce the financial cost of such systems. Furthermore, the film industry could take an interest in our work on chromatic banding (e.g. in film streaming) and in HDR tone-mapping appearance for future TV sets.
Sectors Digital/Communication/Information Technologies (including Software)

 
Title Temporal resolution multiplexing display systems 
Description Rendering in virtual reality (VR) requires substantial computational power to generate 90 frames per second at high resolution with good-quality antialiasing. The video data sent to a VR headset requi 
IP Reference GBGB1803260.7A 
Protection Patent granted
Year Protection Granted 2018
Licensed No
Impact Commercial partners were interested (Huawei, DisplayLink, Facebook, Apple), but the development cost and risk of integrating this technology into their hardware deterred them from licensing the patent. Software solutions were not explored, as the benefit of the method is less substantial in that case.