Visual Commonsense for Scene Understanding

Lead Research Organisation: University of Glasgow
Department Name: College of Medical, Veterinary & Life Sciences

Abstract

Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

Publications

Jack RE (2017) Toward a Social Psychophysics of Face Communication. in Annual review of psychology

Rychlowska M (2017) Functional Smiles: Tools for Love, Sympathy, and War. in Psychological science

Chen C (2018) Distinct facial expressions represent pain and pleasure across cultures. in Proceedings of the National Academy of Sciences of the United States of America

Schyns PG (2020) Revealing the information contents of memory within the stimulus information representation framework. in Philosophical transactions of the Royal Society of London. Series B, Biological sciences

Pichon S (2021) Emotion perception in habitual players of action video games. in Emotion (Washington, D.C.)

Schyns PG (2022) Degrees of algorithmic equivalence between the brain and its DNN models. in Trends in cognitive sciences

Bjornsdottir RT (2024) Social class perception is driven by stereotype-related facial features. in Journal of experimental psychology. General

 
Description We have developed a new methodology to achieve deeper interpretability of deep networks. Specifically, using information-theoretic measures, we can now visualize the information that is represented at each layer of a deep network. From this understanding, we can better estimate the information transformation functions that are performed across layers. Furthermore, we have used generative autoencoders to compare the representations constructed in the hidden layers with those of several other models (i.e. a classic ResNet deep network, an engineered generative model and an ideal observer model).
Exploitation Route Other users of deep networks might use our methodologies to better understand why deep networks fail to generalize (cf. adversarial testing).
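As one illustration of the kind of layer-wise, information-theoretic analysis described above, the sketch below estimates how much label information each layer of a small network carries. It is a minimal, hypothetical example assuming a toy PyTorch CNN, synthetic data and scikit-learn's mutual_info_classif estimator; the models, stimuli and estimators used in the project itself differ.

```python
# Hypothetical sketch only: estimate how much class information each layer of a
# small CNN carries, using a simple mutual-information estimator. This is not the
# project's actual pipeline; model, data and estimator are illustrative stand-ins.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif

# Toy network standing in for the deep networks analysed in the project.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # block 1
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # block 2
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                                   # classifier head
)

# Synthetic 28x28 "stimuli" and 10-class labels, purely for illustration.
x = torch.randn(256, 1, 28, 28)
y = torch.randint(0, 10, (256,)).numpy()

# Collect activations with forward hooks on selected layers.
activations = {}
def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach().flatten(1).numpy()
    return hook

# Indices 2 and 5 are the two pooling outputs; 7 is the classifier head.
for idx in (2, 5, 7):
    model[idx].register_forward_hook(save(f"layer_{idx}"))

with torch.no_grad():
    model(x)

# Reduce each layer's activations, then estimate mutual information with labels.
for name, acts in activations.items():
    feats = PCA(n_components=10).fit_transform(acts)
    mi = mutual_info_classif(feats, y, random_state=0).sum()
    print(f"{name}: approx. MI with labels = {mi:.3f} nats")
```

Comparing such per-layer estimates across networks (or against an engineered generative model or ideal observer) is one way the layer-by-layer information transformations described above can be made visible.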
Sectors Aerospace, Defence and Marine

Creative Economy

Digital/Communication/Information Technologies (including Software)

URL https://arxiv.org/abs/1811.07807