Perceptual Rendering of Vertical Image Width for 3D Multichannel Audio

Lead Research Organisation: University of Huddersfield
Department Name: Sch of Computing and Engineering

Abstract

Conventional surround sound systems such as 5.1 or 7.1 are limited in that they can only produce a two-dimensional (2D) impression of auditory width and depth. Next-generation surround sound systems introduced over recent years tend to employ height-channel loudspeakers in order to provide the listener with the impression of a three-dimensional (3D) soundfield. Although new methods to position (pan) the sound image in the vertical plane have been investigated, there is currently a lack of research into methods to render the perceived vertical width of the image. Vertical width rendering is particularly important for creating the impression of fully immersive 3D ambient sound in applications such as the production of original 3D music/broadcasting content and the 3D upmixing of 2D content. This project aims to provide a fundamental understanding of the perception and control of vertically oriented image width for 3D multichannel audio. Three objectives have been formulated to achieve this aim: (i) to determine the frequency-dependent perceptual resolution of interchannel decorrelation for vertical image widening; (ii) to determine the effectiveness of 'Perceptual Band Allocation (PBA)', a novel method proposed for vertical image widening; (iii) to evaluate the above two methods in real-world 2D-to-3D upmixing scenarios. These objectives will be achieved through relevant signal processing techniques and subjective listening tests focussing on perceived spatial and tonal qualities. Data obtained from the listening tests will be analysed using robust statistical methods in order to model the relationship between perceptual patterns and relevant parameters. The results of this project will provide researchers and engineers with academic references for the development of new 3D audio rendering algorithms, and will ultimately enable the general public to experience fully immersive surround sound in home-cinema, car and mobile environments.
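Objective (i) concerns interchannel decorrelation between a lower and a height loudspeaker. The snippet below is only a minimal, hedged sketch of one common decorrelation approach (randomising the phase spectrum of a copy of the signal); the project's actual filters, band splitting and the alpha mixing parameter shown here are illustrative assumptions, not the investigated method.

```python
import numpy as np

def decorrelate(x, rng=None):
    """Return an all-pass decorrelated copy of a mono signal x.

    Illustrative only: randomises the phase of every FFT bin while keeping
    the magnitude spectrum, so the copy sounds similar to the original but
    has a low cross-correlation with it.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.fft.rfft(x)
    phase = rng.uniform(-np.pi, np.pi, size=X.shape)
    phase[0] = 0.0      # keep the DC bin real
    phase[-1] = 0.0     # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phase), n=len(x))

def vertical_widen(x, alpha=1.0):
    """Send the original to the lower loudspeaker and a partially decorrelated
    copy to the height loudspeaker; alpha in [0, 1] controls the degree of
    decorrelation (a hypothetical parameter for this sketch)."""
    d = decorrelate(x)
    return x, (1.0 - alpha) * x + alpha * d

if __name__ == "__main__":
    x = np.random.default_rng(0).standard_normal(48000)   # placeholder noise source
    lower, upper = vertical_widen(x, alpha=0.7)
    print(np.corrcoef(lower, upper)[0, 1])                # correlation drops as alpha rises
```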

Planned Impact

The proposed research investigates methods to render stereophonic image width across vertically arranged loudspeakers, which will ultimately enable the creation of an immersive 3D auditory sensation through sound reproduction using loudspeaker formats that employ height channels. This project transfers signal decorrelation methods that have been widely used for horizontal stereo width rendering to the vertical dimension. Furthermore, it explores a novel vertical width rendering method named 'Perceptual Band Allocation' (PBA), which is optimised for vertical stereophony in terms of perceived tonal quality. The most immediate economic and societal impacts are expected through uptake of the investigated methods by the following four sectors: professional audio R&D; consumer audio R&D; music production/broadcasting; and electroacoustic music composition. The following sections describe how each will benefit from this project. Please see the Pathways to Impact for the related impact activities planned.

1) Impact on pro audio R&D

The results of this project will become the basis for pro audio companies to develop new sound mixing tools for 3D music production and broadcasting (e.g. a 3D reverberator, a 3D image widening tool, etc.). The extensive data sets produced from this project will determine the effectiveness of each investigated method in vertical width rendering under various experimental conditions (frequency band, loudspeaker position and sound source characteristics). Audio engineers will greatly benefit from this in terms of the effective design and efficient implementation of new rendering algorithms; e.g. the data will serve as a reference for which parameters to focus on depending on the target loudspeaker position or source type. Furthermore, since this project examines the tonal quality of the rendered sound as well as its spatial quality for each experimental condition, the results of this research will help engineers to ensure the overall sound quality of the implemented algorithm.

2) Impact on consumer audio R&D

This project will benefit consumer Hi-Fi audio manufacturers. For example, the results of this project will be useful for the development of new 2D-to-3D upmixing algorithms for home-cinema AV receivers. In combination with an existing source/ambience separation technique (e.g. Principal Component Analysis), the investigated methods can be applied to the ambient parts of original 2D signals to create the impression of environmental width in the height dimension. This will enable consumers to experience the impression of 3D Listener Envelopment from 2D-encoded content in the home-cinema environment. Car audio companies will be another beneficiary of this project. The experiments conducted in this project can be repeated in a similar manner for car audio systems. This will enable the design of new 3D upmixing algorithms optimised for specific loudspeaker arrangements in automobile cabins. Last but not least, the methods investigated in this project can also be combined with an existing binaural synthesis technique to develop a new 3D sound engine for mobile devices.
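As a hedged illustration of the upmixing architecture described above (not the project's algorithm), the following sketch uses a simple principal-component decomposition of a 2-channel signal to separate a correlated "primary" component from an "ambient" residual, which could then be routed to height loudspeakers. The whole-signal (rather than frame-based) processing and the 2-to-4 channel routing are assumptions made for brevity.

```python
import numpy as np

def pca_primary_ambience(left, right):
    """Split a stereo signal into primary (correlated) and ambient parts
    using principal component analysis. Illustrative sketch only."""
    X = np.stack([left, right])                # shape (2, N)
    C = np.cov(X)                              # 2x2 interchannel covariance
    eigvals, eigvecs = np.linalg.eigh(C)
    v = eigvecs[:, np.argmax(eigvals)]         # dominant direction = primary
    primary = np.outer(v, v @ X)               # projection onto the primary axis
    ambience = X - primary                     # residual treated as ambience
    return primary, ambience

def upmix_2d_to_3d(left, right):
    """Toy 2-to-4 upmix: the primary component stays in the ear-height layer,
    the ambience is sent to (hypothetical) height channels."""
    primary, ambience = pca_primary_ambience(left, right)
    main_l, main_r = primary[0], primary[1]
    height_l, height_r = ambience[0], ambience[1]
    return main_l, main_r, height_l, height_r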

3) Impact on music production and broadcasting

Sound engineers in music production and broadcasting will be important end-users of the audio mixing tools developed on the basis of this project. Currently, no software plug-in exists that can flexibly control the perceived vertical width of a sound image in music mixing. This project will enable sound engineers to fully utilise the added height dimension of 3D multichannel audio to create the impression of a fully immersive soundfield.

4) Impact on electroacoustic music composition

The tools developed from this project will help electroacoustic music composers who exploit multichannel spatialisation techniques to express their musical ideas more creatively in 3D. This will ultimately impact the way music is presented to the audience in concerts.
 
Title 3D recording of Organ at the Huddersfield Town Hall 
Description An organ performance by Dr Gordon Stewart on the 1870 Willis organ at Huddersfield Town Hall has been recorded in the 9.1 multichannel 3D format. This recording exploited all three of the key methods developed in the EPSRC project: (i) 3D microphone array configuration, (ii) Perceptual Band Allocation (PBA) and (iii) virtual image elevation. The result of the recording was satisfactory. It was possible to represent the actual acoustics of the concert hall in reproduction over 9.1-channel loudspeakers. The vertical dimension of the organ was successfully represented in the recording. 
Type Of Art Artefact (including digital) 
Year Produced 2015 
Impact It is expected that this recording project will lead to a commercial release of a 3D organ recording in the Auro-3D or Dolby Atmos Blu-ray format. The recording will be demonstrated at future conferences and workshops to show sound engineers and developers the merits of the new 3D recording and rendering techniques developed in the EPSRC project. 
 
Title Live concert recording in 3D for Korean Chamber Orchestra 
Description This audio recording was made at the Korean Chamber Orchestra's concert at the Queen Elizabeth Hall in London in February 2015. The recording was made using a novel microphone technique for 3D reproduction. 
Type Of Art Artefact (including digital) 
Year Produced 2015 
Impact The recording technique used for this recording has been demonstrated to audio engineers at a number of international conferences. Schoeps, one of the most famous microphone manufacturers, adapted the main concept of the technique for their new product ORTF-3D. 
 
Title Pure Audio Blu-ray album release of Siglo de Oro choir 
Description 3D recordings made using the PCMA-3D microphone technique, which is based on one of the journal publications from the project (Lee and Gribben 2014), have been released in the Pure Audio Blu-ray format by Delphian Records. The album includes the recordings encoded for the latest 3D audio technologies such as Dolby Atmos and Auro-3D, as well as in 5.1 and stereo. 
Type Of Art Artefact (including digital) 
Year Produced 2018 
Impact It is expected that this album will aid the dissemination of one of the most important research findings from the funded project - vertical microphone array configuration for 3D sound capture and reproduction. 
 
Description 1. The perceptual mechanism of the so-called Pitch-Height effect for virtual auditory images has been revealed. Formal experimental data on the perceived vertical positions of octave-band filtered virtual images have been provided for different azimuth angles. It has been found that the nature of virtual source elevation localisation is significantly different from that of real source elevation localisation.
2. It has been shown that the aforementioned vertical image position data can be successfully exploited for rendering different degrees of vertical image spread. This method has been tested for the 2D-to-3D upmixing of ambient sound. The results showed that the method was subjectively preferred to other conventional methods (a simplified sketch of the band-allocation idea behind this method is given after this list).
3. The association between the loudspeaker base angle and the perceived image elevation has been investigated in depth. It was generally shown that the perceived image is elevated from in front of the listener towards above the listener as the loudspeaker base angle increases from 0 to 180 degrees. It was newly found that the effect significantly depends on the spectral and temporal characteristics of the sound source; the effect was stronger for sources with a broad frequency spectrum, and frequency bands centred around 500 Hz and 8 kHz were found to produce the strongest elevation. These findings have important implications for practical applications such as 3D sound rendering, upmixing and downmixing.
4. A novel theory that ultimately explains the reason for the virtual image elevation effect has been established. Whilst the conventional theory based on the psychophysics of pinna spectral distortion is limited to explaining the effect at high frequencies, the proposed theory, based on the brain's cognitive interpretation of ear-input signals, is able to explain the effect at low frequencies as well.
5. A new 3D panning method named "virtual hemispherical amplitude panning (VHAP)" has been developed based on the phantom image elevation effect. This method allows one to render phantom images over the upper hemisphere without using physically elevated loudspeakers; only four loudspeakers on the horizontal plane are required.
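The following is a minimal, hedged sketch of the Perceptual Band Allocation idea referred to in finding 2 above: an ambience signal is split into octave bands and each band is routed to the lower or upper loudspeaker layer according to where that band tends to be perceived vertically. The Butterworth filter design, the band centres and the band-to-layer mapping shown here are illustrative assumptions, not the perceptually derived allocation established in the project.

```python
import numpy as np
from scipy.signal import butter, sosfilt

CENTRES = (125, 250, 500, 1000, 2000, 4000, 8000)  # illustrative octave band set

def octave_bands(fs, centres=CENTRES):
    """Design octave-wide band-pass filters (illustrative band edges)."""
    sos = []
    for fc in centres:
        lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)
        sos.append(butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos"))
    return sos

def pba_render(ambience, fs, band_to_layer=None):
    """Route each octave band of an ambience signal to the lower (0) or
    upper (1) loudspeaker layer. The default mapping below is a placeholder."""
    if band_to_layer is None:
        band_to_layer = {125: 0, 250: 0, 500: 0, 1000: 1, 2000: 1, 4000: 1, 8000: 1}
    lower = np.zeros_like(ambience)
    upper = np.zeros_like(ambience)
    for fc, sos in zip(CENTRES, octave_bands(fs)):
        band = sosfilt(sos, ambience)
        if band_to_layer[fc] == 0:
            lower += band
        else:
            upper += band
    return lower, upper
```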
Exploitation Route 1. The PBA method can be used by audio software developers for developing 3D upmixing systems. The main application areas include home theatre audio-visual receivers (AVRs), car audio and virtual reality over headphones.
2. The virtual image elevation method can be useful for 3D-to-2D audio downmixing in home environments where height loudspeakers are not available. The virtual hemispherical amplitude panning (VHAP) method, developed by exploiting the elevation principle, can be used for 3D audio object panning without height channels (an illustrative panning sketch is given after this list). This method can also be useful for binaural rendering for virtual reality applications.
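The VHAP formulation itself is published elsewhere; the sketch below only illustrates the underlying idea, i.e. that perceived elevation can be suggested by amplitude panning between front and rear loudspeaker pairs on the horizontal plane, because larger base angles are perceived as higher (finding 3). The elevation-to-position mapping and the constant-power gain law here are illustrative assumptions, not the published VHAP gain law.

```python
import numpy as np

def pan_pair(pos):
    """Constant-power panning between two loudspeakers; pos in [0, 1]."""
    theta = pos * np.pi / 2
    return np.cos(theta), np.sin(theta)

def vhap_like_gains(azimuth_deg, elevation_deg):
    """Toy gains for a quad layout (FL, FR, RL, RR) on the horizontal plane.

    Elevation is mapped onto the front-rear panning position to exploit the
    phantom image elevation effect. Placeholder mapping, not the VHAP law.
    """
    lr = np.clip((azimuth_deg + 90.0) / 180.0, 0.0, 1.0)   # -90..+90 deg -> 0..1
    fb = np.clip(elevation_deg / 90.0, 0.0, 1.0)           # 0 deg front, 90 deg overhead
    g_l, g_r = pan_pair(lr)
    g_front, g_rear = pan_pair(fb)
    gains = np.array([g_front * g_l, g_front * g_r,
                      g_rear * g_l,  g_rear * g_r])
    return gains / np.linalg.norm(gains)                   # normalise overall level

print(vhap_like_gains(azimuth_deg=0.0, elevation_deg=45.0))
```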
Sectors Creative Economy,Digital/Communication/Information Technologies (including Software),Electronics,Manufacturing, including Industrial Biotechnology

URL http://www.hud.ac.uk/apl
 
Description This research innovated the design of 3D microphone arrays for immersive sound recording. Novel psychoacoustic principles established in the research allowed a substantial reduction of the vertical microphone spacing compared with conventional arrays, whilst improving the sense of realism and spatial impression. In 2017, this was commercially exploited as the design basis for the award-winning ORTF-3D microphone array by Schoeps, a world-renowned German microphone manufacturer. This array has been used for broadcasting major events such as the FIFA World Cup, the BBC Proms and the French Open. Other microphone arrays developed on the basis of the research are being used by sound engineers at Abbey Road Studios (UK), MagicBeans (UK), ORF (Austria), Arizona PBS (USA) and the Sydney Opera House (Australia).
First Year Of Impact 2014
Sector Digital/Communication/Information Technologies (including Software),Education,Electronics,Environment,Manufacturing, including Industrial Biotechnology,Culture, Heritage, Museums and Collections
Impact Types Cultural,Societal,Economic

 
Description Volumetric audio synthesis for AR - VASAR
Amount £326,299 (GBP)
Funding ID 105175 
Organisation Innovate UK 
Sector Public
Country United Kingdom
Start 03/2019 
End 03/2021
 
Title Huddersfield Universal Listening Test Interface Generator 
Description HULTI-GEN takes user-defined parameters (e.g. the type of scale, the number of trials and stimuli, randomisation, settings for references and anchors, etc.) and automatically constructs a GUI suited to the requirements of the listening test. This allows the user to quickly create various types of multiple-stimulus and pairwise comparison test environments. To assist the user, HULTI-GEN also provides a number of presets based on ITU-R recommended methods such as MUSHRA and ABC/HR. The user can also flexibly edit these presets to adjust the recommended methods for different test requirements (e.g. adding audible anchors, removing the hidden reference, etc.). Subjects' responses are saved as a text file, which can be easily imported into Excel for data analysis. HULTI-GEN supports the playback of multichannel WAV files of up to 28 channels, which is useful for multichannel 3D sound quality evaluations. 
Type Of Material Improvements to research infrastructure 
Year Produced 2015 
Provided To Others? Yes  
Impact Over 300 downloads have been made since the release of the tool. The tool is being used by many researchers in the field of perceptual audio evaluation. 
URL http://eprints.hud.ac.uk/id/eprint/24809/
 
Title LED controller for localisation test 
Description Most auditory localisation tests use a visual number scale or a laser pointer as the response method. However, the number scale method suffers from visual bias. The laser pointer method is more intuitive but requires a motion tracking device and careful calibration. In order to overcome these limitations, I developed a new method using an LED strip and a wheel controller, driven by an Arduino microcontroller and Max 7 software. This method allows the test subject to move a single LED to the position corresponding to the perceived image position using the wheel controller. The position data are collected and saved by the software to a text file. This method has been used for a vertical localisation test and was shown to be very efficient. Subjects found it much easier and more intuitive to use than the number scale method. The consistency of subjects' responses was significantly better than with the number scale method, and the test time was also dramatically shorter. Additionally, the method allows the subject to mark multiple LED positions, which is useful when judging the left/right or lower/upper boundaries of the perceived image as well as the perceived image position. 
Type Of Material Improvements to research infrastructure 
Year Produced 2016 
Provided To Others? Yes  
Impact This method is expected to become a useful resource for spatial audio researchers. 
 
Title Microphone Array Impulse Response (MAIR) Library and Renderer 
Description MAIR is an open-access library of an extensive set of room impulse responses (RIRs) captured using numerous microphone arrays, from 2-channel stereo to 9-channel surround with height. The RIRs were obtained for 13 loudspeakers placed in various positions on the stage of St Paul's concert hall in Huddersfield, UK (RT60 = 2.1 s). The library features five 2-channel stereo pairs, 10 main surround arrays, nine height microphone arrays for 3D main arrays and 15 4-channel configurations for surround and 3D ambience arrays, each with varied microphone polar patterns, directions, spacings and heights. A dummy head and a first-order Ambisonics microphone are also included. The library is provided with a rendering tool, with which the user can easily simulate different microphone combinations in both loudspeaker and binaural playback for the 13 source positions. The audio inputs for the rendering tool can be fed directly from a DAW session as well as by manual file allocation. 
Type Of Material Database/Collection of data 
Year Produced 2017 
Provided To Others? Yes  
Impact It is expected that the database will become a useful resource for spatial audio researchers, students and sound engineers in their research on 3D sound. 
URL https://github.com/APL-Huddersfield/MAIR-Library-and-Renderer
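As a conceptual illustration of what such a renderer does (convolving a dry source with the RIR set of a chosen microphone array to obtain one signal per array output channel), here is a hedged sketch; the file names are hypothetical and the soundfile package is a third-party dependency, while the real MAIR renderer handles routing and playback internally.

```python
import numpy as np
from scipy.signal import fftconvolve
import soundfile as sf   # third-party: pip install soundfile

def render_with_array(dry_path, rir_paths):
    """Convolve a dry (anechoic) mono source with one RIR per output channel."""
    dry, fs = sf.read(dry_path)
    if dry.ndim > 1:
        dry = dry.mean(axis=1)                 # fold to mono for simplicity
    outputs = []
    for p in rir_paths:
        rir, fs_rir = sf.read(p)
        assert fs_rir == fs, "sample rates must match"
        if rir.ndim > 1:
            rir = rir[:, 0]
        outputs.append(fftconvolve(dry, rir))
    out = np.stack(outputs, axis=1)            # (samples, channels)
    out /= np.max(np.abs(out)) + 1e-12         # simple peak normalisation
    return out, fs

# Example (hypothetical file names):
# out, fs = render_with_array("violin_dry.wav",
#                             ["rir_main_L.wav", "rir_main_R.wav",
#                              "rir_height_L.wav", "rir_height_R.wav"])
# sf.write("rendered_4ch.wav", out, fs)
```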
 
Description 3D loudspeaker configuration project with Surrey and York Universities 
Organisation University of Surrey
Country United Kingdom 
Sector Academic/University 
PI Contribution - Discussed and generated future collaboration research ideas on 3D loudspeaker configuration.
Collaborator Contribution - Discussed and generated future collaboration research ideas on 3D loudspeaker configuration.
Impact - Generated detailed ideas for future collaborative project on 3D loudspeaker configuration. - This collaboration is ongoing. A future funding application is being prepared.
Start Year 2015
 
Description 3D loudspeaker configuration project with Surrey and York Universities 
Organisation University of York
Country United Kingdom 
Sector Academic/University 
PI Contribution - Discussed and generated future collaboration research ideas on 3D loudspeaker configuration.
Collaborator Contribution - Discussed and generated future collaboration research ideas on 3D loudspeaker configuration.
Impact - Generated detailed ideas for future collaborative project on 3D loudspeaker configuration. - This collaboration is ongoing. A future funding application is being prepared.
Start Year 2015
 
Description Automotive 3D audio with Volvo Cars 
Organisation Volvo Trucks
Country Sweden 
Sector Private 
PI Contribution - Provided consultancy on building a 3D reproduction system. - Provided various types of 3D recordings for demonstration and listener training purposes. - Conducted subjective evaluations on various 3D upmixing methods for car audio.
Collaborator Contribution - Provided expert feedback on 3D recordings and upmixing methods. - Held teleconference meetings. - Generated further collaborative research topics.
Impact - Produced detailed specifications for 3D audio reproduction system for automotive 3D audio research. - Evaluated various types of 3D upmixing algorithms in terms of perceptual attributes and subjective preference. - Generated further collaborative research topics on automotive 3D audio.
Start Year 2016
 
Description CeReNeM 
Organisation University of Huddersfield
Department Centre for Research in New Music (CeReNeM)
Country United Kingdom 
Sector Academic/University 
PI Contribution I have contributed to the development of new collaborative research topics in 3D spatialisation for electroacoustic music composition.
Collaborator Contribution Free use of equipment - microphones, amplifiers and studios. Free support of sound sources - musicians and instruments
Impact Development of a new research topic in 3D spatialisation - Psychoacoustics, music composition, musicology, software development
Start Year 2014
 
Description Knowledge transfer to Abbey Road Studios 
Organisation Abbey Road Studios Ltd
Country United Kingdom 
Sector Private 
PI Contribution I provided consultancy on 3D microphone techniques for immersive sound recording for a 6-degrees-of-freedom (6DoF) VR application that Abbey Road (AR) created. I assisted with two recording sessions in AR Studios 1 and 3.
Collaborator Contribution Abbey Road Studios' sound engineer and Head of Products, Mirek Stiles, created a 6DoF VR application using the recordings made in the collaboration.
Impact A 3D microphone array named ESMA-3D, developed based on some of the project's main findings, was adopted by AR engineers as a main array for recording a large orchestra for VR. The collaboration is multi-disciplinary (audio engineering, psychoacoustics and music).
Start Year 2018
 
Description Knowledge transfer to Central Sound Arizona PBS 
Organisation Arizona State University
Country United States 
Sector Academic/University 
PI Contribution I transferred knowledge about the 3D recording techniques developed in this project to Alex Kosiorek, chief sound engineer at Central Sound Arizona PBS/Arizona State University. I had several meetings with him, discussing the pros and cons of different 3D microphone techniques, and shared recording resources to help him find an optimal microphone setup for his recording sessions for public radio broadcasting.
Collaborator Contribution Alex Kosiorek at Central Sound made live recordings of Phoenix Chorale's concert using a microphone array designed based on the PCMA-3D concept, which is one of the main outcomes of the project.
Impact PCMA-3D has been adopted as one of Central Sound's "go-to" microphone arrays for 3D sound recording. This collaboration is multi-disciplinary (psychoacoustics, audio engineering and music).
Start Year 2018
 
Title HAART 
Description HAART (Huddersfield Acoustical Analysis Research Toolbox) is all-in-one software that simplifies the measurement and analysis of multichannel impulse responses (IRs). It is able to perform the acquisition, manipulation and analysis of IRs using subjective and objective measures described in the acoustics literature. HAART is also able to convolve IRs with audio material and, most importantly, to binaurally synthesise virtual multichannel loudspeaker arrays over headphones, negating the need for a multichannel setup when out in the field. 
Type Of Technology Software 
Year Produced 2015 
Open Source License? Yes  
Impact HAART was made available for free download in Aug 2015. HAART significantly improves the conventional workflow of impulse response capturing and acoustic analysis, and therefore it is expected to hugely benefit academics, students and researchers in the field of room/hall acoustics, psychoacoustics and spatial audio. 
URL http://eprints.hud.ac.uk/24579/
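The binaural synthesis feature described above follows the familiar virtual-loudspeaker approach: each loudspeaker feed is convolved with a head-related impulse response (HRIR) pair for that loudspeaker direction and the results are summed per ear. The sketch below is a hedged illustration under that assumption; the HRIR data set, array ordering and normalisation are placeholders, not HAART's internal implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_virtual_speakers(speaker_signals, hrirs):
    """Binaurally render multichannel loudspeaker signals over headphones.

    speaker_signals: array (n_speakers, n_samples) of loudspeaker feeds.
    hrirs: array (n_speakers, 2, hrir_len) - left/right HRIR per loudspeaker
           direction (measurement set and channel order are assumptions).
    """
    n_spk, _ = speaker_signals.shape
    out_len = speaker_signals.shape[1] + hrirs.shape[2] - 1
    out = np.zeros((2, out_len))
    for s in range(n_spk):
        out[0] += fftconvolve(speaker_signals[s], hrirs[s, 0])   # left ear
        out[1] += fftconvolve(speaker_signals[s], hrirs[s, 1])   # right ear
    return out / (np.max(np.abs(out)) + 1e-12)                  # peak normalise
```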
 
Title HULTI-GEN 
Description HULTI-GEN (Huddersfield Universal Listening Test Interface Generator) is a user-customisable environment that takes user-defined parameters (e.g. the number of trials, stimuli and scale settings) and automatically constructs an interface for comparing auditory stimuli, whilst also randomising the stimulus and trial order. To assist the user, templates based on ITU-R recommended methods have been included. As the recommended methods often need to be adjusted for different test requirements, HULTI-GEN also supports flexible editing of these presets. Furthermore, some existing test techniques are summarised in the accompanying engineering brief, including their restrictions and how they might be altered using HULTI-GEN. 
Type Of Technology Software 
Year Produced 2015 
Open Source License? Yes  
Impact Since this software enables one to create a listening test GUI flexibly and quickly, it is expected to have a high impact in the academic and research community. Researchers and students who work in subjective audio evaluation can greatly benefit from the software. Since the software became available in June 2015, the download count on the website has been increasing rapidly every month. It is expected that the impact will continue to grow. 
URL http://eprints.hud.ac.uk/24809/
 
Title Immersive Audio Renderer (IAR) 
Description IAR is a VST plugin for 3D audio production. Developed by Leo McCormack and Hyunkook Lee at the APL, the current version of IAR offers 3D panning (VBAP and DBAP) over a 9.1 3D loudspeaker setup and headphone externalisation of the loudspeaker signals. It also provides a novel 3D GUI with multiple points of view. 
Type Of Technology Software 
Year Produced 2015 
Open Source License? Yes  
Impact Since September 2015 the software has been downloaded 28 times. 
URL http://www.hud.ac.uk/research/researchcentres/mtprg/projects/apl/resources/iar/
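IAR's panning options include VBAP, which is a published technique (Pulkki); the sketch below is a brief, hedged refresher of the pairwise 2D case only (3D VBAP, as used over a 9.1 layout, operates on loudspeaker triplets), with arbitrary example angles. It is not IAR's source code.

```python
import numpy as np

def vbap_2d_pair(source_deg, spk1_deg, spk2_deg):
    """Gains for a phantom source between two loudspeakers using 2D VBAP.

    Solves g1*l1 + g2*l2 = p for unit vectors l1, l2 (loudspeaker directions)
    and p (source direction), then normalises for constant power.
    """
    def unit(deg):
        rad = np.deg2rad(deg)
        return np.array([np.cos(rad), np.sin(rad)])
    L = np.column_stack([unit(spk1_deg), unit(spk2_deg)])   # 2x2 base matrix
    g = np.linalg.solve(L, unit(source_deg))
    g = np.clip(g, 0.0, None)                               # no negative gains
    return g / np.linalg.norm(g)

# Example: source at +10 degrees between loudspeakers at +30 and -30 degrees.
print(vbap_2d_pair(10.0, 30.0, -30.0))
```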
 
Description 3D audio tutorial/demo - Audio Engineering Society 138th International Convention 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The aim of this activity was to help people understand the psychoacoustic principles of 3D sound and their practical applications. It included many practical audio demos involving a 3D loudspeaker setup. The talk/demo received very good feedback.

I think the talk/demo certainly raised awareness of the benefits of 3D sound, and helped people expand their knowledge of how to apply different signal processing methods to produce good quality 3D sound. After the talk/demo, a number of people came to me to give positive feedback or to have further discussion.
Year(s) Of Engagement Activity 2015
 
Description 3D audio tutorial/demo - Audio Engineering Society 139th International Convention in New York 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The tutorial/demo session was highly successful. The demo room was full and the audience engaged well with the session. After the session a number of people came to give me positive feedback and ask questions.

After the tutorial/demo session a number of industry experts gave highly positive feedback. In particular, people from Sennheiser, Volvo and Panasonic showed great interest in the research I am conducting and asked me to send them my papers and audio samples. They also said they would like to visit my lab when they next come to the UK.
Year(s) Of Engagement Activity 2015
URL http://www.aes.org/events/139/spatialaudiodemos/?ID=4781
 
Description HAART demo - Audio Engineering Society 138th International Convention 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The talk and demo were very successful. They attracted the attention of many people and received positive and constructive feedback.

After the talk/demo of the HAART software, many people said the software would be highly useful for their research and teaching. They signed up to a mailing list to receive further information about downloads and future updates. The talk also gave me opportunities to meet and have discussions with many leading academics.
Year(s) Of Engagement Activity 2015
 
Description HULTI-GEN demo - Audio Engineering Society 138th International Convention 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact This talk was received very well. Many people were interested in the HULTI-GEN software demonstrated and gave positive feedback.

A number of people signed up to a mailing list to receive future update information about the tool. This presentation/demo also led to conversations and discussions with many leading people in the audio industry. Furthermore, it helped raise awareness of the current EPSRC-funded research being conducted at the University of Huddersfield.
Year(s) Of Engagement Activity 2015
 
Description Invited panel for roundtable discussion at the 3rd International Conference on Spatial Audio 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Participants in your research and patient groups
Results and Impact I participated in a roundtable discussion on the future of 3D sound as an invited panel member. I mainly shared my views and provided discussion points on binaural 3D sound.

The participation gave me the opportunity to introduce my current research to a large audience from the industry. After the panel discussion a number of people from the industry agreed with the points I made during the discussion. It also enabled me to continue research discussions by email with some of the key people from the industry, such as Tom Holman of Apple and Gunther Theile of the IRT.
Year(s) Of Engagement Activity 2015
 
Description Invited talk at AES Midland workshop on Intelligent Music Production 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Participants in your research and patient groups
Results and Impact Current research on semantic audio mainly focuses on the tonal and dynamic aspects of audio, but not much research has been conducted on the semantics of 3D audio. My talk on "intelligent 3D music production" led to a discussion on collaborative research into semantic audio for spatial audio.

After my talk, several people from Queen Mary University of London and Birmingham City University showed a great interest in my research area and discussed future collaboration. A person from Music Group, one of the largest music technology companies in the industry, also said they would like to visit my 3D sound lab in Huddersfield. I was invited to give another talk at a future event on the same topic.
Year(s) Of Engagement Activity 2015
URL http://www.aes-uk.org/forthcoming-meetings/aes-midlands-workshop-on-intelligent-music-production/
 
Description Invited workshop/demo on 3D sound at the 3rd International Conference on Spatial Audio (ICSA) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Participants in your research and patient groups
Results and Impact The workshop/demo received very positive feedback from many leading industry and academic experts in spatial audio, including Gunther Theile of the IRT, Tom Holman of Apple, Florian Camerer of ORF, Robert Sazdov of Fraunhofer IIS and Franz Zotter of IEM Graz.

The workshop led to some constructive discussion among the people mentioned above on improving current 3D loudspeaker layouts. It also greatly raised awareness of the importance of the current EPSRC project I am working on.
Year(s) Of Engagement Activity 2015
 
Description PBA presentation - Audio Engineering Society 138th International Convention 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The presentation was about the main method, 'Perceptual Band Allocation (PBA)', which is being investigated in the current EPSRC project. It went very well, raising interesting questions and prompting constructive discussion.

This talk gave me an opportunity to raise awareness of the current EPSRC project and also to have discussions with some leading academics.
Year(s) Of Engagement Activity 2015
 
Description Presentation on virtual image elevation effect at the 139th AES convention in New York, 2015 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The talk was about my research findings on virtual auditory image elevation. After the presentation I received much positive feedback from industry experts. In particular, they all supported my new theoretical explanation of the reason for the perception of virtual image elevation.
Year(s) Of Engagement Activity 2015
URL http://www.aes.org/e-lib/browse.cfm?elib=17997
 
Description Tonmeistertagung conference 2014 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The presentation raised strong interest among some renowned academics in the current EPSRC-funded research I am conducting. Many people who attended the talk agreed on the importance of the research.

This talk led to conversations with many leading academics, research institutes and companies about the topic area. In particular, IEM in Austria and Fraunhofer IIS in Germany showed interest in future collaboration, and this is being discussed.
Year(s) Of Engagement Activity 2014