Autonomous Systems for Sound Integration and GeneratioN (ASSIGN)
Lead Participant:
RPPTV LIMITED
Abstract
In immersive media and game sound design, the biggest challenge is the effort required to source sounds and integrate them with the timeline and visual content. We propose an intelligent decision-making system that generates sounds (with their immersive context) from other sensor data. The Autonomous Systems for Sound Integration and GeneratioN (ASSIGN) project exploits innovative vision-based object recognition technologies to control sound synthesis techniques, so that captured video information can drive sound generation, placement and perspective. This parallels visual effects and computer games, where rendering is driven by high-level information: if a man drops a glass, we see it falling in the virtual world of the game, film or augmented reality. The animation is a property of the object, and sound effects should follow the same paradigm. The business potential is compelling, since ASSIGN could revolutionise the sound design process. Outputs will include a prototype for autonomous sound effect generation, together with market analysis, business models and a road map to launching a commercial service.
Lead Participant | Project Cost | Grant Offer
---|---|---
RPPTV LIMITED | £219,554 | £153,688

Participant | Project Cost | Grant Offer
---|---|---
MIXED IMMERSION LTD | £51,963 | £36,374
QUEEN MARY UNIVERSITY OF LONDON | £116,428 |
People | ORCID iD
---|---
Will Buchanan (Project Manager) |