Audio Technology
Machine Learning to Improve Speech-in-Noise for Those with Hearing Loss
A common cause of aural diversity is hearing loss, with 12 million people in the UK having hearing loss in both ears. The main treatment for this is hearing aids, but their performance for speech intelligibility in noisy places still needs improvement. Machine learning has great potential to improve speech in noise. This PhD is designed to build on the Clarity Project (https://claritychallenge.org/), which has been running a series of machine learning challenges to improve the processing of speech in noise on hearing aids.
Your PhD will involve developing and running new challenges in collaboration with members of the Clarity team. Opportunities for novel research will arise from this. To give two examples: (1) You might develop a better hearing loss model for machine learning, to form part of a software baseline, or (2) You might develop new listening test methods that allow more ecologically valid assessment of the audio. Applicants for this PhD need to be competent coders, preferably with experience of Python, Git and machine learning frameworks.
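To make the first example concrete, here is a minimal sketch of an audiogram-based hearing loss simulation of the kind that might feed a machine learning baseline. The function name, audiogram values and filter design choice (a linear-phase FIR fitted to the audiogram with scipy.signal.firwin2) are illustrative assumptions, not the Clarity baseline itself.

```python
# A minimal sketch of an audiogram-based hearing loss simulation: a linear-phase
# FIR filter is fitted to the audiogram so each frequency is attenuated by the
# listener's hearing level. Values and design choices here are illustrative.
import numpy as np
from scipy.signal import firwin2, lfilter

def simulate_hearing_loss(signal, fs, audiogram_freqs, audiogram_levels_db,
                          numtaps=511):  # odd length: response at Nyquist is nonzero
    nyq = fs / 2
    # Frequency/gain specification on [0, 1], extended flat to DC and Nyquist.
    freqs = np.concatenate(([0.0], np.asarray(audiogram_freqs) / nyq, [1.0]))
    losses = np.concatenate(([audiogram_levels_db[0]], audiogram_levels_db,
                             [audiogram_levels_db[-1]]))
    gains = 10.0 ** (-losses / 20.0)  # dB hearing loss -> linear attenuation
    fir = firwin2(numtaps, freqs, gains)
    return lfilter(fir, [1.0], signal)

# Example: a sloping high-frequency loss applied to a two-tone test signal.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 4000 * t)
y = simulate_hearing_loss(x, fs,
                          audiogram_freqs=[250, 500, 1000, 2000, 4000, 6000],
                          audiogram_levels_db=[10, 10, 20, 35, 50, 60])
```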
Supervisors: Trevor Cox and Ian Drumm
Perception of Acoustics in VR, AR, video games and e-sports for aurally diverse users
There’s a proliferation of virtual display technologies for entertainment, training and marketing, among other applications. Typically, content on these platforms does not meet the needs of those with diverse aural perception. This project will research how these technologies may be made more inclusive for those with hearing differences. The project will entail the creation of environments on VR/AR platforms and the design of subjective tests to understand responses to inclusive strategies. Skills in programming, the production of immersive audio environments, and the design and deployment of subjective testing are desirable.
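For a flavour of the subjective testing side, here is a small sketch of trial randomisation for a paired-comparison listening test; the condition names are invented for illustration, and any real design would follow established listening test practice.

```python
# A small illustrative sketch of trial randomisation for a paired-comparison
# listening test. Condition names are hypothetical examples of inclusive
# rendering strategies, not project deliverables.
import itertools
import random

conditions = ["baseline", "dialogue_boost", "reduced_dynamic_range", "simplified_spatial"]

def build_trials(conditions, repeats=2, seed=42):
    """Every ordered pair of distinct conditions, repeated and shuffled,
    so presentation order and position biases are balanced across a session."""
    rng = random.Random(seed)
    pairs = list(itertools.permutations(conditions, 2))
    trials = pairs * repeats
    rng.shuffle(trials)
    return trials

for a, b in build_trials(conditions)[:3]:
    print(f"Play A={a}, B={b}; ask which is easier to follow.")
```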
Supervisor: Bruno Fazenda
Individualised remixing of the sonic environment
Some aurally divergent individuals find many environmental sounds disturbing, confusing or even painful to hear. Recent work in AI has improved the capabilities of sound source separation, sound enhancement (e.g., of speech) and sound reduction (e.g., of background noise). This project would investigate deep learning techniques for sound identification and separation, with the aim of facilitating real-time rebalancing of environmental sounds based on individual requirements or needs.
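As a concrete illustration of the rebalancing idea, the sketch below remixes already-separated stems under per-class gains chosen by a listener. The class names, gain values and synthetic stems are hypothetical placeholders; the separation network itself would be a core research question of the project.

```python
# A minimal sketch of individualised remixing, assuming a separation model has
# already produced one waveform ("stem") per sound class. Class names and gains
# are hypothetical examples.
from typing import Dict
import numpy as np

def remix(stems: Dict[str, np.ndarray], user_gains_db: Dict[str, float]) -> np.ndarray:
    """Rebalance separated stems with per-class gains chosen by the listener."""
    out = np.zeros_like(next(iter(stems.values())))
    for name, stem in stems.items():
        gain = 10.0 ** (user_gains_db.get(name, 0.0) / 20.0)  # dB -> linear
        out += gain * stem
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out  # simple peak protection

# Demo with synthetic stems standing in for a separator's output:
fs = 16000
t = np.arange(fs) / fs
stems = {
    "speech": 0.5 * np.sin(2 * np.pi * 220 * t),
    "traffic": 0.3 * np.random.default_rng(0).standard_normal(fs),
    "alarm": 0.4 * np.sin(2 * np.pi * 2500 * t),
}
# Example profile: keep speech, pull traffic down, remove the alarm almost entirely.
mix = remix(stems, {"speech": 0.0, "traffic": -12.0, "alarm": -60.0})
```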
Supervisors: Ben Shirley and Chris Hughes
Assistive listening in an Auracast era
Auracast is a new broadcast audio technology that promises to revolutionise access to audio in private and public spaces. The technical specifications define how devices communicate, but not how they should be used. Drawing on research in acoustics, technology and psychology, this project offers an opportunity to explore the perceived benefits of assistive listening and what system designers, installers and managers need to do to ensure that these benefits are delivered for those with hearing diversity.
Supervisor: Trevor Cox