How Microsoft Teams uses AI and machine learning to improve calls and meetings

Microsoft Teams

Disruptive echo effects, poor room acoustics, and choppy video are common issues that hinder the effectiveness of online calls and meetings. Through AI and machine learning, which have become fundamental to our strategy for continual improvement, we’ve identified and are now delivering innovative enhancements in Microsoft Teams that address these audio and video challenges in ways that are both user-friendly and scalable across environments.

Microsoft is announcing the availability of new Teams features, including echo cancellation, audio adjustment for poor acoustic environments, and the ability for users to speak and hear at the same time without interruptions. These build on recently released AI-powered features such as expanded background noise suppression.

Video: https://www.microsoft.com/en-us/videoplayer/embed/RE4Zccg

Echo cancellation

During calls and meetings, when a participant has their microphone too close to their speaker, it’s common for sound to loop between input and output devices, causing an unwanted echo effect. Now, Microsoft Teams uses AI to recognize the difference between sound from a speaker and the user’s voice, eliminating the echo without suppressing speech or inhibiting the ability of multiple parties to speak at the same time.
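The article does not describe Teams’ model, but the underlying problem has a well-known classical baseline: a normalized least-mean-squares (NLMS) adaptive filter learns the echo path from the loudspeaker (far-end) signal and subtracts the predicted echo from the microphone signal. The sketch below is that textbook baseline, not Teams’ implementation; the function name and parameters are illustrative.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, filter_len=64, mu=0.5, eps=1e-8):
    """Classical acoustic echo cancellation with an NLMS adaptive filter.

    far_end: samples played through the loudspeaker
    mic:     samples captured by the microphone (echo + near-end speech)
    Returns the residual signal with the estimated echo removed.
    """
    w = np.zeros(filter_len)      # adaptive estimate of the echo path
    buf = np.zeros(filter_len)    # most recent far-end samples, newest first
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        echo_est = w @ buf                       # predicted echo at the mic
        e = mic[n] - echo_est                    # residual: near-end speech + error
        w += (mu / (buf @ buf + eps)) * e * buf  # normalized LMS update
        out[n] = e
    return out
```

Because the filter only models the loudspeaker-to-microphone path, near-end speech passes through largely untouched, which is why adaptive cancellation (unlike crude suppression) lets both parties talk at once.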

“De-reverberation” adjusts for poor room acoustics

In certain environments, room acoustics can cause sound to bounce, or reverberate, making the user’s voice sound hollow, as if they were speaking inside a cavern. For the first time, Microsoft Teams uses a machine learning model to convert the captured audio signal so that it sounds as if the user were speaking into a close-range microphone.
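Again, Teams uses a learned model here; a rough sense of what dereverberation does can be had from a classical spectral-subtraction baseline, which treats late reverberation as an exponentially decayed copy of earlier frame energy and attenuates it. Everything below (function name, frame sizes, the T60 decay model) is an illustrative assumption, not the Teams approach.

```python
import numpy as np

def suppress_late_reverb(x, sr=16000, frame=512, hop=256,
                         t60=0.6, delay_frames=4, floor=0.1):
    """Toy single-channel late-reverberation suppression (spectral subtraction).

    Models late reverb power in each STFT frame as a decayed copy of the
    power delay_frames earlier (decay set by an assumed T60), then applies
    a spectral gain that attenuates the predicted reverberant energy.
    """
    win = np.hanning(frame)
    # power decay over the prediction delay, from the T60 definition
    # (60 dB of energy decay over t60 seconds)
    decay = np.exp(-6.0 * np.log(10.0) * hop * delay_frames / (t60 * sr))
    powers = []
    out = np.zeros(len(x) + frame)
    for i, start in enumerate(range(0, len(x) - frame, hop)):
        spec = np.fft.rfft(win * x[start:start + frame])
        p = np.abs(spec) ** 2
        if i >= delay_frames:
            late = decay * powers[i - delay_frames]          # predicted late-reverb power
            gain = np.maximum(1.0 - late / (p + 1e-12), floor)
        else:
            gain = np.ones_like(p)                           # not enough history yet
        powers.append(p)
        out[start:start + frame] += win * np.fft.irfft(gain * spec, frame)
    return out[:len(x)]
```

The learned approach replaces this hand-tuned decay model with a network trained to map reverberant speech directly to its close-microphone equivalent, which handles rooms whose acoustics a single T60 parameter cannot capture.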
