Microsoft studying multilingual speakers so virtual assistants can understand us better
People in multilingual societies such as Hong Kong switch back and forth between languages, which can pose problems for virtual assistants – and Microsoft is working to bridge this human-machine gap

By Saheli Roy Choudhury
Voice-powered virtual assistants underpinned by artificial intelligence, such as Apple’s Siri, Amazon’s Alexa, Google Assistant and Microsoft’s Cortana, are becoming regular fixtures in people’s lives. They’re present in homes, on devices and watches, and in cars, delivering driving directions, weather updates, meeting reminders and the occasional joke when prompted.
Even so, they remain limited in their ability to hold conversations with users the way real people might. Efforts are under way, however, to use machine learning and real-time big data analytics to help virtual assistants understand multiple languages, accents, contexts and nuances, and so hold more human-like conversations.

International Data Corporation predicts global spending on cognitive and AI solutions will grow significantly over the next several years, at a compound annual growth rate of 54.4 per cent through 2020.
Microsoft, for example, is turning to an unlikely group to bridge the gap between humans and machines: bilinguals. Of specific interest is code-mixing, the practice of switching back and forth between multiple languages within a single sentence or conversation. It is commonly found in multilingual societies.