Apple apparently assembling team of speech-recognition experts to create neural network-powered Siri
Date: Tuesday, July 1st, 2014, 10:16
Category: News, Software
It’s time to have a more meaningful chat with Siri.
Per Wired, Apple is apparently assembling a team of A-list speech recognition researchers, including high-ranking employees from Nuance, to create an in-house Siri engine based on neural networks.
The article states that Apple has assembled a group of software engineers and researchers drawn from Nuance, the firm responsible for Siri’s voice-recognition functionality, and other companies to work toward a next-generation backbone for the virtual assistant.
The publication points to a number of Apple hires over the past few years, including Nuance’s former vice president of research Larry Gillick and Gunnar Evermann, now Siri’s speech project manager.
Speaking to the publication, Microsoft research division head Peter Lee said Apple hired Alex Acero away from the Redmond, Wash. software giant in 2013. Acero is now a senior director on the Siri team.
Neural networks, a term for machine learning algorithms that operate in a manner loosely analogous to the brain’s neurons, have also been deployed at IBM, Microsoft and Google in various speech-related applications.
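For readers unfamiliar with the term, the idea can be illustrated with a minimal sketch: a single artificial “neuron” takes a weighted sum of its inputs and squashes the result through an activation function, loosely mimicking how a biological neuron fires. The inputs, weights and bias below are made-up illustrative values, not anything from Apple’s, Nuance’s or Microsoft’s actual systems.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, like signals arriving at a biological neuron
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid "activation": squashes the sum into (0, 1),
    # loosely analogous to a neuron's firing strength
    return 1 / (1 + math.exp(-total))

# Hypothetical acoustic features feeding one neuron (illustrative values only)
output = neuron([0.5, -1.2, 0.8], [0.9, 0.3, -0.5], bias=0.1)
print(output)
```

A full speech-recognition network stacks many layers of such neurons and learns the weights from large amounts of audio data, which is what makes the approach attractive to the companies named above.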
Given the reported hires, Lee suspects Apple is planning a neural network-powered Siri backbone built entirely in-house.
“All of the major players have switched over except for Apple Siri,” Lee said. “I think it’s just a matter of time.”
Current builds of Apple’s next-generation iOS 8 still include a Nuance-powered Siri, though the virtual assistant has a few new tricks up its sleeve, including Google Now-style real-time speech-to-text and control of smart home products through HomeKit integration, among other enhancements.
Stay tuned for additional details as they become available.