The company that pioneered a smartphone that can understand you – Apple, with its Siri digital assistant – seems to be lagging behind in speech recognition accuracy. But according to a new report in Wired, the company has gone on a hiring spree to bring Siri up to snuff.
According to the article, Apple has hired engineers and researchers formerly of Microsoft and Nuance, the latter being the company behind the software that powers Siri in the first place. Alex Acero, a 20-year veteran of Microsoft, is now “a senior director in Apple’s Siri group.” Along with Acero, Apple has hired Gunnar Evermann, formerly of Nuance, and University of Edinburgh researcher Arnab Ghoshal. The result? A team that’s actively working on making Siri the best it can be.
The post explains that together they’ll be using “deep learning” and “neural network algorithms” to improve the accuracy of Siri’s voice recognition powers. Thus far, Google Now has surpassed Siri because it relies on this technique, as does Microsoft’s Skype translation and – though it’s not mentioned – probably its Cortana digital assistant for Windows Phone as well.
What could be motivating Apple’s sudden push to improve voice recognition? Probably the impending release of the iPhone 6 – and, even more importantly, the long-rumored launch of the iWatch. As we saw during Google I/O last week, Google’s Android Wear OS relies heavily on voice recognition. Apple will need to stay competitive in every way, and Android Wear seems to be the new standard against which wearable operating systems will be compared. If the iWatch can’t understand users as well as a Gear Live can, then Apple will be at a serious disadvantage.
Or maybe that should be a “Siri-ous” disadvantage.