Interview: Movidius CEO Talks Google Partnership, Advanced Computer Vision in Next-Gen Devices
Today, Google announced a partnership with San Mateo, California-based vision processor firm Movidius to enable the next generation of apps to better see and comprehend the world. The team-up is part of Google’s Project Tango, the tech giant’s effort to make our smart devices truly smart—and Movidius’s Myriad 1 vision processor architecture is a major part of that plan.
Johnny Lee, Google’s Technical Program Lead in the Advanced Technology & Projects Division, explains Project Tango’s mission in the press release about the partnership with Movidius that was published today:
“Project Tango strives to give mobile devices a human-like understanding of space and motion through advanced sensor fusion and computer vision, enabling new and enhanced types of user experiences—including 3D scanning, indoor navigation and immersive gaming.”
But to achieve such a worthwhile goal as giving a computer the ability to see and understand its environment, one would need some major upgrades to the capabilities of the current crop of processors. Fortunately, it seems that Movidius is up to the task.
The company claims that its Myriad 1 vision processor boasts ten times the computer-vision processing power and speed of today’s processors, while consuming only a fraction of the energy. If the Myriad 1 can do all that Movidius says it can, our smart devices won’t just capture images of the world around us—they’ll actually understand it.
Company CEO Remi El-Ouazzane explained in an interview with BestTechie how the Myriad 1 will help Project Tango achieve its goal of smarter devices.
“My company will not stop working until we can give your mobile device as accurate a view as what your two eyes and brain can do on a daily basis,” he said.
El-Ouazzane described the Myriad 1 as comprising three elements: intelligent vision, high efficiency, and, quite simply, enabling “cool apps.”
“We offer a portfolio of development tools and software libraries to enable the building of new vision-based applications,” he said, “or, if you prefer, applications which are leveraging the visual sensing that we are enabling.”
He provided an example of what the Myriad 1 will accomplish when utilized by an app developer:
“When you take a tablet, and you get cameras in your tablet, which essentially allows you to capture the depth of a scene or to capture a lot of scene intelligence, like tracking of features, or matching of features, or recognition of features on top of the depth map, then your game can take advantage of this,” he said. “Your game can localize you in space, can localize you in motion, and as such can immerse virtual objects in the real scene you’re capturing. And that persistent gaming, that immersive gaming type of application is now becoming real with those systems.
“We can provide a lot of metadata to the game engine about the scene which is being captured which can make the game in your mobile phone really immersive.”
But games aren’t the only application for the Myriad 1’s power.
“Some applications can also be around indoor navigation, because you can fundamentally use this technology to bring indoor navigation in GPS denied areas where you’re using your visual sensing to navigate in space,” said El-Ouazzane.
“Other applications can be something which has a better impact for society. You can think of devices for visually impaired people who could fundamentally carry around their neck their mobile device, for example. And this mobile device will extract the intelligence from the scene and tell this person ‘please stop,’ ‘it’s a red light,’ ‘you are a hundred meters away from the crossing line,’ ‘there is a step—be careful.’
“You can actually bring a lot of contextual awareness for visually impaired people—that’s another application.”
When asked whether the Myriad 1 will be used in some of Google’s higher-profile initiatives, like Google Glass, the self-driving car, or the recently announced smart contact lenses, El-Ouazzane said simply, “I can’t talk about that.” But while he couldn’t speak to other specific projects, he added that the uses for the Myriad 1 are potentially limitless.
“The amount of momentum we have in the context of multiple wearable applications is very large,” he said. “As to the specifics with what Google wants to do with this technology outside of Project Tango, I’m really not at liberty to talk about it.
“But if you want to augment your daily life with visual sensing, you better make sure that your visual sensing is extracting the intelligence from the scene so that the smartphone in your pocket can actually get activated when it matters.
“And that vision, no pun intended, has been the vision of my company since we started seven years ago,” he added.
In short, the Myriad 1 and the partnership with Google are a validation of those seven years of collective work put in by El-Ouazzane and Movidius.
“I’m trying to give you a more elaborate answer than ‘it feels great,’” said El-Ouazzane on the occasion of the announcement.
“At the end of the day, we are enabling a brand new set of applications, but working with someone who has such an impact on the application developer’s community is a big deal.”
Hopefully it won’t be long before we can see El-Ouazzane and Google’s vision—computer and otherwise—for ourselves.