Google Lens offers a clear view of the company’s future

Adding vision to its growing AI prowess is just the start.

Google Lens is both a return to form for the search giant and a tantalizing glimpse into what lies ahead. Google's early claim to fame was its ability to index the web efficiently and return search results quickly, bringing some much-needed organization to the chaotic early days of the internet. Lens, similarly, uses computer vision and AI to make sense of your photos, videos and the real world. It's basically Google search for everything outside of screens.

Most intriguingly, Lens is yet another way for Google to expand on its original mission statement: "to organize the world's information and make it universally accessible and useful."

Though we've only seen a brief, pre-produced demonstration of Lens, it looks compelling. Through the Google Assistant, it can identify the type of flower you're looking at or highlight reviews and ratings when you're pointing your phone at a storefront. We've seen glimpses of these capabilities in Google Goggles and Yelp's Monocle, both of which showed off the potential of AR but were too early to be genuinely useful. Now, with the advantage of improved computer-vision algorithms, better cameras and more capable devices and networks, Google is in a much better place to make AR an essential computing tool.

With Lens's more-advanced capabilities, we're seeing just how far Google's AR technology has come. Pointing your camera at a concert-venue marquee, for example, lets you easily buy tickets or add events to your calendar. You could also quickly join a WiFi network by focusing Lens on a router's login information (of course, that assumes nobody changed the network name or password).

If you want to understand what makes Lens truly important, rather than just a whiz-bang keynote demo, you have to look at where computing is headed. We're moving away from older input mechanisms, like keyboards and mice, and toward things like voice commands and computer vision. Devices of the future will need to see and hear the world to make sense of it. And, increasingly, they'll need to do that without explicit input from users.

With Lens, Home and its larger focus on AI, Google is setting itself up for the next big wave of computing. We're already seeing Amazon invest in that future with Alexa and its Echo devices; you can think of its camera-equipped Echo Look as a big step toward refining its computer-vision capabilities. Microsoft also made it clear at its Build conference last week that it's exploring all of these new facets of computing: the company is bringing Cortana to more devices and relying heavily on deep learning and computer vision with Story Remix.

While Lens has loads of potential, there are also reasons to be skeptical. Google's AI capabilities have fallen short in the past, as when its Photos app mistakenly labeled black people as "gorillas." As we rely more on technology to catalog and define the world's information, companies like Google will have to make sure their algorithms reflect the nuances of human identity. Hiring a more diverse workforce would be a good start.