On stage at Google I/O 2017 today, Google announced Google Lens, the culmination of major improvements the company has made in machine learning. Leveraging Google’s computer vision tech, Google Lens can use your phone’s camera to identify objects around you and give you more information about them.
Google Lens can understand what you’re looking at, and Google had some pretty interesting examples on stage to show that off. For example, you can point Lens at a flower, and it will serve up more information on what the flower is. A slightly more impressive demo showed Lens recognizing the Wi-Fi SSID and password printed on a router’s sticker and using that information to automatically log you into the network.
On top of that, Lens can use your location, orientation, and Street View data to bring up more information about restaurants and other businesses around you. Think of it like Samsung’s Bixby, but better.
All of this is part of what Google CEO Sundar Pichai describes as the company’s shift from a mobile-first world to an AI-first one. That push has moved Google toward “an inflection point with vision,” Pichai said.
Google Lens will arrive first as part of Google Assistant and Google Photos.
For more from Google I/O, keep following along with our liveblog.