Google’s big data advantage might help it surpass the Galaxy S8’s fledgling Bixby feature.
Google Lens was one of the major announcements of the I/O 2017 keynote, as Google revealed the latest step in its visual search journey. It's an endeavor that traces back to Google Image Search years ago, and a close relative of the AI powering Google Photos' object and scene recognition.
As a part of Google Assistant, Google Lens has the potential to reach every Android phone or tablet on Marshmallow and up, letting these devices recognize things visually (with a little help from location data) and conjure up information about them. For example, you might be able to identify a certain flower visually, then bring up info on it from Google’s knowledge graph. Or it could scan a restaurant in the real world, and bring up reviews and photos from Google Maps.
Whether it's through a camera interface in Google Assistant, or after the fact through Google Photos, the strength of Lens, if it works as advertised, will be accurate identification and the useful information it can serve up as a result. It's not too much of a stretch to imagine the feature might well be baked into the Google camera app on the next generation of Pixel phones.
Big, BIG data
Like all the best Google solutions, Google Lens is rooted in big data. It’s ideally suited to Google, with its vast reserves of visual information and growing cloud AI infrastructure. Doing this instantly on a smartphone is a step beyond running similar recognition patterns on an uploaded image via Google Image Search, but the principles are the same, and you can easily draw a straight line to Google Lens, starting with Image Search and going through Google Goggles.
Back in 2011, Google Goggles was futuristic and, in the right setting, genuinely impressive. In addition to increased speed, Google Lens goes a step beyond it by not only identifying what it's looking at, but understanding it and connecting it to other things that Google knows about. It's easy to see how this might be extended over time, tying visible objects in photos to the information in your Google account.
At a more advanced level, Google's VPS (visual positioning system) builds on the foundations of Google Lens on Tango devices to pinpoint specific objects in the device's field of vision, like items on a store shelf. As mainstream phone cameras improve, and as the trend towards multiple lenses in high-end phones continues, there's every chance VPS could eventually become a standard Lens feature.
What Bixby Vision should have been
The potential for Google Lens is only going to grow as Google's capabilities in AI and big data become stronger. And the contrast with one of the Galaxy S8's most publicized features is pretty stark. Samsung is still a relative newcomer in AI, and that's reflected in the current weakness of Bixby Vision.
Right now Bixby Vision can help you identify wine (badly) through Vivino, flowers and animals (with varying degrees of success) through Pinterest, and products through Amazon. Samsung doesn't have its own mountain of data to fall back on, so it has to rely on specific partnerships for various types of objects. What's more, while Samsung can (and apparently does plan to) bring Bixby to older phones via a software update, Google could conceivably flip the switch through Assistant and open the floodgates to everything running Android 6.0 and up.
Anyone who’s used Bixby Vision can attest that it just doesn’t work very well, and Google Lens seems like a much more elegant implementation. We don’t yet know how well Lens will work in the real world, but if it’s anywhere near as competent as Google Photos’ image identification skills, it’ll be something worth looking forward to.