Google Lens is an image recognition mobile app developed by Google. First announced during Google I/O 2017, it is designed to bring up relevant information using visual analysis.
Google I/O (simply I/O) is an annual developer conference held by Google in Mountain View, California. I/O showcases in-depth technical sessions focused on building web, mobile, and enterprise applications with Google and open source technologies such as Android, Chrome and Chrome OS, as well as APIs, Google Web Toolkit, App Engine, and more.
Google Lens is an AI-powered technology that uses your smartphone camera and deep machine learning to not only detect an object, but understand what it detects and offer actions based on what it sees.
"If you want to know where Google is headed, look through Google Lens," said Google CEO Sundar Pichai.
Pichai said in his founders' letter a year earlier that part of this shift to being an AI-first company meant computing would become less device-centric. Lens is an example of that shift on mobile.
It lets you point your phone at an object, such as a specific flower, and ask Google Assistant what it is. You'll not only be told the answer, but you'll also get suggestions based on the object, such as nearby florists in the case of a flower.
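This "recognise, then suggest actions" flow can be sketched as a simple mapping from a predicted label to follow-up actions. This is a toy illustration of the idea, not Google's implementation; the classifier is stubbed out, and the labels and actions are invented for the example:

```python
# Toy sketch of a Lens-style "detect, then suggest actions" flow.
# In the real product, classify() would be a deep vision model.

ACTIONS = {
    "flower": ["identify the species", "find nearby florists"],
    "restaurant": ["show reviews", "show opening hours", "get directions"],
    "hand": ["suggest the thumbs-up emoji"],
}

def classify(image_bytes: bytes) -> str:
    """Stand-in for a real image classifier (hypothetical)."""
    # A real system would run a neural network here; we fake a label.
    return "flower"

def suggest_actions(image_bytes: bytes) -> list:
    """Map the recognised label to a list of suggested actions."""
    label = classify(image_bytes)
    # Fall back to a generic visual search when the label is unknown.
    return ACTIONS.get(label, ["search the web for this image"])

print(suggest_actions(b"..."))  # ['identify the species', 'find nearby florists']
```

The point of the sketch is the separation of concerns: the vision model only produces a label, while a separate layer decides what actions that label should unlock.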
Google Lens can also read the SSID sticker on the back of a Wi-Fi router: take a picture of it, and your phone will connect to the Wi-Fi network automatically, without you needing to do anything else. No more crawling under the cupboard to read out the password while typing it into your phone. With Google Lens, you can literally point and shoot.
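Lens reads the printed credentials via OCR, but the standard machine-readable form of Wi-Fi credentials is the `WIFI:` QR-code payload (for example `WIFI:T:WPA;S:HomeNet;P:hunter2;;`). A minimal parser for that payload shows what a phone has to extract before it can join the network. This is a sketch of the published payload format, not of Lens itself, and it ignores the escaping rules for literal `;` characters:

```python
def parse_wifi_payload(payload: str) -> dict:
    """Parse a standard WIFI: QR-code payload into its fields.

    Example: 'WIFI:T:WPA;S:HomeNet;P:hunter2;;'
    where T = auth type, S = SSID, P = password.
    Escaped characters are not handled in this sketch.
    """
    if not payload.startswith("WIFI:"):
        raise ValueError("not a Wi-Fi payload")
    fields = {}
    for part in payload[len("WIFI:"):].split(";"):
        if ":" in part:
            key, _, value = part.partition(":")
            fields[key] = value
    return fields

creds = parse_wifi_payload("WIFI:T:WPA;S:HomeNet;P:hunter2;;")
print(creds["S"], creds["P"])  # HomeNet hunter2
```

Once the SSID and password are extracted, the OS-level Wi-Fi API handles the actual connection; the hard part Lens solves is getting from pixels to these three fields.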
With Google Lens, your smartphone camera won't just see what you see, but will also understand what you see to help you take action. #io17
— Google (@Google) May 17, 2017
Google Lens will recognise restaurants, clubs, cafes, and bars, too, presenting you with a pop-up window showing reviews, address details, and opening times. It's the ability to recognise everyday objects that's most impressive: it will recognise a hand and suggest the thumbs-up emoji, which is a bit of fun, but point it at a drink and it will try to figure out what it is.
What can Google Lens do?
Aside from the scenarios described above, Google has continued to update Google Lens with new capabilities.
If you've been waiting for Google Lens to come to more phones, that day has finally arrived. At Google I/O 2018, Google announced that Lens is coming to many more devices, and the app is now available to download from Google Play.
References:
Townsend, T. (2017, May 19). Google Lens is Google’s future. Retrieved from https://www.recode.net/2017/5/19/15666704/google-lens-key-example-ai-first-computer-vision
Carman, A. (2018, March 06). Google Lens is coming to all Android phones running Google Photos. Retrieved from https://www.theverge.com/2018/3/6/17086688/google-lens-android-photos-launch-roll-out
Pocket-lint. (2018, June 06). Google Lens: What is it and how does it work? Retrieved from https://www.pocket-lint.com/apps/news/google/141075-what-is-google-lens-and-how-does-it-work-and-which-devices-have-it