How Google Lens is preparing the world for an augmented reality takeover


Without commenting on specific plans, I think that's absolutely right: it's no different from how we think about voice and speech transcription.

When I had Dragon NaturallySpeaking, the way it worked was that you had your huge Windows tower computer and a microphone you talked into. And that was cool, wasn't it? But where speech is actually being used more and more is through an ambient microphone on your phone. So when I want to know the weather, I don't have to get up and walk anywhere, I just say, "What's the weather like?"

In the same way that speech recognition migrated to other form factors, what we do is say, "Give us a picture, or a series of pictures, or a video, and we will help you understand it." The current manifestation of this is that people have these phones, but the technology we've developed goes beyond that.

With speakers like Google Home mainstreaming voice search, do you think the same goes for glasses, which would take technology like Lens to the next level?

It's an interesting question, isn't it? Because for certain applications, it's absolutely clear to me. Whether it's 10 or 30 years from now, once we have these magical contact lenses: I'm in a shop, I see a shoe, I like it, and boom. It's very Terminator 2. For those kinds of queries, I can understand why glasses or something similar would be very useful.