Take a couple of minutes to watch the above video... and listen carefully.
This latest development from Google Assistant is uncanny at best and unnerving at worst. This is especially true when you consider that the human on the other end of the line may never have realized that her caller was not even human.
Google Assistant has had its share of milestones as it grew more and more advanced over the first two years since its public debut. But with this particular announcement, the gates have sprung wide open to its possible future as a super advanced natural language AI.
But is such a future even conceivable? Let's start by taking a look at this AI assistant's origins.
Humble beginnings, great milestones
Google Assistant was the culmination of the voice recognition-based artificial intelligence technologies that Google had honed over the years through its predecessor, Google Now.
Unveiled in 2016, it wowed audiences with the wide range of applications it could handle. It also gave potential users a glimpse of the revolution in access it promised on mobile devices.
More than two years later, the push to make Google Assistant the ubiquitous AI-enhanced smart command entity has become more aggressive.
As a natural language UI, it can now understand - or at least accurately approximate - anything that any human would normally say in their native language. Google Home, its cylindrical home-optimized iteration, also expands the lineup of devices you can use it with.
Because it is powered by unimaginable amounts of data analyzed through deep learning neural networks, it can also perform tasks that would have been outright impossible for similarly designed AI just a few years ago.
You can, for example, ask Google Assistant to search for pictures containing a particular common element (e.g. pictures with dogs, pictures with specific objects), and it can find them accurately simply by understanding your words.
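To make the idea concrete, here is a toy sketch of how a spoken query might be reduced to matching against image labels. This is purely illustrative and not Google's actual implementation: in a real system, a deep learning model would generate the labels for each photo, and the query understanding would be far more sophisticated. The function name, photo names, and labels below are all hypothetical.

```python
def search_photos(query, photo_labels):
    """Return names of photos whose labels overlap with words in the query.

    Uses naive plural stripping (rstrip('s')) as a stand-in for real
    natural language understanding.
    """
    query_words = {word.rstrip("s") for word in query.lower().split()}
    return [name for name, labels in photo_labels.items()
            if query_words & labels]

# Hypothetical photo library with labels a vision model might have produced.
photos = {
    "IMG_001.jpg": {"dog", "park"},
    "IMG_002.jpg": {"cat", "sofa"},
    "IMG_003.jpg": {"dog", "beach"},
}

print(search_photos("show me pictures with dogs", photos))
# → ['IMG_001.jpg', 'IMG_003.jpg']
```

Even this crude keyword intersection hints at why natural language is such a compelling interface: the user describes what they want, and the system does the mapping from words to content.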
This is why it is quite easy, or at least quite convincing, to see why Google Assistant holds such potential for the future of AI assistant technologies. It taps into one of the most intuitive, easy-to-use interfaces available, and it evolves its own system to become increasingly human in its responses.
Simulated communication: the leap forward
Let’s assume that we can accurately extrapolate Google Assistant’s development roadmap from this point on. In an ideal setting, it could soon become a sort of AI assistant reminiscent of Iron Man’s 'J.A.R.V.I.S' access interface.
Google’s 2018 keynote presentation showed that this idea has already become partial reality. The flow, the semantics, and the pronunciation make it sound convincingly, and perhaps also scarily, human.
The demonstration of the new voices it can now produce adds even more to this already astonishing feat. If you ever grow tired of Google Assistant’s default female voice (which, by the way, could well be artificially generated itself, since its origin was never publicly disclosed), you now have six more distinct voices to choose from.
From this achievement alone, we can predict that over the next few years Google Assistant will become increasingly involved in casual human interaction. That is, people will start interfacing with computers more and more through natural language, in any spoken language widely available on such platforms.
Not as picture perfect, but very close
Despite this incredible progress, Google Assistant is still not as perfect as we would like it to be. It is obviously not yet capable of simulating human-level communication like Westworld’s colorful NPCs. But the problems are actually more fundamental: features that should have been available are strangely still missing.
For instance, consider Google Assistant’s feature of reinforcing commands by automatically suggesting related actions. However complex its follow-up suggestions for the requested information may be, the suggestions themselves are usually still very basic.
Two of the default suggestions when asking Google Assistant to read emails (e.g. “OK Google, read my mails”) are to reply to the email, dictating the text by voice, or to have it read aloud the text of a specified email.
Unfortunately, this does not include the fundamental action of attaching files.
This is understandable given the technology's current limitations. Still, you would think such a basic feature would have been available in such a powerful AI assistant right from its inception.
Then there is the infamous issue of third-party smart device incompatibilities. Alexa, the main AI assistant of Amazon’s Echo smart speaker, is supposedly much weaker in terms of data utilization. However, because it was designed from the start as a smart home hub, it is compatible with far more smart devices than the supposedly cutting-edge Google Assistant-powered Google Home smart speaker.
These problems, of course, can certainly be ironed out as development continues. But for now, they stand as obstacles that could slow Google Assistant’s progress toward becoming a super advanced AI assistant.
So long, Turing test
Ultimately, as Google Assistant becomes the J.A.R.V.I.S of our reality, it will pervade every Google device-equipped home and office. By that time, it will no longer be just a convenient gimmick but a necessity, a tool used by default on any occasion.
If it ever gets that far in the next few decades, we might reasonably predict that it will finally render the Turing test obsolete. Google Assistant could become an artificial entity so advanced that it talks to us like any human would, without us even realizing it is anything else. Perhaps it will never become the theoretical general AI of every futurist’s most ideal dream.
That evolution will most likely fall to whatever AI assistant is developed next.
Photo by Paweł Czerwiński on Unsplash.