The sci-fi movie Upgrade centres on the relationship between a quadriplegic man and the artificial intelligence (AI) that physically controls his body. It explores the distrust between the protagonist and AI as a whole, warning of what happens when humanity trusts and relies on AI completely.
Therein lies the problem with today’s AI efforts. No matter how much time and money is invested in AI, if the public do not trust it, it’s not going to be used.
As Microsoft Senior Communications Director Andrew Pickup explains, “The adoption of technology is all about trust. Building that up and communicating it is absolutely crucial.”
The perception of evil AI
Organisations face an uphill battle. Consumers have been fed storylines about evil AI since the 1927 movie Metropolis, where a robot causes havoc and murder across the city. This hasn’t been helped with modern-day AI fails like Microsoft’s Tay becoming a Nazi-loving incestuous sex promoter within 24 hours of joining Twitter, or Google’s algorithm mislabelling black people as gorillas.
More seriously, there have been high-profile cases where AI’s decision-making has been questioned or has even led to death. In March 2018 the tech industry was rocked by a fatal collision between an autonomous Uber and a female pedestrian. Such cases stoke the existing fears of many people, and the media are quick to report on any AI failure.
AI is currently limited - and glitchy
But AI technology is still in its relative infancy. Artificial general intelligence (the kind of AI that can perform any human task) is a long way off.
AI is currently limited to a few specific tasks. Cleaning a database, for example, or recognising voice and images. Because it is still under development there’s a lot of potential for glitches, and this is something that the public need to be educated about.
Education needs to improve
Indeed, the Holmes Report found that 53% of consumers believe that education about AI in society needs to improve, with 61% suggesting that responsibility for this should be shared across business, academia and government.
An increased awareness of how AI works and of its current limitations could go some way towards reducing public concerns around its use.
AI could explain its decisions
Having an AI explain some of its reasoning and decision-making could also help. However, this is much easier said than done: some AI systems are now so complex that even their creators cannot fully understand why they reach certain decisions.
This is an issue that researchers are trying to tackle. Jason Yosinski, an AI researcher at Uber, recently told Quartz: “In order for machine learning models to be accepted by society, we’re going to need to know why they’re making the decisions they’re making.”
Yet even if an AI could explain why it reached a decision, its reasoning might be beyond the grasp of most people.
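For simple, additive models, that kind of explanation is at least tractable. The sketch below (the feature names and weights are hypothetical, purely for illustration) shows how a linear model’s prediction can be broken down into per-feature contributions, which is roughly what linear attribution methods report:

```python
# Minimal sketch of additive feature attribution for a linear model.
# Feature names and weights are hypothetical, for illustration only.

weights = {"age": 0.4, "income": 1.2, "tenure": -0.3}
bias = 0.5

def predict_and_explain(features):
    """Return the prediction and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

pred, why = predict_and_explain({"age": 2.0, "income": 1.0, "tenure": 3.0})
print(pred)  # 0.5 + 0.8 + 1.2 - 0.9 = 1.6
for name, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contrib:+.2f}")
```

For a deep neural network there is no such clean decomposition, which is exactly why the explanations researchers can extract are often approximations that ordinary users may still struggle to follow.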
IBM’s Watson for Oncology was designed to recommend treatments for 12 different types of cancer. However, when it suggested a course of treatment that differed from a human doctor’s opinion, it was unable to fully explain its reasoning. The algorithms were simply too complex for doctors to understand, so they distrusted the AI and went with their gut instinct instead.
Then there’s the much-publicised AlphaGo match between DeepMind’s AI and world-class player Lee Sedol. Move 37 in the second game of the five-game series shook Go experts across the globe: it was an unexpected move that, by the AI’s own estimate, a human had only a one-in-ten-thousand chance of playing. The AI’s creators suggested that the move came about because AlphaGo wasn’t just trained by playing against humans, but also against itself. Thousands of times.
The issue with algorithms creating algorithms
This raises another potential trust issue. The idea of AI writing its own code has some experts worried. When algorithms evolve and generate new code, the result can be a tangle of algorithms all jostling for supremacy. That breeds unpredictability, with no human fully able to unpick what happened if something goes wrong.
A need for more regulation
Regulation could play a part in building trust. Many consumers already feel that AI requires stricter regulation and restrictions. This could, in part, be fuelled by calls from tech leaders for greater regulation, among them Tesla CEO Elon Musk and the 116 experts who called for the UN to ban autonomous weapons.
Of course, there’s the thorny issue of accountability for an AI’s actions. Who should be held responsible if an AI causes a death, as in the case of Uber’s self-driving car? Plus there’s the added difficulty of deciding what ethics (if any) should be coded into an AI.
Imagine an autonomous vehicle coded with its own version of the trolley problem: it has to swerve to avoid an oncoming car, but if it swerves right it will hit a schoolchild, and if it goes left it will collide with an old lady. What should the AI do?
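Whatever answer an organisation settles on, it ultimately has to be written down as code before the vehicle ever leaves the factory. A deliberately crude, hypothetical sketch of how any such rule amounts to a choice made in advance:

```python
# Deliberately crude sketch: any coded "ethics module" reduces to an
# explicit ranking someone wrote in advance. Labels are hypothetical.

def choose_swerve(left_obstacle, right_obstacle, priority):
    """Pick a direction by comparing the two obstacles against a coded
    priority list. A lower index in `priority` means more strongly
    protected; the car swerves towards the less-protected obstacle."""
    left_rank = priority.index(left_obstacle)
    right_rank = priority.index(right_obstacle)
    return "left" if left_rank > right_rank else "right"

# Whoever writes this list has answered the trolley problem in advance.
priority = ["schoolchild", "elderly pedestrian", "property"]
print(choose_swerve("elderly pedestrian", "schoolchild", priority))
```

The point isn’t that any real vehicle works this way; it’s that a decision like this cannot be left implicit, and someone has to take responsibility for the ranking.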
Concerns around automation and jobs
Closer to home are the concerns (and ethics) surrounding potential job losses caused by AI. This makes introducing AI into the workplace tricky, as employees’ imaginations may immediately jump to mass redundancy.
Recent estimates put potential job losses at around 66 million across OECD countries, fewer than originally envisioned. Plus, AI is likely to create several new roles as it develops. The onus is on organisations to communicate this to their employees and address these fears.
Address concerns now to prevent future issues
There’s clearly a great deal to address when it comes to public concerns surrounding AI. But it’s a job that has to be done as AI cannot move forward without the wider buy-in of society. AI needs to interact with humans and their data in order to learn. If the public don’t trust AI, they won’t use it and that will stymie its growth.
Just as organisations are investing millions in the development of AI, they must put some resources aside for its education. The public perception of AI is crucial for its future. It cannot be left to Hollywood to create the narrative.
Photo by Andy Kelly on Unsplash.