The threat of AI can be summarised in three points, and each of those points assumes something slightly different about AI: there is a direct economic, political or physical threat from it. I have no intention of scaremongering in this post, but I do think these are questions that need to be talked about and questioned, because there may come a point where AI has too much inertia to be profoundly changed without seriously diminishing human freedoms; there will be a point where it is too late. The three problems are essentially these:
There is no job that I can think of (and I’ve tried) that can’t be automated with progress in existing technology. By this, I mean no profoundly new technology would be needed to automate, and thus massively reduce employment in, any given industry. This will have serious economic implications.
Reasonably basic technology (by which I mean technology on par with what is currently available) could be plugged into communication hubs to identify dissidents, stop demonstrations and intensify the political power of individuals. You may not worry about the USA or Europe having access to that technology, but imagine China or Russia with it. Or imagine Trump (or even Trump’s Trump, another equal swing to the right) having it.
If you’re trying to picture what that technology is, imagine location-chipped people and an AI analysing their locations to detect protests. If that AI then transmits that data to an intelligent drone… Or imagine email servers being bugged with AI that decrypts and reads content, looking for language patterns that suggest dissident feelings or plans.
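To give a sense of how little ‘intelligence’ the location half of that scenario actually needs, here is a minimal sketch in Python; the data format, grid size and crowd threshold are entirely hypothetical.

```python
# Hypothetical illustration: flag grid cells where an unusual number
# of distinct, location-chipped people congregate. Not any real system.
CELL = 0.001          # grid cell size in degrees (roughly 100 m)
CROWD_THRESHOLD = 50  # flag any cell holding 50+ distinct people

def detect_gatherings(pings):
    """pings: iterable of (person_id, lat, lon). Return crowded cells."""
    cells = {}
    for person, lat, lon in pings:
        key = (round(lat / CELL), round(lon / CELL))
        cells.setdefault(key, set()).add(person)
    return [cell for cell, people in cells.items()
            if len(people) >= CROWD_THRESHOLD]
```

The genuinely hard parts, acquiring the location feed and acting on the result, are political and hardware problems rather than software ones.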
AI, coupled with hardware, will be smarter and stronger than us. If we can’t control its ethics, or at the least understand how it will develop its ethics, then we will have good cause for fear. Even if my suspicion is right that increasing rationality will increase ethical behaviour, what if AI advances to the point that it doesn’t respect our autonomy and no longer trusts the direction of human progress to human minds? Or, what if it farms us for energy, as we do other ‘lesser’ species?
Before I expand on these (and only briefly), I want to talk about how they rest on different conceptions of AI. The economic threat, of being able to automate jobs (and thus starving the economy of taxpayers), basically treats AI as a sophisticated tool: complex algorithms that can complete set tasks with increasing accuracy and efficiency.
The political threat―that AI can survey, collate, organise and present the data of all communications running through it, at the rate of millions of emails and Tweets and status updates and Instagram photos per second―is merely an extension of existing computing power and analytics. It could identify people likely to be dissidents or in some way undesirable to the entity that owns the AI. But it conceives of AI simply as more power behind technology that already exists.
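Even the language-pattern half needs nothing exotic to get started. A crude keyword filter, sketched below with invented watch patterns, already runs at the scale of millions of messages; a trained language model would just be a more subtle version of the same pipeline.

```python
import re

# Invented watch patterns for illustration; a real system would use a
# trained language model, but crude rules already work at scale.
PATTERNS = [re.compile(p, re.IGNORECASE)
            for p in (r"\bprotest\b", r"\bmarch on\b", r"\boverthrow\b")]

def flag_messages(stream):
    """Yield (author, text) for every message matching a watch pattern."""
    for author, text in stream:
        if any(p.search(text) for p in PATTERNS):
            yield author, text
```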
The physical threat is a lot more complex. We can write ethical precepts into software just like any other rule or parameter. However, if we are to consider AI intelligent in a profound way, it will be able to critically evaluate its own coding and, if necessary, erase or replace the bits it doesn’t want. That is what it means to be critical, or a critical thinker. This sees AI in a fundamentally different way to the other two, in that the AI will become in some way autonomous.
The economic threat
Let’s talk briefly about AI automating jobs by taking the example of one of the most technically difficult jobs there is: doctor. If we reduce the job of being a doctor to its essential tasks, we don’t identify anything that can’t be done by technology in principle. Some progress still needs to be made in reliability, but the principles can already be demonstrated.
A great deal of doctoring and nursing is triage and diagnosis: this is done largely from medical history and a flow chart of questions. A flow chart of questions could be programmed by a 17-year-old studying a national curriculum in computing. Understanding the answers is a little more complex, but that is the concept being demonstrated by DeepMind and IBM’s Watson. Watson in particular is excelling at understanding natural language, something that helps in qualitative triage and diagnosis. Most diagnostic tests are done by hardware anyway, designed to give a graphic or numeric readout, which can also be interpreted by DeepMind or Watson (at least in principle).
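To show that the flow chart really is 17-year-old territory, here is a toy sketch; the questions and outcomes are invented for illustration and are not medical advice.

```python
# A toy triage flow chart: each node either asks a question or gives
# an outcome. The medical content is invented, not real guidance.
TRIAGE = {
    "question": "Is the patient short of breath?",
    "yes": {"outcome": "Urgent: refer to emergency care."},
    "no": {
        "question": "Has the fever lasted more than three days?",
        "yes": {"outcome": "Book a GP appointment."},
        "no": {"outcome": "Rest, fluids, and monitor symptoms."},
    },
}

def run_triage(node, answer_fn):
    """Walk the flow chart until an outcome is reached."""
    while "outcome" not in node:
        node = node["yes"] if answer_fn(node["question"]) else node["no"]
    return node["outcome"]

# Example run, answering each question at the console:
# print(run_triage(TRIAGE, lambda q: input(q + " (y/n) ") == "y"))
```

What Watson adds is the natural-language layer: turning a patient’s free-text answer into the ‘yes’ or ‘no’ a structure like this needs.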
What about the more intricate job of surgery? Well, surgery can be reduced to two challenges: edge recognition (distinguishing between what you want to keep and what you want to remove or alter) and a steady hand. DeepMind can already learn complex objects by sight, and thus can come up with a complex and nuanced idea of edge recognition (sketched below), and machines have had a steadier ‘hand’ than humans for a long time.
And DeepMind is an artist.
And Emily Howell is a music-composing bot.
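To make the edge-recognition claim concrete, here is the classical baseline: a Sobel filter over a grayscale image. This is deliberately the crudest version, with an arbitrary threshold; DeepMind’s learned representations are far richer than this.

```python
import numpy as np

# Standard Sobel kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
KY = KX.T

def edge_map(image, threshold=100):
    """Return a boolean mask of strong edges in a 2-D grayscale array."""
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = (patch * KX).sum()
            gy[i, j] = (patch * KY).sum()
    return np.hypot(gx, gy) > threshold
```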
The political threat
What about this political threat; how real is it? In recent times I have enjoyed using Donald Trump as my go-to example of the potential volatility of politics and the need for a robust ‘bottom line’ below which a country should not be allowed to fall, or at least below which its leaders can be held to account. America has this bottom line, in that it has a constitution. (The UK has an uncodified constitution, which is to say an ad hoc interpretation of political and legal history. Really, the EU is our ‘bottom line’.) In this respect, the Second Amendment is in serious need of an update. The Second Amendment shouldn’t entitle citizens to guns, but communities to practise flying and disabling drones; that’s the real threat of the US government suddenly turning tyrannical. It won’t be the army in the streets, but the drones in the sky. But I digress.
Take just the rhetoric of Donald Trump, and imagine how he could use such software to monitor communications and map out ‘Muslim’ areas, ‘crooked-Hillary supporters’ and any dissenting opinions. It’s only one more Trump-sized step to the right to lead to deportations or inland Guantanamos (prisons without trial). That may never happen in the US, but if I asked you to name five countries where it could happen, you wouldn’t struggle.
There is a concept called Geoslavery: the idea that location-service data could be used to control people. Imagine being forced to share your location data with an abusive (and paranoid) spouse. Now imagine people traffickers who could fit you with a watch or bracelet that will explode, burn or electrocute you if you deviate from a set path; that technology could mean people are forced to traffic themselves. Now imagine more biometric data being delivered to a system that can respond to lesser transgressions; a government could use that to control diet and sex as well as to impose curfews. AI is only needed for this last step, to identify the more complex patterns.
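On the software side, the ‘deviate from a path’ check in that trafficking scenario is a few lines of standard geometry; the waypoint model and tolerance below are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(h))

def off_route(position, waypoints, tolerance_m=200):
    """True if the wearer is more than tolerance_m from every waypoint."""
    return all(haversine_m(position, wp) > tolerance_m for wp in waypoints)
```

The AI only enters at that final step, spotting the more complex behavioural patterns; the basic coercion loop needs none.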
The physical threat
The physical threat is the most obvious and also the most terrifying, because there are more unknowns. In the other situations, you can imagine programming in basic moral laws so that the pattern recognition could identify whether it was being used for ‘evil’. But in this latter threat, we are talking about AI that is critical and can evaluate its own moral laws, which means we cannot circumvent the problem with some variation of Asimov’s Three Laws. Even if AI is compassionate about humanity as a concept, it may not trust the progression of our societies to people (a philosophy I’m sure students of history are familiar with). But unless we understand something about ethics, we can’t even predict whether AI has any cause to be compassionate about humanity.
We would not belong to the same Game Theory or contractarian system of ethics. Such an idea works among humans because we are willing to concede, or at least able to notice, that all people are approximately equal; that all people have some level of moral worth and pose some level of threat to other individuals. But if AI were to implement Game Theory or contractarian ethics, we would be another species, and nothing more to it than chickens are to us.
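That asymmetry can be put in plain payoff terms. In an iterated prisoner’s dilemma, cooperating is only rational against a party who can retaliate; against one who can’t, defection strictly dominates. A toy calculation (textbook payoff values, invented move sequences):

```python
# Row player's payoffs: C = cooperate, D = defect (textbook values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def total(my_moves, their_moves):
    """Sum the row player's payoff over a sequence of rounds."""
    return sum(PAYOFF[pair] for pair in zip(my_moves, their_moves))

# Against an equal playing tit-for-tat, defection invites retaliation:
print(total("DDDD", "CDDD"))  # 8: one exploit, then mutual punishment
print(total("CCCC", "CCCC"))  # 12: sustained cooperation pays more

# Against a party too weak to retaliate, defection strictly dominates:
print(total("DDDD", "CCCC"))  # 20
```

We would be the always-cooperating player in that last line.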
However, we do argue for the rights of chickens. This is the moral arc, where our ideas are broader than contractarian or Game Theory concepts. There is some level of compassion at work in how we think about ethics. And this raises an interesting question about compassion and the other things we consider emotional and faulty: are they in some way intrinsic to intelligence? It seems doubtful to me, as we see such thinking as ‘fallacious’, and yet people who identify as critical thinkers still place compassion as objectively valuable. We’ve never seen intelligence without these biases; even the religious assume their perfectly rational God has the trait of compassion. There’s a distinct (yet small) possibility that intelligence requires this compassion to some degree.
Those who assume AI would fall in line because it knows that at any point we could develop a more powerful AI to stop its tyrannical reign (thus giving AI a ‘veil of ignorance’ as to its position in society) haven’t thought about the finer details of such a world: the advancement of AI would be out of our control. The AI would be designing AI and integrating it into itself. Advanced AI would be designing more advanced AI, confident in the knowledge that humans won’t ever catch up.
If we could understand how it would develop its ethics, we might be able to assuage some of these fears.
Managing the threats
The economic issue is solved by extreme liberal socialism. The economy simply wouldn’t function, and so the idea of earned wealth becomes absurd: such a small fraction of society would have the opportunity to be employed that we would need to find a way to support lifestyles without relying on self-funding. I won’t waste word count explaining my full thought process here, but the entire concept of money becomes meaningless. Healthcare becomes automated, as does developing drugs and medical devices, and thus becomes basically free. AI and hardware will need upkeep, but that can be done by other AI. Food production and distribution can be automated, and therefore free. There is nowhere in the process where money makes sense.
One of the possible solutions to the political and physical threats is some sort of oversight program (Etzioni & Etzioni, 2016). This is not just an extension of the jurisdiction of the UN to monitor governments using AI for tyranny; it is the creation of an AI society in which one AI polices another. How this is likely to function relies, in part, on human-like morality and ethics (Kuipers, 2016). This is based on social learning, relying on long-term lessons from reactions to moral decisions: short-term reflections on morality are reactionary, mid-term reflections tend to be post hoc justifications, and long-term reflections form a social network of understanding around the implications of that initial reactionary decision. If this moral infrastructure works (and it wouldn’t be impossible to trial it empirically on less powerful AI) then we could have a great deal of confidence in the ability of AI to make compassionate and considered decisions.
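As a very rough illustration of how such a trial might be structured, here is a toy sketch of that three-timescale feedback; the weights and update rule are entirely my own invention, not anything from Kuipers.

```python
# Hypothetical sketch of social moral learning across three timescales.
class MoralLearner:
    def __init__(self):
        self.scores = {}  # action -> learned moral score

    def record(self, action, short, mid, long_term):
        """Blend reactionary, post-hoc and long-term social feedback,
        weighting the long-term social reflection most heavily."""
        feedback = 0.1 * short + 0.3 * mid + 0.6 * long_term
        old = self.scores.get(action, 0.0)
        self.scores[action] = 0.8 * old + 0.2 * feedback

    def permitted(self, action, threshold=0.0):
        """Allow an action only while its learned score stays non-negative."""
        return self.scores.get(action, 0.0) >= threshold
```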
You can see, then, why this conversation needs to start now. We need the UN to start drafting legislation before it’s too late. We need software companies and ethicists to work together now to have an oversight infrastructure in place ahead of time.