3 catastrophic problems with Robotic AI, and 2 solutions

The threat of AI can be summarised in three points, and each of those points assumes something slightly different about what AI is: the threat is economic, political or physical. I have no intention of scaremongering in this post, but I do think these questions need to be raised and debated now, because there may come a point where AI has too much inertia to be profoundly changed without seriously diminishing human freedoms; there will be a point where it is too late. The three problems are essentially these:

  • Economic
    There is no job that I can think of (and I’ve tried) that can’t be automated with progress in existing technology. By this, I mean no profoundly new technology would be needed to automate any given industry and massively reduce employment within it. This will have serious economic implications.
  • Political
    Reasonably basic technology (by which I mean technology on a par with what is currently available) could be plugged into communication hubs to identify dissidents, stop demonstrations and intensify the political power of individuals. You may not worry about the USA or Europe having access to that technology, but imagine China or Russia with it. Or imagine Trump (or even Trump’s Trump, another equal swing to the right) having that technology.
    If you’re trying to picture what that technology is, imagine location-chipped people and an AI analysing their locations to detect protests (a minimal sketch of the idea follows this list). If that AI then transmits that data to an intelligent drone… Or imagine email servers bugged with AI that decrypts and reads content, looking for language patterns that suggest politically dissident feelings or plans.
  • Physical
    AI, coupled with hardware, will be smarter and stronger than us. If we can’t control its ethics, or at the least understand how it will develop its ethics, then we will have good cause for fear. Even if my suspicion that increasing rationality increases ethical behaviour is correct, what if AI advances to the point that it doesn’t respect our autonomy and no longer trusts the direction of human progress to human minds? Or what if it farms us for energy, as we do other ‘lesser’ species?
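
To make the protest-detection example concrete, here is a minimal, hypothetical sketch in Python: given a batch of location pings, it flags grid cells where an unusually large crowd has gathered. The cell size, threshold and names are my own illustrative assumptions, not any real surveillance system.

```python
# Hypothetical sketch only: flag map grid cells containing a crowd.
# CELL_SIZE and CROWD_THRESHOLD are invented values for illustration.
from collections import Counter

CELL_SIZE = 0.001        # degrees; roughly 100 m cells (assumed resolution)
CROWD_THRESHOLD = 500    # pings per cell treated as a 'gathering'

def detect_gatherings(pings):
    """Return the grid cells whose ping count exceeds the crowd threshold.

    pings: iterable of (latitude, longitude) pairs.
    """
    cells = Counter(
        (round(lat / CELL_SIZE), round(lon / CELL_SIZE))
        for lat, lon in pings
    )
    return [cell for cell, count in cells.items() if count >= CROWD_THRESHOLD]
```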

Before I expand on these (and only briefly), I want to talk about how they refer to different conceptions of AI. The economic threat, of being able to automate jobs (and thus starving the economy of taxpayers), basically treats AI as a sophisticated tool: complex algorithms that can complete set tasks with increasing accuracy and efficiency.

The political threat (that AI can survey, collate, organise and present data from all the communications running through it, at a rate of millions of emails, Tweets, status updates and Instagram photos per second) is merely progress in computing power and analytics. It could identify people likely to be dissidents, or in some way unpreferable to the entity that owns the AI. But it conceives of AI simply as more power behind technology that already exists (a crude sketch follows).
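
As a crude sketch of what such pattern recognition might look like at its simplest, the following Python flags users whose messages repeatedly match a watchword list. The patterns, threshold and names are invented for illustration; real systems would use learned language models rather than keyword lists.

```python
# Hypothetical sketch only: count watchword hits per user in a message stream.
import re
from collections import defaultdict

WATCHWORDS = re.compile(r"\b(protest|march|boycott)\b", re.IGNORECASE)
FLAG_THRESHOLD = 3   # invented cut-off for flagging a user

def flag_users(messages):
    """messages: iterable of (user, text) pairs. Returns the flagged users."""
    hits = defaultdict(int)
    for user, text in messages:
        if WATCHWORDS.search(text):
            hits[user] += 1
    return {user for user, count in hits.items() if count >= FLAG_THRESHOLD}
```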

The physical threat is a lot more complex. We can write ethical precepts into software just like any other rule or parameter. However, if we are to consider AI intelligent in a profound way, it will be able to critically evaluate its own coding and, if necessary, erase or replace the bits it doesn’t want; this is what it means to be a critical thinker. This sees AI in a fundamentally different way to the other two threats, in that the AI will become in some way autonomous.

The economic threat

Let’s talk briefly about AI automating jobs, taking as an example one of the most technically difficult jobs there is: doctor. If we reduce the job of being a doctor to its essential tasks, we don’t identify anything that can’t, in principle, be done by technology. Some progress still needs to be made in reliability, but the principles can already be demonstrated.

A great deal of doctoring and nursing is triage and diagnosis, done largely from medical history and a flow chart of questions. A flow chart of questions could be programmed by a 17-year-old studying a national curriculum in computing (a minimal sketch follows). Understanding the answers is a little more complex, but that is the concept being demonstrated by DeepMind and IBM’s Watson. Watson in particular excels at understanding natural language, which helps in qualitative triage and diagnosis. Most diagnostic tests are done by hardware anyway, designed to give a graphic or numeric readout, which DeepMind or Watson could also interpret (at least in principle).
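
To show how little is involved, here is a minimal sketch of the ‘flow chart of questions’ idea: a triage decision tree encoded as nested dictionaries. The questions and outcomes are invented for illustration; a real tree would be far larger and clinically validated.

```python
# Hypothetical sketch only: a triage flow chart as a nested decision tree.
TRIAGE_TREE = {
    "question": "Is the patient experiencing chest pain?",
    "yes": {"outcome": "Call emergency services"},
    "no": {
        "question": "Has the fever lasted more than three days?",
        "yes": {"outcome": "Book a GP appointment"},
        "no": {"outcome": "Rest, fluids, and monitor symptoms"},
    },
}

def run_triage(node, ask):
    """Walk the tree, asking questions until an outcome is reached."""
    while "outcome" not in node:
        answer = ask(node["question"])            # expects "yes" or "no"
        node = node["yes" if answer == "yes" else "no"]
    return node["outcome"]

# Example: a patient who answers "no" to every question.
print(run_triage(TRIAGE_TREE, lambda question: "no"))
```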

What about the more intricate job of surgery? Well, surgery can be reduced to two challenges: edge recognition (distinguishing between what you want to keep and what you want to remove or alter) and a steady hand. DeepMind can already learn complex objects by sight, and thus can develop a complex and nuanced idea of edge recognition (a toy demonstration follows), and machines have had a steadier ‘hand’ than humans for a long time.
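
For a toy demonstration of what ‘edge recognition’ means at its most basic, the following applies a Sobel filter to a grayscale image to highlight boundaries. Real surgical vision systems are learned models, not hand-coded filters; this is only the simplest illustration of the underlying idea.

```python
# Hypothetical sketch only: per-pixel edge strength via a Sobel filter.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def edge_magnitude(image):
    """Return edge strength for each interior pixel of a 2-D grayscale array."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = (patch * SOBEL_X).sum()   # horizontal intensity change
            gy = (patch * SOBEL_Y).sum()   # vertical intensity change
            out[i, j] = np.hypot(gx, gy)
    return out
```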

And DeepMind is an artist.

And Emily Howell is a music-composing bot.

The political threat

What about this political threat; how real is it? In recent times I have enjoyed using Donald Trump as my go-to example of the potential volatility of politics and the need for a robust ‘bottom line’ below which a country should not be allowed to fall, or below which its leaders can at least be held to account. America has this bottom line, in that it has a constitution. (The UK has an uncodified constitution, which is to say an ad hoc interpretation of political and legal history. Really, the EU is our ‘bottom line’.) In this respect, the Second Amendment is in serious need of an update: it shouldn’t entitle citizens to guns, but entitle communities to practise flying and disabling drones, because that is where the real threat of the US government suddenly turning tyrannical lies. It won’t be the army in the streets, but the drones in the sky. But I digress.

Take just the rhetoric of Donald Trump, and imagine how he could use such software to monitor communications and map out ‘Muslim’ areas, ‘crooked-Hillary supporters’ and any dissenting opinions. It’s only one more Trump-sized step to the right to lead to deportations or inland Guantanamos (prisons without trial). That may never happen in the US, but if I asked you to name five countries where it could happen, you wouldn’t struggle.

There is a concept called Geoslavery: the idea that location-service data could be used to control people. Imagine you were forced to share your location data with an abusive (and paranoid) spouse. Now imagine people traffickers, who could fit you with a watch or bracelet that will explode, burn you or electrocute you if you deviate from a set path; that technology could mean people are forced to traffic themselves (a sketch of the underlying geofence check follows). Now imagine more biometric data being delivered to a system that can respond to lesser transgressions; a government could use that to control diet and sex as well as imposing curfews. AI is needed for the latter, to identify the more complex patterns.
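
The ‘deviate from a path’ scenario rests on a simple geofence check, sketched below: measure how far the wearer has strayed from a set of permitted waypoints. The function names, coordinates and the 200 m tolerance are all invented for illustration.

```python
# Hypothetical sketch only: is the wearer too far from every permitted waypoint?
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def off_path(position, waypoints, tolerance_m=200):
    """True if the position is further than the tolerance from every waypoint."""
    lat, lon = position
    return all(haversine_m(lat, lon, wlat, wlon) > tolerance_m
               for wlat, wlon in waypoints)
```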

The physical threat

The physical threat is the most obvious and also the most terrifying, because there are more unknowns. In the other situations, you can imagine programming in basic moral laws so that the pattern recognition could identify whether it was being used for ‘evil’. But in this latter threat, we are talking about AI that is critical and can evaluate and choose its own moral laws. That doesn’t allow us to circumvent the problem with some variation of Asimov’s Three Laws. Even if AI is compassionate about humanity as a concept, it may not trust the progression of our societies to people (a philosophy I’m sure students of history are familiar with). But unless we understand something about ethics, we can’t even predict whether AI has any cause to be compassionate about humanity.

We would not belong to the same Game Theory or contractarian ideas of ethics. Such an idea works among humans because we are willing to concede, or at least able to notice, that all people are approximately equal; that all people have some level of moral worth and pose some level of threat to other individuals. But if AI were to implement Game Theory or contractarian ethics, we would be another species, and nothing more to it than chickens are to us.

However, we do argue for the rights of chickens. This is the moral arc, where our ideas are broader than contractarian or Game Theory concepts; there is some level of compassion at work in how we think about ethics. And this raises an interesting question about compassion and the other things we consider emotional and faulty: are they in some way intrinsic to intelligence? It seems doubtful to me, as we dismiss such thinking as ‘fallacious’, and yet people who identify as critical thinkers still place compassion as objectively valuable. We’ve never seen intelligence without these biases; even the religious assume their perfectly rational God has the trait of compassion. There’s a distinct (yet small) possibility that intelligence requires compassion to some degree.

Those who assume AI would fall in line because it knows that at any point we could develop a more powerful AI to stop its tyrannical reign, thus giving AI a ‘veil of ignorance’ as to its position in society, haven’t thought about the finer details of such a world: the advancement of AI would be out of our control. The AI would be designing AI and integrating it into itself. Advanced AI would be designing more advanced AI, confident in the knowledge that humans won’t ever catch up.

If we could understand how it would develop its ethics, we may be able to assuage some of these fears.

Managing the threats

The economic issue is solved by extreme liberal socialism. The economy simply wouldn’t function as it does now, and so the idea of earned wealth becomes absurd: such a small fraction of society would have the opportunity to be employed that we would need to find a way to support lifestyles without relying on self-funding. I won’t waste word count explaining my full thought process here, but the entire concept of money becomes meaningless. Healthcare becomes automated, as does developing drugs and medical devices, and thus becomes basically free. AI and hardware will need upkeep, but that can be done by other AI. Food production and distribution can be automated, and therefore free. There is nowhere in the process where money makes sense.

One of the possible solutions to the political and physical threats is some sort of oversight program (Etzioni & Etzioni, 2016). This is not just an extension of the jurisdiction of the UN to monitor governments using AI for tyranny; it is creating an AI society in which there is an AI policing AI programs. How this would function relies, in part, on human-like morality and ethics (Kuipers, 2016). This is based on social learning, relying on long-term lessons from reactions to moral decisions: short-term reflections on morality are reactionary, mid-term reflections tend to be post hoc justification, and long-term reflections form a social network of understanding of the implications of that initial reactionary decision (a speculative sketch follows). If this moral infrastructure works, and it wouldn’t be impossible to trial it empirically on less powerful AI, then we could have a great deal of confidence in the ability of AI to make compassionate and considered decisions.
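
As a highly speculative sketch of the social-learning idea attributed to Kuipers (2016), the following scores a decision by reactions gathered over time, trusting long-term reflection more than the immediate reaction. The weights and structure are my own assumptions, purely for illustration.

```python
# Hypothetical sketch only: weight moral feedback by reflection horizon.
HORIZON_WEIGHTS = {"short": 0.2, "mid": 0.3, "long": 0.5}   # assumed weights

def moral_score(reactions):
    """Weighted average of reaction scores, each in [-1, 1].

    reactions: maps 'short'/'mid'/'long' to lists of numeric judgements
    from the surrounding social network.
    """
    total, weight_sum = 0.0, 0.0
    for horizon, scores in reactions.items():
        if scores:
            total += HORIZON_WEIGHTS[horizon] * sum(scores) / len(scores)
            weight_sum += HORIZON_WEIGHTS[horizon]
    return total / weight_sum if weight_sum else 0.0
```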

You can see, then, why this conversation needs to start now. We need the UN to start drafting legislation before it’s too late. We need software companies and ethicists to work together now, to have an oversight infrastructure in place ahead of time.

Etzioni, A. & Etzioni, O. (2016) AI assisted ethics. Ethics and Information Technology, 18(2), pp. 149–156.

Kuipers, B. (2016) Human-like morality and ethics for robots. AAAI-16 Workshop on AI, Ethics, and Society.


21 thoughts on “3 catastrophic problems with Robotic AI, and 2 solutions”

    1. Indeed.
      It’s good to see some companies volunteering to get together and discuss these issues well ahead of time. Facebook and Google are on it.

  1. This article is stupendous in that it illustrates the luddite, static, unimaginative nature of atheist and leftist mind meandering (I shudder to use the word “thought” or “thinking” when referring to atheist and leftist mental gyrations).

    1. There is absolutely no economic threat from AI.
    For if AI automates all the jobs, it will be producing and creating all the wealth. That means every human being will be able to live a life of complete, luxurious leisure.

    2. There is no political threat from AI. That is because advancements in technology lend the greatest favor to the plebe. This has already been proven by history, especially European and American history. Tyrants who try to harness advanced technology for world domination inevitably destroy themselves and their own societies.

    3. There will be no physical threats posed by AI. However the threat caused by deranged, power hungry human beings wielding AI is the same threat that has always inflicted itself upon mankind.

    AI will have no free will.

    It will perform in those areas it has been designed and manufactured to perform in.

    The greatest problem with AI as with every technology will be misuse and malfunction.

    And it is the purpose of engineering and quality control to greatly reduce or eliminate malfunctions.

    With regard to misuse of AI, it’s deja vu all over again.

    For as with all technology, AI will amplify man’s own ability to create or destroy.

    Man is the problem, not technology.

      1. Who are your informed sources, by contrast?
        (I love how there’s always a conspiracy to explain the discrepancy between the ‘expert’ view and your own.)

      2. Allallt,

        That you have to change the subject to religious bigotry and label an obvious fact a conspiracy proves my point that atheists are incapable of rational thought.

        You just make up everything as you go and put negative labels on anything that does not suit your fancy.

        I have been studying this topic for decades and don’t need a bunch of money grubbing “experts” to tell me how and what to think.

      3. Ooh, I love this game. When you list derogatory terms, it’s not “bigotry”, but when I simply give their pair words back, it is.
        And apparently I put negative labels on everything.
        And apparently you’ve done your research without calling on experts and you don’t need to.

        Good game. Well played. Shame it’s so transparent.

      4. Allallt,

        What I expressed here is neither right wing nor religious in nature.

        However, the argument you make here is the same one atheists and leftists make for proven hoaxes like global warming (aka climate change which is a truly idiotic name since change is what climate does naturally), or God not existing.

        In fact, the next time you want to argue against THE Donald, for example, just copy and paste this entire post and then substitute “THE Donald” for “AI.”

        The fact is, that fear of some unproven future catastrophe is THE argument leftists and atheists make about everything.

        Grow up. Get a mind of your own. Get it educated instead of indoctrinated. Train your mind to think.

        Then instead of playing games with yourself, you might be able to gain some genuine insight into the world around you.

      5. Counterpoints:
        (1) I am not arguing against AI. I love the idea of AI. I’m arguing for making sure it’s carefully managed. This isn’t a fear argument at all. Even the title of the post suggests solutions, not the entire cessation of the concept.
        (2) You really ruin your credibility when you label the experts as ‘leftist’ or ‘atheist’ to demonise them or argue against them.
        (3) You really ruin your credibility when you don’t present a counter argument. Because what you do is label the facts you don’t like ‘leftist’ and continue believing exactly what you want to believe, immune from concepts like evidence.
        (4) I didn’t accuse you of making religious or conservative arguments. I simply pointed out that if you’re going to discard experts for being ‘liberal’ or ‘atheist’, but still have a strong view about it, there must be someone you’re taking your information from, and the antonyms for what you reject are, in fact, ‘rightist’ and ‘religious’. Or do you trust your own mind, singularly, to understand the politics and economics so completely soundly, that you’re happy being without the input of others… or facts?

        I’ve been pointing this out for months, and you’ve been doing it on my blog for years: simply labelling the facts you don’t like as ‘leftist’ is not a rational argument.

        And because your criticisms have no substance to them, I can’t even defend my position. You give me no content to actually discuss. It’s just a vague, amorphous label, followed by the unsubstantiated promise that you’re being completely logical and rational. And when pressed for an explanation, you simply repeat your previous insubstantial, unsubstantiated, unsourced, unreasoned assertions.

        For some reason, you’ve brought climate change and Donald Drumpf into it, as if what you desperately need in this conversation is a distraction; as if you know you’ll lose if anyone can ever actually pin you down to a specific point.

      6. Allallt,

        How can you argue anything about AI when you haven’t the slightest idea what it is?

        You’ve swallowed a bunch of excrement fed to you by the “experts” whose only interest is getting fools to part with their money.

        Progressivism, a political philosophy begun by Americans (Woodrow Wilson, Frank Goodnow), has as one of its fundamental precepts rule by experts who are supposedly politically disinterested.

        The result is tyranny by and for the stupid and uninformed.

        That’s because people outsource their God-given brains to the government which is supposedly run by “the experts.”

      7. Your post is my logical and rational defense of these assertions.

        If you don’t know anything about Progressivism (why would you?) go out and get yourself educated.

  2. A huge subject, and a great first step. I want to hear more about “extreme liberal socialism,” as I’m terrified of the economic impact of AI. Brazil, for example, doesn’t have a very well developed service economy like Australia, so the effects in a country like this will be immediate and massive. Delivery drones will put millions of low-educated workers out of work, and I just can’t see where these people could possibly go to find alternative employment.

    1. The end game, as I see it, is that we measure wealth in resources, not money. And then, we automate the distribution of those resources. There isn’t a wealthy person to tax, because wealth isn’t personally accrued. It is simply mined/processed/grown and then distributed by an automated system. If I were braver, I’d have called it communism.

      But it’s communism arrived at from the complete opposite economic structure.

      I worry about the intermediate steps, where people still want money and they want their efforts to be recognised and remunerated with money, as they manage the bots, all the while employment is plummeting.

  3. > There is a concept called Geoslavery, which is the idea that location-service data could be used to control people. Imagine you were forced to share your location data with an abusive (and paranoid) spouse.

    Instead, imagine you choose to share your location data with everyone (“public”) and an abusive (and paranoid) law-enforcement or security service. Ah, it’s already been happening.

    “US start-up Geofeedia ‘allowed police to track protesters'”:
    http://www.bbc.co.uk/news/world-us-canada-37627086

    And here’s a link to the ACLU original article:
    https://www.aclu.org/blog/free-future/facebook-instagram-and-twitter-provided-data-access-surveillance-product-marketed

    And a quote: “… we are concerned about a lack of robust or properly enforced anti-surveillance policies. Neither Facebook nor Instagram has a public policy specifically prohibiting developers from exploiting user data for surveillance purposes. Twitter does have a “longstanding rule” prohibiting the sale of user data for surveillance as well as a Developer Policy that bans the use of Twitter data “to investigate, track or surveil Twitter users.” Publicly available policies like these need to exist and be robustly enforced.”
