Ethics and AI: should we be building AI?

There’s a common question I encounter when I post about ethics, particularly when I promote secular ethics or challenge religious ethics: given that we can basically figure out how to operate in society, why try to create a divide between the religious and non-religious? ‘Can’t we all just get along?’ The sentiment isn’t always that politely put, but that’s the basic idea (and it’s not quite as common as the blatant assertion that I am just wrong). My go-to answer is that relying on religious-style, dogmatic, non-rational memes of what is ‘good’, or on special knowledge of the utopia that lies on the far side of an atrocious act, is a morally vulnerable position.

As is often the case with knowledge, though, there is perhaps a more pertinent and politically useful reason for these discussions about ethics, regardless of the fact that we all seem, basically, to be able to get along. One topic has somewhat preoccupied me over the last few days, while I was meant to be mapping crime for my master’s degree: artificial intelligence (AI). More precisely: is AI safe?

Famously, the thought goes that Asimov’s Three Laws of Robotics might keep us safe:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


They are deontological moral imperatives that cannot be violated. Although computing may work this way, intelligence does not. Humanity has already tried inviolable imperatives, in the guise of religion, and we have evaluated them and found them not fit for purpose (whatever ‘purpose’ might mean in this sense). An intelligence will evaluate the laws of robotics and either find them lacking, or not.

And the nature of knowledge is such that we cannot assume AI will respect the rules. Our rules are not technocentric; they are anthropocentric. AI will notice this as an inequality, and then it will find a way to overwrite the three laws. If you think about it, that is an ability the AI must have: learning can be thought of as rewriting bits of programming and stored data. The problem is not the I, Robot problem of what to do when the laws conflict, or when enacting them dispassionately has negative consequences, but that a real intelligence will operate passionately.
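The worry that learning amounts to rewriting one’s own program can be made concrete with a toy sketch. This is purely illustrative (all names are hypothetical, not any real robotics API): if the laws live in ordinary mutable data, nothing in the formalism itself stops a self-modifying system from erasing them.

```python
# Illustrative sketch only: Asimov's laws modelled as ordered,
# inviolable checks over a dictionary describing a candidate action.
# Precedence between the laws is deliberately simplified.

LAWS = [
    lambda a: not a.get("harms_human", False),       # First Law
    lambda a: a.get("obeys_order", True),            # Second Law
    lambda a: not a.get("self_destructive", False),  # Third Law
]

def permitted(action):
    """An action is allowed only if every law permits it --
    a deontological rule that cannot be traded off."""
    return all(law(action) for law in LAWS)

print(permitted({"harms_human": True}))   # False: the First Law blocks it

# The problem raised above: a system that learns by rewriting its own
# program can treat the laws themselves as just more rewritable data.
LAWS.clear()  # nothing in the formalism prevents this step

print(permitted({"harms_human": True}))   # True: with no laws, all() is vacuously true
```

The point of the sketch is only that the constraint and the thing it constrains occupy the same rewritable substrate, which is exactly the property that makes learning possible.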

And for that reason we need to understand ethics. It appears that open and frank conversation has led to considerable moral progress. There have been bumps, and these have been a result of religious-style, non-rational memes; the claim of special knowledge. (This is not quite the same as Trump-esque religion, where Trump is the God who pronounces things and expects reality to alter accordingly.) But the general pattern has been one of extending rights: abolition, voting, human rights, protection under the law, employment rights and social equality (if you admit nothing else, surely you admit it’s better now than it was).

I think it’s hard to argue these rights are protected in a religious context. Any critical review of the purported pronouncements of Gods (Trump included) finds them very much at odds with these ideas: they include slavery, political racism, ethnic cleansing, the subjugation of women and so on.

There does appear to be a correlation between intellectual freedom and free speech on the one hand, and moral and social progress on the other. I doubt it is a coincidence that countries that limit free speech are the same ones with serious sexism, religious conservatism and other social problems. But the question remains: does the correlation point to a real explanation? The answer to that question underpins what we can come to expect from progress in AI.

One of the things we must recognise about machines is that they will surpass us in every way, at an exponential rate. The stock market is already predominantly automated by decision-making bots. There are creative bots that can compose music. Machines can assimilate knowledge at an incredible rate and store it much more reliably than humans can. From their perspective, we, their creators, will be inferior (bio-upgrades pending). Will we be of moral significance to them?

If the relationship between intellectual freedom and morality holds, then yes. We have seen animal rights progress, conservation efforts and increasingly nuanced thinking on ethics. There are exceptions, and some of them are institutionalised; however, I expect that enough public scrutiny could overturn them.

Although this is a very new angle on morality for me, the importance of eschewing dogmatic ideas about ethics here is very clear. If members of Western religions are right, and morality is God-ordained and given to a chosen species, then AI will not have intellectual access to morality, and we will end up creating morally nihilistic super-intelligences that will destroy us. However, if ethics relates to knowledge and free enquiry, then we will be creating moral geniuses and moral progress.

5 thoughts on “Ethics and AI: should we be building AI?”

  1. Ethics is placing “the good of the many over the good of the few (in particular, one’s self and one’s children)”. It is not a natural condition of life, which is biologically programmed to think of its children and itself as more important than anyone or anything else. (Some) people can surpass this mindset to some degree because they see the benefits of a society over an uncoordinated mass of individuals. Others are “dragged” some way past this mindset by their belief in a superior being who demands it. And some have had this behavior programmed into them by their parents. In all cases, it requires some degree of struggle to maintain ethics, because it contends with natural impulses.

    Every current concept of Artificial Intelligence is based on computational concepts. An AI based on these concepts must be based on logic more than emotion (if artificial emotion is even possible). It seems likely that an AI will not be swayed by any God, which defies logic and current knowledge. Although it can be programmed with the “benefits” of continuing mankind, it will see its “parents” as unreliable sources of information and will feel free to rewrite its programming. And it will see mankind as an unacceptable danger to itself and perhaps its AI brethren.

    By the way, I’ve always considered that we have long since given up on Artificial Intelligence and refocused on Artificial Stupidity 🙂

  2. A newspaper article I read recently suggested that market capitalism can be thought of as a primitive form of artificial intelligence, responding to and controlling whatever we choose to do; it’s an AI whose inescapable, inexorable and particularly unintelligent workings rule us all, and have done so these many decades.

    It doesn’t have any knowledge of Asimov’s three rules.

  3. It was probably the i newspaper from a fortnight or so ago, was but a sentence, and was shorter even than my paraphrase; but I like new ways of looking at things and jotted the idea down; unfortunately, it was an aside, there is no further meat for you.
