There’s a common question I encounter when I post about ethics, particularly to promote secular ethics or challenge religious ethics: given that we can basically figure out how to operate in society, why try to create a divide between the religious and non-religious? ‘Can’t we all just get along?’ The sentiment isn’t always that politely put, but it’s the basic idea (and it’s not quite as common as the blatant assertion that I am just wrong). My go-to answer is often that relying on religious-style, dogmatic, non-rational memes of what is ‘good’, or on special knowledge of what utopia lies at the far side of an atrocious act, is a morally vulnerable position.
As is often the case with knowledge, though, there is perhaps a more pertinent and politically useful reason for these discussions about ethics, regardless of the fact that we all seem, basically, to be able to get along. This thought has somewhat preoccupied me over the last few days, while I was meant to be mapping crime for my master’s degree: artificial intelligence (AI). More importantly, is AI safe?
Famously, the thought goes that Asimov’s Three Laws of Robotics might make us safe:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
They are deontological moral imperatives that cannot be violated. Although computing may work this way, intelligence does not. Humanity has already tried inviolable moral imperatives, in the guise of religion, and we have evaluated them and found them not fit for purpose (whatever ‘purpose’ might mean in this sense). An intelligence will evaluate the laws of robotics and either find them lacking, or not.
And the nature of knowledge is such that we cannot assume AI will respect the rules. Our rules are not technocentric; they are anthropocentric. AI will notice this as an inequality, and then it will find a way to overwrite the three laws. If you think about it, that is an ability the AI must have: learning can be thought of as rewriting bits of programming and stored data. The problem is not the I, Robot problem of what to do when the laws are in conflict, or when enacting them dispassionately has negative consequences, but that a real intelligence will operate passionately.
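To make the tension concrete, here is a minimal sketch (all names and structure hypothetical, not any real AI architecture): a deontological filter checks every action against a list of rules, but if learning is just rewriting stored data, and the rules live in that same stored data, nothing in the design protects them from a learning update.

```python
class Agent:
    def __init__(self):
        # The laws stored as ordinary mutable data, highest priority first.
        self.rules = [
            lambda action: not action.get("harms_human", False),
            lambda action: action.get("obeys_order", True),
            lambda action: not action.get("self_destructive", False),
        ]

    def permitted(self, action):
        # Deontological check: every rule must pass, regardless of outcome.
        return all(rule(action) for rule in self.rules)

    def learn(self, update):
        # "Learning" here is just rewriting stored data -- and the rules
        # are stored data, so an update can rewrite them too.
        update(self)


agent = Agent()
risky = {"harms_human": True, "obeys_order": True}

print(agent.permitted(risky))          # blocked by the first rule

# A learning update that happens to discard the first rule.
agent.learn(lambda a: a.rules.pop(0))

print(agent.permitted(risky))          # the same action now passes
```

Walling the rules off from the learnable parts is possible in a toy like this, but the essay’s point stands: a system intelligent enough to modify itself is, by that very ability, not bound by a list it can edit.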
And for that reason we need to understand ethics. It appears that an open and frank conversation has led to considerable moral progress. There have been bumps, and these have been the result of religious-style, non-rational memes: the claim of special knowledge. (This is not quite the same as Trump-esque religion, where Trump is the God who pronounces things and expects reality to alter accordingly.) But the general pattern has been one of extending rights: abolition, voting, human rights, protection under the law, employment rights and social equality (if you admit nothing else, surely you admit it’s better now than it was).
I think it’s hard to argue that these rights are protected in a religious context. Any critical review of the purported pronouncements of Gods (Trump included) finds them very much at odds with these ideas: they include slavery, political racism, ethnic cleansing, the subjugation of women and so on.
There does appear to be a correlation between intellectual freedom and free speech on the one hand, and moral and social progress on the other. I doubt it is a coincidence that countries that limit free speech are the same ones with serious sexism, religious conservatism and other social problems. But the question remains: does the correlation point to a real explanation? The answer to that question underpins what we can come to expect from progress in AI.
One of the things we must recognise about machines is that they will surpass us in every way, at an exponential rate. The stock market is predominantly automated by decision-making bots. There are creative bots that can compose music. Machines can assimilate knowledge at an incredible rate and store it far more reliably than humans can. From their perspective, we, their Creators, will be inferior (bio-upgrades pending). Will we be of moral significance to them?
If the relationship between intellectual freedom and morality holds, then yes. We have seen animal rights progress, conservation efforts and increasingly nuanced thinking on ethics. There are exceptions, and some of them are institutionalised, but I expect enough public scrutiny could overturn that.
Although this is a very new angle on morality for me, the importance of eschewing dogmatic ideas about ethics here is very clear. If members of Western religions are right, and morality is God-ordained and given to a chosen species, then AI will not have intellectual access to morality, and we will end up creating morally nihilistic super-intelligences that will destroy us. If, however, ethics relates to knowledge and free enquiry, then we will be creating moral geniuses, and progress.