Sam Harris’ The Moral Landscape (2011) argues the thesis that morality is objective and knowable by scientific means. The argument is predicated on the initial premise that wellbeing is worthy of safeguarding, and thus that intended actions can be objectively evaluated against that value. I have long agreed with this claim, with one subtle change: I don’t argue that wellbeing is worthy of safeguarding, but that behaviour with the intent of safeguarding wellbeing is what we mean by ‘morality’. The difference is interesting: Harris’ claim is that wellbeing’s value is part of the fabric of the universe, whereas mine is that wellbeing is the basis for objectively describing human behaviour as right or wrong with regard to a particular goal.
I have recently been thinking that I should make my deviation from Harris’ thesis more apparent, especially as I use The Moral Landscape as the basic description of my thoughts on how morality operates. The difference turns on a distinction David Deutsch (2011) makes between claims that are ‘universally true’ and those that are ‘parochially true’. Harris argues that the value of wellbeing, and thus descriptions of morality, are universally true. I, on the other hand, believe that the value of wellbeing, and thus morality itself, is parochially true: it is only true within a set context. That context, in this case, is human interaction with conscious creatures, both direct and indirect.
The reason for this distinction is as follows: I believe that anything which is universally true will be a claim that becomes accessible to sufficiently advanced technology. This is a very abstract claim, and relies far more on rationalism than on empiricism, but hear me out. Harris often uses the example of ‘what was John F. Kennedy thinking in the moments before his assassination’ as a claim about reality that is true but completely lost. I disagree that it is completely lost. Harris talks of advanced fMRI-like technology that could read a person’s mind at a distance. Couple that technology with time travel, and we could know what was going on in Kennedy’s head. It is a truth that would yield itself to technology, and is therefore ‘universally true’, which is to say it is not bound by context.
Imagine, now, the technology that might uncover the ‘truth’ (or not) of the value of wellbeing. I suspect the technology needed is sufficiently advanced AI. We have good defences of the claim that the value of wellbeing is parochially true, in that it is defensibly true within the context of the concerns of conscious creatures. But if it is universally true, then AI should also discover this truth, and the truth would modulate its behaviour. (This, in turn, would give us good reason to be optimistic about AI in the future.)
However, I do not believe we have good reason to think AI could discover such a truth; instead, trying to imagine how an AI would discover it reveals the claim of the objective value of wellbeing to be a value claim, not a truth claim. This doesn’t affect my thesis, as I have been arguing that wellbeing is needed to describe morality only within confined, parochial contexts.
This does lead to interesting questions about AI: will there be a critical point at which advanced, general AI simply stops working? At the moment we can program ‘values’ and ‘purpose’ into AI, but an advanced intelligence, artificial or not, will be able to evaluate those values and purposes rather than being bound to them. Without an evolutionary legacy, or the ability to ‘enjoy’ itself in a conscious sense, will an advanced AI be paralysed by an existential absence of motives? But that’s a digression.
In summary, I think that thought experiments regarding the future of AI show the value of wellbeing to be immune to discovery from outside the context of consciousness, and thus not a universal truth about reality. (AI could throw me a curveball; I can’t pretend to understand it properly.) This realisation reduces the truth of morality from the universal truth Harris argues from to the parochial truth I have been discussing in parallel.