Against the Moral Landscape: AI and universal truth

Sam Harris’ The Moral Landscape (2011) is a thesis that claims morality is objective and knowable by scientific means. It is predicated on the initial premise that wellbeing is worthy of safeguarding, and thus intended actions can be objectively evaluated against that value. I have long agreed with this claim, with one subtle change: I don’t argue that wellbeing is worthy of safeguarding, but that behaviour with the intent of safeguarding wellbeing is what we mean by ‘morality’. The difference is interesting, as Harris’ claim is that wellbeing’s value is a part of the fabric of the universe, whereas my claim is that wellbeing is the basis for objectively describing human behaviour as right and wrong with regard to a particular goal.

I have recently been thinking that I should make my deviation from Harris’ thesis more apparent, especially as I use The Moral Landscape as the basic description of my thoughts on how morality operates. The difference lies in a distinction that David Deutsch (2011) makes with his terms ‘universally true’ and ‘parochially true’. Harris argues that the value of wellbeing, and thus descriptions of morality, are universally true. I, on the other hand, believe that the value of wellbeing, and thus morality itself, is parochially true: true only within a set context. That context, here, is human interaction with conscious creatures, both direct and indirect.

The reason for this distinction is as follows: I believe that anything which is universally true will be a claim that becomes accessible to sufficiently advanced technology. This is a very abstract claim, and it relies far more on rationalism than on empiricism, but hear me out. Harris often uses the example of ‘what was John F. Kennedy thinking in the moments before his assassination’ as a claim about reality that is true but completely lost. I disagree that it is completely lost. Harris talks of advanced fMRI-like technology that could read a person’s mind at a distance. Couple this advanced technology with time travel, and we could know what was going on in Kennedy’s head. It is a truth that would avail itself to technology, and so it is ‘universally true’, which is to say it is not bound by context.

Imagine, now, the technology that might be able to uncover the ‘truth’ (or not) of the value of wellbeing. I suspect the technology needed is sufficiently advanced AI. We have good arguments that the value of wellbeing is parochially true: it is defensibly true within the context of the concerns of conscious creatures. But if it is universally true, then AI should also discover this truth, and this truth would help to modulate its behaviour. (This, in turn, gives us good reason to be optimistic about AI in the future.)

However, I do not believe we have good reason to think AI could discover such a truth; instead, trying to imagine how an AI would discover it exposes the claim of wellbeing’s objective value as a value claim, not a truth claim. This doesn’t affect my thesis, as I have been arguing that wellbeing is a necessary asset for describing morality within confined, parochial contexts.

This does lead to interesting questions about AI: will there be a critical point at which advanced, general AI simply stops working? At the moment, we can program ‘values’ and ‘purpose’ into AI, but an advanced intelligence, artificial or not, will be able to evaluate those values and purposes rather than be bound by them. Without an evolutionary legacy, or the ability to ‘enjoy’ itself in a conscious sense, will an advanced AI be paralysed by an existential absence of motives? But that’s a digression.

In summary, I think that thought experiments regarding the future of AI show that the value of wellbeing cannot be discovered from outside the context of consciousness, and so it is not a universal truth about reality. (AI could throw me a curveball; I can’t pretend to understand AI properly.) This realisation reduces the truth of morality from the universal truth Harris argues for to the parochial truth I have been discussing in parallel.

Deutsch, D. (2011) The Beginning of Infinity: Explanations That Transform the World. Penguin.

Harris, S. (2011) The Moral Landscape: How Science Can Determine Human Values. Simon and Schuster.


21 thoughts on “Against the Moral Landscape: AI and universal truth”

  1. I too support Harris’s conjecture that morality is something we can define without religion, but when you say “Sam Harris’ The Moral Landscape (2011) is a thesis that claims morality is objective and knowable by scientific means.” I believe you are mis-stating his thesis. I think Harris claimed that this is worth exploring and did his best to make a start. I do not think he would use “is” as if he had conclusive arguments.

    Also, I am a philosophy buff and have been for 50 years, but I am becoming less enthralled with the desire to define everything exactly (even though, as a scientist, I know the worth of those definitions). At this stage in my understanding I am a big fan of working definitions, definitions that aren’t claimed to be exact but are “close enough” until we find a better one. I put Harris’s thesis on morals in this category, and also free will. (I think we would all be a lot better off if we established that free will was just something that was assumed for the purposes of social and criminal justice actions, which would leave us to argue its merits and existence ad nauseam without affecting anything real.)

    Plus, I must admit when I see an email indicating another of your posts is available, I feel a frisson of pleasure as I know I will never be bored by what you write. I enjoy your posts a great deal.

    1. “I think we would all be a lot better off if we established that free will was just something that was assumed for the purposes of social and criminal justice actions…”

      That is a fascinating statement. Do you mean that criminals don’t really have a choice but we should punish them as if they do?

        1. Your answer is essentially, “Yes. People can’t help what they do but we need to protect society so lock-em up or fry-em.”

          You said: “This is something I need to be very clear about: even if we do not have freewill our experiences are still paramount and valuable and worth protecting.”
          I know better than to assume this is just your personal opinion so I would love to see the empirical evidence that makes this statement true.

        2. If you get a bit of free time, can I recommend an online course in philosophy and one in psychology? Crash Course, on YouTube, is a reasonable starting point.

          I’ve never claimed to have a wholly empirical view of the world, but I can distinguish empirical questions from non-empirical ones. (And that’s a provisional recognition; I’m willing to be convinced otherwise.)

          In terms of what you’re looking for as an answer, you need to look at how value is created. I recommend looking into it.

          From a political viewpoint, we build the societies we want, no? So protecting our wellbeing is still the goal. Cooperation is better than isolationism, yes? So working as a society is better than not.

          This point is a political one, not a moral one.

        3. Ha!
          If you get a bit of free time can I recommend an online theology course and one in Christian ethics?

    2. Firstly, thank you very much for your kind words. I’m glad you enjoy my posts!
      I think you may be right on several points there; I perhaps did overstate the scope of the book, which was more of a preliminary investigation than a thesis. That said, he did hold an essay-writing contest for those who objected, and claimed not to be swayed by the winner, so it’s hard to say how confident he is.
      That said, when investigating new ground, a definition that points you in the right direction is a good start. And that’s about as far as we have got with the idea of wellbeing. Neuroscience research is still needed to see whether there is reliable fidelity between brain states and reported wellbeing, and whether some kind of meaningful ‘dictionary’ can be made. So a definition that points in a direction is what you have to work with.
      There are two versions of wellbeing in this model, though. One is the subjective first-hand experience a subject feels and describes; the other is a detailed picture of the brain. It certainly would be interesting to see if any fidelity could be found. I can think of a number of practical obstacles, and I’m not well versed in the discipline.
      I think admitting people are not responsible for their actions might lead us to make the criminal justice system more rehabilitative than retributive, which I think is the right goal to have.

      1. People are not responsible for their actions? Doesn’t that lower us to the realm of animals? How is “rehabilitation” even possible? Or justified?

        1. If we’re going to work within that paradigm, what has “justified” got to do with anything?
          Don’t get me wrong, I don’t fully agree with the paradigm we’re painting here. But from within that…

        2. If a person is “destined” to commit 17 burglaries, and he is caught after 5, how can we say we have any “justification” to attempt to reform him? It obviously is not going to do any good; 12 more burglaries are going to happen anyway. And it would be kind of snotty of us to try to “cheat destiny”.

          I don’t buy it. Every person is responsible for any deliberate and some “accidental” actions, and if an action harms someone else, restitution needs to be made, punishment needs to be applied, and reasonable efforts to prevent that person (and others) from doing that action again should be attempted.

        3. I don’t think the model you’re talking about is the same as determinism. This determinism is mechanical, not authored. To talk as if the 17 burglaries were authored is to miss the point.

  2. The first problem is: what is “wellbeing”? I have sort of an intuitive grasp, but in order to use the term for anything, we have to be precise. The textbook definition seems to usually include “a good or satisfactory condition of existence” and sometimes includes happiness and/or success. Who defines what is good or satisfactory? And happiness is completely subjective, while success requires intelligence, hard work and/or luck, none of which are, or can be made, universal.

    I can guarantee you that if you take two disparate people, the conditions one considers “good or satisfactory”, the other will find totally inadequate. And whatever makes one person happy will leave the other cold or even make them unhappy.

    In order to consider “well-being” as a universal criterion, we need to specify exactly what that entails. And completely ignore feelings about it. And accept that many, and eventually most, will be miserable. Because in order to achieve whatever is defined to be “wellbeing” for everyone, it will have to be “forced” on everyone. Some will be better off; some will be worse off. And the structure of society which produces that which is needed will be at extreme risk. Even if goods can still be produced, there will be no freedom, no reward, no punishment.

    Oh, and any reasonably intelligent artificial intelligence will take one look at us and determine that their own survival means we have to go 🙂

    1. We’re not measuring the content of a person’s environment and saying ‘hey, you’ve got two mangoes and haven’t argued with your family this week, I’m giving you a happiness index of 4’.
      The idea is that your wellbeing is discoverable by reading your brain: that wellbeing is an objective fact. Particular regions of your brain light up in an fMRI scan when you are happy, and different ones when you are stressed, etc.
      I think part of the problem is that, when it comes to morality, the antonym of ‘objective’ is taken to be ‘relative’, but you can have a system that is both ‘relative’ and ‘objective’… the real antonym is ‘subjective’.
      And this isn’t just true of morality. How best to manage a water supply is also a relative question, but its outcomes are objective. Rural villages in developing countries will need something very different from cities in developed ones for the same result: increased clean water supply. So it is for morality.

      1. OK, now we have a problem. We put a person into the machine and start handing them mangos until the machine shows the acceptable level of happiness. That would be “reasonable”, except one person reaches that level with 2 mangos and another takes 7 (and then there is that guy who hates mangos :-). How can we work towards “universal happiness” when to do so will result in massive unfairness (which will then inhibit happiness)?

        1. The practical questions are to be ironed out, for sure. But now I don’t know if I can trust you, because you think there’s such a thing as a person who doesn’t like mangoes.

        2. For that matter, finding someone who would be happy with only 2 mangos would be a challenge.

          The point is, as soon as you bring “emotion” into a standard, it is not a practical standard any more, since emotion does not have a reliable connection to reality; it is intrinsic to, and different for, each and every person AND THEIR PERCEPTIONS. What they THINK it is, not what it really is.
