Mining in the Moral Landscape: explaining why Sam Harris’ moral framework is still better than a religious one

I like addressing and engaging with challenges to the ideas that I share here (I run a constant ask me anything policy). A blogger called Debilis shared a few challenges to The Moral Landscape that I want to discuss (on my post The Evidence for Objective (secular) Morality). I may not be the best person to discuss the issues, of course, because I am not the author of the book. But I consider myself an articulate writer (even though my blog is rife with errors and omitted words—grammar Nazis be warned!). I also think that I have given the issue a lot of thought, so I’m going to give answers a go.


Is this a commitment to total wellbeing or average wellbeing?

In a population of fixed size, the total and the average differ only by a constant factor, so maximising one maximises the other. However, issues arise when you consider a choice between two actions that lead to two very different worlds: one with a low population, where everyone has high wellbeing; against one with a very high population, where everyone has low wellbeing. For those having a hard time picturing this, consider the following:

Reality 1

Population: 1,000

Average wellbeing units, per person: 90

Total wellbeing units: 90,000

Reality 2:

Population: 100,000

Average wellbeing units, per person: 1

Total wellbeing units: 100,000

Assuming that ‘wellbeing units’ (which I’ve made up) can be negative, and a wellbeing unit score of 1 still isn’t suffering, which of the two realities above should we aim for?

I first want to discuss an issue with the question itself. I cannot conceive of a way any person would face a choice with implications as significant as the above without a lot of murder being involved. And murder causes suffering: it cuts people off from the wellbeing they could have experienced; it causes feelings of loss in those who knew and loved them; it creates fear and panic in those who realise another person has taken it upon themselves to take lives. So I think the question loses power when you consider that no action can realise the choice in isolation. No dramatic change in population will occur without the associated lowering of wellbeing that comes with either mass death or dwindling resources.

That said, my intuition is to defend average wellbeing. I am very Malthusian in this way, and I am very aware of the Tragedy of the Commons (I try not to drop in terms like that without an explanation, so I’ve linked them to Wikipedia). To introduce new people into a system, knowing that it will lower the wellbeing of the existing system, seems immoral to me. But even then there is a measure to weigh. Consider the merging of these two populations:

Population 1:

Population: 50

Average wellbeing units, per person: 100

Total wellbeing units: 5,000

Population 2:

Population: 50

Average wellbeing units, per person: 10

Total wellbeing units: 500

The average wellbeing across these two populations is 55. Imagine, now, that the merge lowers Population 1’s wellbeing from 100 to 90, while Population 2’s rises from 10 to 60. The average wellbeing of the merged system is then 75.

Merge, to become Population 3:

Population: 100

Average wellbeing units, per person: 75

Total wellbeing units: 7,500

Is Population 1 morally obliged to incur this wellbeing loss? I think so. Can we very easily sympathise with Population 1 resisting this? Of course. And violent resistance will alter the average (and total) wellbeing.
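The merge arithmetic above is just a population-weighted average. A minimal sketch, using the made-up wellbeing units from the example (the helper name `merged_average` is mine, not anything from The Moral Landscape):

```python
def merged_average(groups):
    """Population-weighted average wellbeing of several groups.

    Each group is a (population, average_wellbeing) pair; the merged
    average is total wellbeing divided by total population.
    """
    total_population = sum(pop for pop, _ in groups)
    total_wellbeing = sum(pop * avg for pop, avg in groups)
    return total_wellbeing / total_population

# Before the merge: Population 1 averages 100, Population 2 averages 10.
print(merged_average([(50, 100), (50, 10)]))  # 55.0

# After the merge shifts them to 90 and 60 respectively:
print(merged_average([(50, 90), (50, 60)]))   # 75.0
```

The same weighting is why equal-sized groups split the difference exactly; with unequal populations the larger group pulls the merged average toward its own figure.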

For those that haven’t figured it out, my answer is that I don’t know the answer to the question. I suspect the question is moot, and I don’t think it is a fatal flaw. Practically, it is mere detail (although the philosophical power of the question is clear).


If consciousness is key (as I gather from your reference to Sam Harris), are those whose consciousness is inhibited exempt from moral worth?

Yes, consciousness is key. Yes, it does mean we should consider people and animals who experience consciousness, suffering and happiness differently in the light of their own experience. This does not mean the suffering of a person with a certain psychological condition is of different moral “worth [sic]”. Just as avoiding suffering is one horn of The Moral Landscape, fostering and encouraging happiness is another.

The question extends beyond learning difficulties, though. What about people in comas? Assuming it is true that a person in a coma is neutral to suffering and happiness (I’m not sure this is a valid assumption; I don’t know whether they are capable of registering things. But I shall run with it) the question of moral considerations to coma patients boils down to a few questions:

  • Will they ever wake up?
  • How will the family feel about decisions you make regarding the coma patient?
  • How many people are being deprived of medical care, and to what extent, to pander to the wellbeing of the family? To word that differently, is maintaining the coma patient lowering or stagnating the average wellbeing of the hospital/family system?

If I am wrong in my assumption that a coma patient is neutral to happiness and suffering, then the coma patient needs to be considered like any other thing that experiences wellbeing.


As there are possible worlds where the peaks of well-being are not characterized by anything we’d normally call “moral”, does it make sense to simply identify morality with well-being?

It is possible I’ve been a bit sloppy with definitions, and if so I’d like to take this opportunity to clarify. Morality is still about the decisions of conscious creatures. If a sunny day heightens your wellbeing, the sunny day is still not moral. A sunny day is not conscious. There will be a lot of background noise in any graph that attempts to plot wellbeing, because of variables like weather, hormone cycles, personal health and other non-conscious variables. Only the changes caused by decisions made by conscious creatures count towards morality. This may make the moral truth hard to pinpoint and difficult to discern, but it does not make it not true.


Do we consider the well-being of future generations (who do not yet, and may never, exist)?

Yes. It is intuitively obvious that making the entire planet happy, now, by depleting all the natural resources and polluting the atmosphere is immoral. This is because future generations will have to make do in a polluted and depleted world. And the wellbeing of future generations will be a real thing. It will be true. Historically, we can know that medical advances have been moral by the decrease in disease-based suffering over the last few hundred years.

There is an obvious practical issue: the existence and conditions of future generations are unknowable. We will have to be guided by our best understanding. Live in the context of what you do know! We know that low biodiversity and limited access to natural areas cause misery; we know that starvation and living on the brink of survival make people unhappy; we know that infant mortality and lack of access to medical care are not preferable. And we know that future generations probably will exist (after all, future generations have existed ever since life began; hence life). And we know (approximately) the impacts we are having through time. So we should consider future generations, and we should be guided by our best guesses. We may be wrong. But if we are wrong, it will still be a discoverable truth about reality!


Is it a moral duty to spread a falsehood (say, belief in a false religion) if it will increase well-being?

Theoretically, yes. And in the short term, lies may well help people. But in the long term, coming face to face with reality and having a lie you have believed challenged often causes more harm than good: it damages relationships and trust; it causes inner turmoil. Sam Harris has written a short book called “Lying”; it doesn’t take long to read, and I implore you to do so. It covers exactly this.

With specific regard to religion, I think it gets worse. There are some people, and I talk to some of them on here, who honestly are happier with their religion. But I have friends who have to live with the idea that I am going to Hell and that God refuses to stop things like earthquakes and tsunamis. Another religion-specific issue is that it leads you to blame yourself for your failures, but to credit your successes to God.

So, in theory, if a lie could keep everyone happy it would be immoral to destroy that lie. But from my experience no such lie exists.


Most importantly, how can this position show, rather than assume, that it is objectively true that we ought to promote well-being?

Anything that proves (or demonstrates) itself is engaged in circular reasoning. I think this question is really the “is/ought” problem, which is not nearly as complex or as difficult as some make it out to be.

Originally I defined “morality” according to linguistics. When people talk of morality they are talking of decisions made based on wellbeing. There can be other bases to your decision-making, like economics or selfishness or flipping a coin or religion. But when your decisions are based on wellbeing we call that morality. Nothing about that obliges you to do anything. I have not provided you with a single “ought”.

“Ought” comes in later. If I want my car to run smoothly, I ought to change the oil. Nothing obliges you to change the oil except your own want and some basic mechanical knowledge of what is. If I want to be clean, I ought to shower. Nothing about the word “ought” is inherently tied up in morality. Consider this: “if I want to purge the German people of Jewish pests I ought to kill all the Jewish people”. If you define morality as whatever you ought to do, anyone who acts according to what they want cannot be said to be immoral. However, if you define morality as “what you ought to do, according to God’s commands” you are making the same leap: from what is (God’s commands) to a value judgement about what you ought to do (obey, whether out of respect, fear of Hell or desire to go to Heaven).

The definition I am using for morality does not make any “is/ought” leaps. It simply says that if you say you are being moral (i.e. motivated by the safeguarding of wellbeing) then we can measure to what extent you have succeeded.


5 thoughts on “Mining in the Moral Landscape: explaining why Sam Harris’ moral framework is still better than a religious one”

  1. This is an excellent response!

    Just to continue the discussion, I’ll add some thoughts. But my main reaction is that this is very nearly the positions I would have taken, were I defending this form of utilitarianism.

    So, let’s see…


I personally lean toward “average well-being” as well. My trouble with embracing it entirely is that it really does seem plausible that raising the average well-being of the planet could be done through means such as mass murder. In particular, those who have few to no social relations (whose deaths would cause the least pain) would also tend to be the least happy (whose deaths would raise the average well-being).

    This is not to say that there isn’t good reason to want to raise the average well-being, but that this looks like an incomplete theory on its own. It leads me to suspect that there is a deeper layer to morality that hasn’t yet been reached.


    I think the issue of consciousness is the area where I agree with you most fully. I think “well-being of conscious creatures” needs to be expanded to “well-being of creatures capable of consciousness” for this to work.

Of course, that opens up the issue of what counts as “capable”, but I think this is reasonable in general. Without that addition, though, I think there are some very disturbing implications.


    Regarding the idea that well-being doesn’t always line up with moral choices:

    While I completely agree that there will always be background noise (and that this doesn’t refute the position), I think I miscommunicated the thrust of this question.

    The point is that this moral system completely divorces intention from morality. To use a goofy example: would a mad-scientist who releases a plague upon the world which, due to an error in production, cures cancer instead of killing people, be a moral person?

    Moral intuition would clearly say “no”, but it is hard to see how utilitarianism would condemn such a person. I think most of us would agree that it is the intent, rather than the outcome, which is the key moral factor (this, by the way, is my primary reason for rejecting utilitarianism).


    I agree with your conclusion that future generations should be considered, but I don’t yet see how that is supported by the maxim that morality is that which aids the well-being of conscious creatures.

Future generations are not conscious creatures, nor are they even (per the addition above) creatures capable of consciousness. Rather, they are potential creatures that (if made actual) will be conscious.

    I suppose a further addition could be made for concern for potential creatures, but I have two thoughts to add to this:

1. This, again, suggests to me that (while aiding the well-being of others is indisputably a good thing) the core of morality is deeper than this moral system recognizes.

    2. While no contradiction, it is noteworthy that the addition here would almost certainly result in an opposition to abortion (though Harris himself favors abortion rights).


    In terms of spreading falsehoods, this is rather like the issue of intentionality. I’ll not repeat myself over that, but I do think the matter of religion is worth mention.

    That is, the anthropological studies I’ve encountered have shown that religious belief is, in general, healthy for society. Though Harris and others like to emphasize the negative things they see in religion (such as “belief in Hell”), the data shows that it is an overall positive force in terms of well-being.

Now, it may not be good for those who disbelieve in religion to promote belief dishonestly (I suspect it isn’t, personally), but it would certainly be immoral for atheists to actively campaign against religious views, given this system.

    But this runs counter to my moral intuition. Even though I happen to be a theist, I see nothing immoral about an atheist attempting to persuade people to what she believes to be the truth, even if it is damaging to social cohesion.

So, again, this is not to say that I don’t see great moral value in seeking the well-being of others, but that I suspect this theory is incomplete.


    As far as justifying the maxim that morality is all about the well-being of conscious creatures, I do disagree here. Apologies though, I’d meant to completely avoid arguing from a theistic stance (almost made it!), but need to do so here.

    The is/ought problem is only a problem from a materialistic perspective. Many other views have absolutely no trouble with it.

    This is not, by the way, because “God said it” is used as any kind of fundamental justification for the existence of “oughts” in most forms of theism (only the most puerile use this), but because many views of reality reject the idea that all facts are material facts. It was the embracing of this view (that facts are material) which created the is/ought problem; it did not exist before this.

    While your response is, in my view, a very strong one in the context of a discussion among materialists, I don’t think it has any purchase against those who reject materialism.

    Really, it strikes the latter group as a particularly clear reason why materialism is an incomplete metaphysics.


    Those are my thoughts, anyway. I really appreciate the post. It is clearly the product of an intelligent mind, and I’m glad to have read it.

    So, thank you for taking the time to go through those questions!

    1. Thank you for your thoughts.
The intention/consequence scenario is an interesting one. But if you knew the scientist meant to kill people, you would want to stop that person because they pose a great risk to wellbeing. Their mistake, their error, may make the action a moral one, but the person’s wish to pose a threat to wellbeing does make them a risk (and incites fear, which is itself a lowering of wellbeing; it sounds petty, but if you have ever lived in real fear then you know how bad it can be…).

      1. Greetings once again!

        I completely agree that one would want to stop that person. So it makes sense, under this ethic, to take action to restrain such a person.

        My only issue is with calling his action a moral one. It seems odd to call such a person moral (or to call a person in the opposite scenario immoral).

        I think we’ve definitely left what most would call morality, and are instead talking about practical policies for human beings. While these are important, my reaction is that the term “morality” is misleading.

        Okay, those are my thoughts.
        Otherwise, best to you out there.

  2. Harris then makes a pragmatic case that science could usefully define “morality” according to such facts (about people’s wellbeing). Often his arguments point out the way that problems with this scientific definition of morality seem to be problems shared by all science, or reason and words in general. Harris also spends some time describing how science might engage nuances and challenges of identifying the best ways for individuals, and groups of individuals, to improve their lives. Many of these issues are covered below.
