Thursday, February 27, 2014

Answer to Sam Harris's Moral Landscape Challenge

At the start of his website’s FAQ for the Challenge, Sam Harris summarizes what he calls his book’s central argument. That summary is clearly invalid: he slides from the assumption that moral values “depend on” facts having to do with conscious creatures to the conclusion that morality itself has scientific answers. This is like saying that because land-dwelling animals depend on the ground beneath their feet, biology reduces to geology.

As for the implicit argument in The Moral Landscape, as I interpret it, that argument is also flawed. Harris thinks that because morality has to do with the facts of how to make conscious creatures well, and these facts are empirical, there’s a possible science of morality. Putting aside the question of what exactly counts as science, let’s consider whether any kind of reasoning tells us what’s moral. Take, for example, instrumental reason, the efficient tailoring of means to ends. If we want to maximize well-being and we think carefully about how to achieve that goal, we can, of course, help to achieve it. Is that all there is to morality? No, because instrumental reason—as it’s posited in economics, for example—is neutral about the preferences themselves. This kind of rationality takes our goals for granted and evaluates only the means of achieving them. So we can be as rational as we like in this sense, and the question will remain whether our goals are morally best.

Russell Blackford makes the same point and Harris replies that a utopia in which well-being is maximized is possible, and so if a bad person’s preferences stand in the way of realizing that perfect society, we might as well change that person’s way of thinking, even by rewiring his brain. Presumably, we could do that—just as we could turn an altruist into a psychopath. Reason alone doesn’t tell us which would be the superior person, so Harris’s response here merely begs the question.

For another example of reasoning, take the basic scientific aim of discovering the probable facts through observation and the testing of hypotheses. Conceivably, scientific methods could uncover facts of how Harris’s utopia would work, and they might even lay out a roadmap for how to perfect our current societies. As Harris says, some present societies might be better than others, given the ideal of maximizing well-being. If empirical reasoning could tell us that much, would that answer the central moral questions?

No, because as Harris admits, science wouldn’t thereby show that the maximization of well-being is factually the best ideal. Rather, a science of morality would presuppose that utilitarian ideal as self-evident, just as medicine presupposes the goal of making people healthy, as Harris says. This analogy is flawed, though, because doctors can fall back on the biological functions of our organs, whereas moral aims needn’t be the same as what we’re naturally selected to do. Medical doctors try to make our bodies function in the way that maximizes our species’ fitness to carry our genes. Note how much harder it is to explain what counts as mental health. This is because the question of which mind is ideal is partly a normative one, and science apparently doesn’t address it.

So is our well-being self-evidently what we all ought to pursue? No, for at least two reasons. First, we may not deserve to be happy, or we may be obligated to suffer because too much well-being would be unseemly in the indifferent universe that we have no hope of altering. This is roughly the point of the Christian doctrine of original sin, which may be neither here nor there for secularists, but the doctrine also reflects the ancient Eastern religions’ pessimism about natural life. Instead of trying to be happy, says the Hindu or Buddhist, we should ideally resign ourselves to a detached and alienated perspective until we can escape the prison of nature with honour. Robert Nozick makes a similar point with his Experience Machine thought experiment: living in a computer simulation might maximize well-being in terms of our conscious states, but people tend to feel that such narrow flourishing would be undignified under the circumstances.

Second, “well-being” is a vacuous placeholder that must be filled by our personal choice of a more specific ideal. Harris says otherwise, because he thinks his Good Life and Bad Life illustrations point us in the direction of the relevant facts. But notice that his heroine leading the good life is on a slippery slope to suffering in the way that Oskar Schindler suffers at the end of Spielberg’s Schindler’s List. “I could have done more,” Schindler says in horror. The fact is that the more empathetic we feel, the more we must personally suffer because in that case we must suffer on behalf of many others. So a world in which we prefer to maximize collective well-being is simultaneously (and ironically) one that maximizes individual suffering, and that’s so even though many people would come to our aid in such a world. A selfless person can’t accept aid or even compliments that could just as well go to other, more needy folks. Indeed, those with altruistic motives intentionally sacrifice their personal well-being, because they care more about others than themselves.

Thus, in so far as the ideal of well-being includes the goal of personal contentment, this ideal is opposed to the moral one of altruism, of maximizing (other) people’s happiness. Which goal is preferable isn’t up to pure reason of any kind. Rather, as with all our core values, we must ultimately take a leap of faith that our personal stamp is worth putting on the world. And in so far as so-called rationalists would presuppose Harris’s utilitarian ideal, they’d clash with pessimists, existentialists, world-weary misanthropes, melancholy artists, esoteric Hindus, Buddhists, and the like. Moreover, given how extroverted Western norms have spread across the globe, the matter would likely be settled not by reason but by force and a crass lowering of standards.

6 comments:

  1. "The fact is that the more empathetic we feel, the more we must personally suffer because in that case we must suffer on behalf of many others. So a world in which we prefer to maximize collective well-being is simultaneously (and ironically) one that maximizes individual suffering, and that’s so even though many people would come to our aid in such a world. A selfless person can’t accept aid or even compliments that could just as well go to other, more needy folks. Indeed, those with altruistic motives intentionally sacrifice their personal well-being, because they care more about others than themselves."

    Seems to me this can be accounted for in a naturalistic, world-bound pain/pleasure scheme. A nail in the foot pushes the body in one way. Cognitive dissonance pushes it another. Guilt another still. The altruist tries to minimize the displeasure caused by (possibly empathy-driven) guilt by trying to minimize the displeasure of others. The altruist is selfish (in a good way!) and that sort of selfishness leads potentially to less net suffering in a broader society. You might counter that this is reductive and reductive is bad for reason X or that there's a superior framework that involves transcendence but I don't see an explanatory gap here.

    1. An altruist may want to maximize collective pleasure, but anyone who would be driven to such altruism would be motivated by personal suffering, owing to the characteristic of empathy, which is the ability to suffer in response to other people's situations.

      Thus, the goal of maximizing collective well-being is in conflict with the goal of maximizing your own well-being. Those who care about their own happiness will learn to suppress the pains caused by their conscience and to ignore the plight of the masses. We lose sight of this conflict when we focus on the utilitarian's pseudo-scientific mathematical rhetoric. It's one thing to talk about balancing pleasure and pain ratios or maximizing net pleasure, and so on, but it's another to consider the sort of people who would naturally be driven to act in that moral way. Those people would be incapable of personal happiness, contrary to Sam Harris's rosy portrayal of them in his Good Life scenario.

  2. I agree with much of your take here, especially criticism of Harris on the fact/value issue, but still have two complaints.

    I ran a marathon in January. There was a lot of pain involved. Various bodily systems that evolved to run for survival reasons were running for postmodern goal-oriented reasons. My authoritarian brain fought a war with my muscles, stomach, heart, liver, etc. and won. I was better off, experientially/phenomenally, and probably physically (if being healthy is good), for having done it. I swapped one pain/pleasure cocktail (laziness, boredom, comfort) for another (physical discomfort, chemical highs, accomplishment feelings/ego highs). The suffering was an absolutely necessary part of the experience. I don't think you can have art without suffering, either. Sex relies on a release of tension and withdrawal from the high. Even the best things in life have a pain element, even if it's only the non-eternity of the pleasure. It's hard to believe Harris would deny this, though I haven't read his book. So there's a potential strawman with Harris ---> utopia ---> zero pain.

    Second, I agree that pain/pleasure as experienced cannot be locked down mathematically. This was part of your response to antinatalism, as I recall. And Harris' technocrat manipulation plan brings to mind a shock therapy Freudian horror show. Math heuristics evolved for physical objects, I think RS Bakker would say.

    But I think that to say, as you do, something as strong as "those people would be incapable of personal happiness" (where I assume you mean happiness as experienced), you need a mathematical element, if only a vague one. You seem to be saying that an altruistic person would experience, say, -5 happy units to produce +6 (or whatever) happiness elsewhere, without gaining those -5 happy units back for self via some feel-good ego altruistic mechanism. Or something. I'm confused.

    On the other hand, beating children leads to brain damage and a condition I think we'd have to call poor mental health. Now talking about mental health as "poor" or "good" is a value judgment, but only in the sense in which saying that 1+1=2 is one. In the end, both are responses to bodily systems that push us in certain directions. One is a straightforward product of cognitive dissonance, the other a murkier, more complex response to something similar. Both are physiologically based. I could say I don't care if adults beat children but I couldn't say it honestly, any more than I could have said, at mile 24, "my calves feel spectacular right now!" I could even recommend that adults beat children but then the entire ideational world that's built up in me (where I'm basically a decent person, and all my decent person-y beliefs) would be put under great pressure (causing pain) and the smart thing would be to keep things relatively coherent by being anti-child abuse, insofar as consciousness is some kind of me that controls such things. (For the record, I strongly advise that no one physically or emotionally abuse any child. Hope I didn't give the wrong impression there.)

    So I think science does have something to say about general well-being but also that it depends on individuals to determine what this is. The child does not want to be beaten. The idea of a scientist thinking he knows what's best for someone to the point of rewiring brains is terrifying and absurd at the same time. It's imperialist, and anyone who understands how that works knows it never goes well for the rewired countries.

    1. Yeah, I'm not talking about happiness as the absence of pain. Harris leaves it open what he means by "well-being"; as I said, it's a vacuous placeholder that will be filled in differently by different people, thus leaving the moral question of what our goals should be quite open and unscientific. Harris comes closest to specifying what he means by "well-being" with his Good Life illustration, and that's the one I criticize with the point about Oskar Schindler.

      I'm talking about happiness in the sense of contentment and I'm saying that empathy causes anxiety which conflicts with contentment. So in the Good Life, the woman seems content because she has a fulfilling job and social life, but I'm saying that if she's driven to help people because she feels their pain (and that would be the primary cause of altruistic behaviour), she'll feel guilty about her comfort and angst-ridden when she learns of all the people she can't help. There's a real conflict here between feeling content and feeling anxious. Those who are most personally content are less altruistic because they're less conscientious. The more narrow-minded you are in your empathy, the less anxiety you'll feel in a world in which billions suffer.

      And the conclusion I draw from this is just that there's a nonscientific choice here, between personal happiness (your own contentment) and collective happiness (which requires altruistic self-sacrifice). Science doesn't tell us which is better. Buddhists and other grim ascetics would say that the goal of personal happiness/contentment is foolish, because suffering is inevitable (including the suffering from empathy), so they detach from their cravings and thus sacrifice a part of their mind to feel the peace that comes from a sort of non-being. Science doesn't tell us whether that sacrifice is good or bad.

      All science can tell us here is whether achieving some goal is an efficient way of achieving some other goal. That is, science can expose the instrumental relationship between our goals and inform us about the probabilities involved. I say this in the first half of the article. So I agree that "science does have something to say about general well-being"; namely, science can tell us how to achieve our goals once we've already decided on our ultimate ones. Unfortunately, this isn't Harris's thesis. He belittles the philosophical, nonscientific choice between moral axioms, which is the choice between our ultimate goals, including the goals of personal vs collective contentment.

  3. Your points are well spoken, and although they contain some inaccuracies (such as your personal interpretation of the Moral Landscape), arguing with you over them is semantic.

    What I find most disturbing about this piece is your interpretation of happiness. Firstly, that humans (created equally) can be more or less deserving of happiness. Especially seeing as you had just raised the issue of mental health, do you really believe some people don't deserve happiness? And who gets to judge that? (Please don't say your subjective God will; you can believe what you want to, just don't force it on others.)

    Secondly, and most importantly to my mind, there is your general narrow-mindedness. It simply seems as though you don't think humanity should strive for happiness and the improvement of the livelihoods of people and other conscious animals alike. If you DO believe we should be improving things for everybody - without being altruistic martyrs - what exactly do you suggest as a better alternative to the one raised by Dr Harris?

  4. You're saying that my interpretation of the book's main argument is inaccurate? What I say is just that "Harris thinks that because morality has to do with the facts of how to make conscious creatures well, and these facts are empirical, there’s a possible science of morality." If that's wrong, what do you think the book's main argument is?

    As for happiness, you might be interested in my YouTube video on the subject (link below). I don't think this is an issue of "narrow-mindedness." Rather, it's a conflict between Eastern and Western philosophies. Anyway, I'm not arguing here against happiness. All I'm saying is that science doesn't tell us that our highest ideal should be happiness. The fact that Eastern traditions say otherwise tells us it's a matter of normative philosophy and religion, not science.

    I'm an atheist, by the way.

    https://www.youtube.com/watch?v=wCT7VXKO110
