That’s an old principle, often attributed to David Hume if I’m not mistaken. It means that there’s no chain of reasoning that takes you from factual statements about the way the world is to normative statements about the way things should be.
That’s not to say that factual statements are irrelevant to ethical questions, just that when you’re engaged in ethical reasoning you need some sort of additional inputs.
Religious traditions often give such inputs. For the non-religious, one common point of view is utilitarianism, which is the idea that you ought to do the things that will maximize some sort of total worldwide “utility” (or “happiness” or “well-being” or something like that). The point is that in either case you have to take some sort of normative (“ought”) statement as an axiom, which can’t be derived from observations about the way the world is.
For what it’s worth, I think that Hume is right about this.
The reason I’m mentioning all this utterly unoriginal stuff right now is because I want to link to a piece that Sean Carroll wrote on all this. In my opinion, he gets it exactly right.
Sean’s motivation for writing this is that some people claim from time to time that ethics can be (either now or in the future) reduced to science — i.e., that we’ll be able to answer questions about what ought to be done purely by gathering empirical data. Sam Harris is probably the most prominent proponent of this point of view these days. If Hume (and Sean and I) are right, then this can’t be done without additional assumptions, and we need to think carefully about what those assumptions are and whether they’re right.
I haven’t read Harris’s book, but I’ve read some shorter pieces he’s written on the subject. As far as I can tell (and I could be wrong about this), Harris seems to take some sort of utilitarianism for granted — that is, he takes it as self-evident that (a) there is some sort of global utility that (at least in principle) can be measured, and that (b) maximizing it is what one ought to do.
Personally, I don’t think either of those statements is obvious. At the very least, they need to be stated explicitly and supported by some sort of argument.
I assume we both accept that morality is doing what is good. Good and bad, however, have no existence outside minds. Good (a.k.a. happiness), therefore, whatever it is, corresponds to some class of states of some physical systems that support minds (usually brains). As such, there is no principle preventing us from trying to measure it. As soon as we can measure it, we can correlate it to various stimuli, meaning that we will have scientifically determined what makes people happy, and thereby determined how to be moral.
Whatever makes me happiest is what I should do, by definition. There is no external motivation needed to accept this principle. There is no need to establish that happiness is the appropriate measure of utility and there is no need to explain why we should value utility.
“Whatever makes me happy is what I should do, by definition” is an axiom you are free to adopt, but it is certainly not self-evident, nor is it amenable to empirical test. In fact, to me, it’s self-evidently false. If you are a sadist who is made happy by torturing people, torturing people is not “what you should do.”
Happy is by definition the response to things that are good (if you disagree with this, just replace all instances of the word ‘happiness’ in my above comment with ‘mental states caused by good stimuli’). Goodness is by definition the degree to which something is appropriate.
“Whatever makes me happy is what I should do” is not an axiom, or something that needs to be tested empirically; it’s a trivial tautology: whatever is good is good.
There are complications, of course. E.g. it is possible for somebody to be mistaken about what makes them happy. But this is where science comes in.
Yes there are sadists out there, but we outnumber them, so our morality trumps theirs. To insist that morality is only valid if everybody agrees would certainly be doomed to failure, and symmetry would seem to demand that there is no privileged arbiter of morality, either. By the way, psychopaths still benefit about as much as we do from the protections offered by a free society, so their (unmistaken) morality is probably not much different from ours anyway.
The least radical assumption you can make, at least to me, would be some sort of ethical conservatism in the quest for continuation. I could imagine some scenario in which empirical evidence shows complex systems generally resist equilibrium, thus providing an empirical basis for a normative axiom (which I guess would be ‘Don’t equilibrate!’) for the complex systems known as human beings/society. If ‘preserve society’ follows from ‘don’t equilibrate’ (which I think it might), you have traditional moral views from a nominally empirical source. I’m not sure the statement ‘complex systems generally resist equilibrium’ is true (I’m certainly no information theorist), but it at least seems plausible, though somewhat pathological.
I guess people argue that evolution explains our moral compass, with individuals who develop a sense of ‘good of all’ in social species more likely to produce offspring in successful tribes. But while that might be perfectly true, I don’t think it’s really a claim on morality, since the question just moves up one level to “Why should we continue to base morality on our evolution-given instincts?”
Ok, there’s no chain of reasoning from “is” to “ought”. There is also no chain of reasoning from how things are here and now to any claim about how things will be in the future or other places.
But we still have to make decisions. We have to get to “ought” somehow. What do you recommend?
I recommend using evidence and reason; does that make me guilty of “reducing morality to science”? What’s the alternative?
I’m all for using evidence and reason. But you need to have at least one normative axiom (that is, one axiom with a word like “ought” in it), or else none of the conclusions you draw from your evidence and reason will be normative. That axiom or axioms will not be “facts” whose validity can be assessed by evidence — they’ll just be things one has to accept without evidence.
What should those axioms be? I don’t think it’s obvious. I’m not convinced that utilitarianism is right, or even that it’s well-defined, but it’s better than a lot of alternatives, so going with something along the lines of “you ought to behave in a way that maximizes global happiness” seems like not a bad idea. It’s possible — although by no means certain, at least to me — that one could eventually come up with an objective definition of “global happiness” in terms of the brain states of conscious beings. At that point, you’d have a well-defined system of ethics with one clearly-stated normative axiom. Even if that did prove to be possible, that axiom would be something that one had to assume without evidence.
Ted, respectfully, is it perhaps you who is invoking an unjustifiable axiom, that “ought” has to come from some external principle? This assumption seems to me to be a relic of an era in which human understanding was drenched in pseudoscience and superstition. As I have argued, the words “morality,” “wellbeing,” “value,” etc. all come together to make our most preferable choice of actions objectively determined by the situation we are in. If we want to investigate what these actions are, we need no further axioms than those that science routinely implements, as Allen has already suggested.
We live in a purely physical reality, yet “ought” is a word with no meaning for physical systems, so shouldn’t we just drop it? There is no sense in asking why we ought to value wellbeing; we just do, by definition.
This is not nihilistic: world happiness and individual wellbeing can be expected to be highly correlated, and to say you don’t care about your individual happiness would be self-contradictory. What we want to establish is the best way to produce wellbeing. There can be no more efficient way than rational science, except perhaps unbelievable luck, but I know what I’d prefer to rely on.
You make a number of different but related claims. Let me start with the simplest one: “The word ‘ought’ is outdated, so shouldn’t we just drop it?” First, note the odd circumstance that you can’t even ask that question without using the word “should,” an exact synonym for the word you think we should drop. That suggests that dropping it is not so easy! But anyway, if you think that discussions involving what one ought to do are uninteresting / outdated / meaningless, I have no desire to force you to engage in them. Other people find them interesting and choose to engage in them.
To get back to the more substantive point. You say the following:
“Whatever makes me happy is what I should do” is not an axiom, or something that needs to be tested empirically; it’s a trivial tautology: whatever is good is good.
I agree that this is not something that needs to be tested empirically — indeed, it is not something that *can* be tested empirically. But it is not a tautology. The phrase “what makes me happy” refers to one conceptual category. The phrase “what I should do” refers to a different conceptual category. The assertion that those categories coincide is not a tautology or a definition. It is a claim with which one can either agree or disagree.
As proof of my last statement, I will offer in evidence the fact that I strongly disagree with the statement “Whatever makes me happy is what I should do” (as have plenty of people before me). If torturing innocent children made me happy, that wouldn’t mean that I should do it. (Your response to this, by the way, is a non sequitur: it doesn’t matter that there are more non-sadists than sadists out there. Even if I were in a majority in wanting to torture people, I still shouldn’t do it.)
I offer the fact that I disagree with the statement in question, not to show that the statement is wrong, but to show that it’s not a tautology.
In fact, I suspect that our disagreements are mostly terminological. The main points, in the end, are the following:
– Different people disagree about the correct rules for deciding about statements involving the word “should”.
– Those disagreements are not resolvable by appeal to evidence.
This is true whether you call the disagreements matters of definition (as you prefer to) or choices of axioms (as I do).
I just saw your last response; I hope you don’t mind me continuing the topic, as it’s quite fascinating.
You and I are algorithms that seek happiness. That this is true results from the simple fact that happiness is what we seek – a tautology. If you don’t like identifying happiness with that role, then pick another word that does it for you. It’s not really about definitions of words, but the intrinsic properties of the important phenomena.
(We seek other things, beside happiness, such as low gravitational potential, but morality, in that it is “doing what is right,” specifically addresses our appreciation of value, and so happiness is the appropriate parameter for this discussion.)
“What makes me happy is what I should do” is just a basic description of algorithms like you and me. That you might disagree with this doesn’t prove anything about it not being a tautology.
Your extreme example of torturing innocent children has a certain rhetorical force, but what would it mean if your assertion were correct, that this behaviour is wrong, independent of any empirical consideration? What would be the source of that wrongness?
You might argue (I hope not) that this moral principle is god-given, but even that extreme view defeats itself instantly. Since morally we sometimes succeed and sometimes fail, any god-given moral law (or any other absolutist version) does not amount to physical law – there is the possibility for us to deviate – and so there must be something else that makes it desirable for us to be moral. Since our desires are in our heads, then in our heads is where we must look to find the source of morality.
Your argument that torturing children is unambiguously wrong, independent of one’s point of view, has the look of a defense against some kind of nihilism, but to claim that morality has no physical origin seems like a far worse kind of nihilism to me.