Book Reviews

McGrath, Sarah. Moral Knowledge. Oxford: Oxford University Press, 2020. Pp. 240. $65.00 (cloth).

Moral knowledge can seem impossible. After all, while we understand fairly well how we can know what is the case, how on earth could we know what ought to be the case? But in Moral Knowledge, Sarah McGrath clearly and powerfully argues that we can acquire moral knowledge in all the ways we come by ordinary empirical knowledge. Just as I can know that it’s now raining by perception (seeing and feeling the raindrops), by inference (the people outside are using umbrellas), or by testimony (my mother, who is outside, is texting me about the weather), so too can I gain moral knowledge through any of these channels. Moreover, we can lose moral knowledge in any of the ways we can lose ordinary empirical knowledge. Moral knowledge is thus radically domesticated.

Here I’ll summarize several of the main threads of this excellent book, and along the way I will raise just a few worries about the arguments therein. The book has an introductory chapter, a concluding chapter, and, in between, four substantive chapters, each of which is devoted to one particular subtopic: the method of reflective equilibrium (MRE), testimony and expertise, observation and experience, and losing moral knowledge.

The chapter on MRE is the best discussion of the topic of which I am aware. McGrath is aptly pessimistic about its powers. She argues that on its most defensible interpretation, MRE takes for granted that we typically already have some moral knowledge, knowledge that the method hopes to extend by making our moral views more coherent. But this means that we don’t get all our moral knowledge from MRE. Much like testimony (discussed later), MRE can extend only what already exists.

Agreeing with many reflective equilibrium theorists, McGrath argues that when we reflect on our moral convictions, we should prioritize neither our general moral views nor our lower-level moral judgments. Neither is automatically privileged with respect to the other. If we weren’t justified in being confident about one level, we wouldn’t be justified in being confident about the other level; and if confidence in neither were justified, then MRE couldn’t take us anywhere good. Fortunately, we do already have some moral knowledge, and so MRE can extend it modestly. MRE, however, might be best at delivering not moral knowledge but moral understanding. When we align our general moral views and our particular moral judgments, we better grasp why those particular moral judgments are true. The more general principles can explain the facts captured by our particular moral views, and grasping these explanations is one form that moral understanding takes.

If MRE is better at delivering moral understanding than moral knowledge, the opposite can be said, McGrath argues, about the method of testimony. Those familiar with her articles know that she holds that moral testimony, whatever its problems, is indeed a way to gain moral knowledge. She defends the Moral Inheritance View, the view that one way to gain moral knowledge is by adopting the beliefs of those around us, even if we are unable to cite the reasons grounding what is known. Just as we can know via testimony that it will snow tomorrow, even if we lack access to the concrete reasons that testifiers have, so too can we know moral truths via testimony, despite lacking direct access to the concrete reasons that testifiers have. This falls out of a more general principle she adopts and defends: in the absence of some compelling reason for thinking otherwise, we should assume that the standards of knowledge do not vary from domain to domain. So, it’s not difficult in principle to gain moral knowledge by testimony. Even so, she argues that we typically don’t have an independent basis for attributing moral expertise to others: we esteem those with whom we already agree. So, there is little genuine opportunity for adults to gain moral knowledge from those they recognize as having it. At best, they can hope thereby to calibrate and refine their own moral judgment.

Although testimony is indeed a source of moral knowledge, McGrath argues that the epistemologically interesting issues concern not moral testimony per se but the broader issue of moral deference. The putative problem is that if you hold a moral view because you’ve deferred to someone else, then you typically don’t understand why that view is true. This is problematic for at least two reasons. First, when you judge something to be wrong, you are expected to be able to cite facts in virtue of which it is wrong. But if you have completely deferred to the view of another, then, she argues, you won’t be able to meet this expectation. (One might wonder, however, whether these facts, too, could be learned by testimony.) Second, it’s an ideal of moral agency to be able to do the right thing for the right reason, but if you know only what’s right and don’t understand why the right thing is right, then you won’t be able to do the right thing for the reasons that make it right. So, acting on the basis of moral deference is, at best, second-best.

I’ll pause my summary to briefly flag a couple of worries about this criticism of moral deference. As I’ve argued elsewhere, in typical responsible cases of (adult-to-adult) moral deference, the hearer does grasp the various operative reasons (or goods and bads) at stake but defers to a speaker about how to weigh them up. For example, if you are a minimally competent adult, you already know that it’s pro tanto bad to allow five people to die and that it’s pro tanto bad to kill one person, yet you remain unsure whether it’s wrong to turn the trolley or to kill a healthy patient for their organs. Thus, you might defer to someone in a better position to know such things. But even if you do so defer, you could still (1) cite facts in virtue of which one of the options is wrong (“That’s allowing five people to die!”) and (2) do the right thing for the right reason (“I’m turning the trolley to save five people.”) So, I think that McGrath doesn’t completely show that moral deference is problematic in the ways she describes. Of course, I concede that it is possible to morally defer in ways that are problematic; one probably shouldn’t defer to those who one thinks are in a worse position to know what’s so. But one can problematically defer about nonmoral matters too, if one is reckless or lazy. It is not clear to this reader that there is any peculiar problem with moral deference.

Like MRE, however, testimony can’t be the principal source of moral knowledge; moral truths first have to be known by someone in some other way. Thus, in the most ambitious chapter, McGrath argues that experience and observation can contribute to moral knowledge in the very ways they contribute to ordinary knowledge. One way that experience contributes to moral knowledge is by enabling us to entertain the relevant contents: you can know that, say, murder is wrong only if you have the concept “murder” and experience can enable you to grasp that concept. Experience can also trigger moral knowledge. As a young man, Einstein was an absolute pacifist, but witnessing the Nazi era led him to conclude that violence could be just. He might never have thought about the matter further absent such an experience. Experience, then, can both enable and trigger moral knowledge.

More ambitiously, McGrath argues that observation and experience can confirm and disconfirm one’s moral views, even those views that are also knowable a priori. Nonmoral observation can disconfirm one’s moral views, because it can make one’s overall view less coherent in such a way that the most reasonable response to the incoherence is to give up one’s original moral view. Likewise, when nonmoral observation makes one’s overall view more coherent, it thereby tends to confirm the moral views thus implicated.

It’s worth discussing one of McGrath’s main examples in arguing for these claims. Suppose Ted initially believes both (1) that same-sex marriage is intrinsically wrong and shouldn’t be condoned and (2) that social recognition of same-sex marriages would have bad consequences, including leading to an increase in the divorce rate. Note the structure of this second belief: even though Ted thinks that the wrongness of same-sex marriage does not consist in any bad consequences flowing from its recognition, he still believes it would lead to bad consequences at least in part because it’s (already) intrinsically wrong.

Now suppose that at some later time Ted observes that the legal recognition and social acceptance of same-sex marriage do not lead to an increase in the divorce rate, nor to any other significant bad consequences. Ted’s views are now less coherent. He could adjust his views in various ways to make them coherent again, but suppose he retains the view that recognition of intrinsically wrong practices leads to bad consequences but decreases his confidence that same-sex marriage is wrong. This adjustment is rational, and it shows how Ted’s original view that same-sex marriage is wrong may be disconfirmed by observation.

Let me flag a worry before proceeding to the next topic. Ted’s moral view is disconfirmed by observation only because he is confident in the complex conditional: if same-sex marriage is wrong, then if same-sex marriage becomes socially accepted, the divorce rate will rise. Observation shows him that the main consequent of that conditional is false. So, if Ted retains his confidence in the conditional, he will need to lower his confidence in the main antecedent (viz., same-sex marriage is wrong). This is how observation can disconfirm a moral view.
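The structure of Ted’s revision can be displayed as a modus tollens on a nested conditional (the formalization here is my own, not McGrath’s; let $W$ abbreviate “same-sex marriage is wrong,” $A$ “same-sex marriage becomes socially accepted,” and $D$ “the divorce rate rises”):

```latex
\begin{align*}
& W \rightarrow (A \rightarrow D) && \text{Ted's complex conditional} \\
& A \wedge \neg D && \text{what observation delivers} \\
& \therefore\ \neg (A \rightarrow D) && \text{the main consequent is falsified} \\
& \therefore\ \neg W && \text{modus tollens (or, probabilistically, lowered credence in } W\text{)}
\end{align*}
```

Everything observational here is nonmoral; the moral claim $W$ is touched only via the conditional Ted antecedently accepts.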

But this structure is available to any domain, not just morality. Suppose Ted also initially believes, contrary to Euclid’s theorem, that there is a largest prime number. He also holds that if there is a largest prime number, then if Euclid’s theorem and other false mathematical views become widely held, technological advancement will decline. Suppose, however, he observes that as more and more schoolchildren are learning Euclid’s theorem, technology continues to advance at a rapid pace. Ted’s views are now less coherent. He could adjust his views in various ways to make them coherent again, but suppose he retains the view that widespread mathematical ignorance hurts technological development but decreases his confidence that there is a greatest prime number. This adjustment is rational, and it shows how Ted’s original view that there is a largest prime number may be disconfirmed by empirical observation.

But Euclid’s theorem, of all things, isn’t confirmable empirically. Proof seems necessary. Ted’s empirical observation that nothing bad happens as social acceptance of (same-sex marriage/Euclid’s theorem) increases “disconfirms” his own false view about the badness of (same-sex marriage/Euclid’s theorem), only because he also holds the rather specific complex conditional(s) noted above. Those who doubt that morality is empirically confirmable are unlikely to be persuaded by such examples to change their minds, and reasonably so. After all, the lesson seems to be that if Ted starts with both (1) a rather odd view about how the truth of a moral/mathematical view covaries with something empirical and (2) a wildly false moral/mathematical view, then empirical observation can make it more reasonable to revise the latter than to revise the former. But that’s only because of how bad the latter view is. It’s probably more sensible to think that Ted should give up both views.

Moving on, the final substantive (and most original) chapter discusses the question whether one can lose moral knowledge. Gilbert Ryle famously claimed that it was ridiculous or absurd to say, “I’ve forgotten the difference between right and wrong.” If correct, this might seem to show that moral knowledge is not like knowledge gained by expertise or ordinary empirical knowledge, which can be forgotten. Ryle ultimately argued that someone who can no longer distinguish right from wrong hasn’t forgotten anything but has simply stopped caring about the right things.

McGrath, then, must respond to Ryle’s challenge. She does so by arguing that while you can indeed lose moral knowledge, doing so corrupts you in a way that makes it difficult and awkward to recognize that you’ve been so corrupted. Someone who once knew the difference between right and wrong but no longer does would now be in a bad position to say, “I’ve forgotten the difference between right and wrong,” even when that proposition is true. It’s what some call a blind spot proposition. What makes it absurd to say that you’ve forgotten the difference between right and wrong is not that this proposition can’t be true but that when it is true you’re not in a good position to recognize its truth. So, while ceasing to care about the right thing is indeed one way to lose moral knowledge, McGrath aptly argues that there could still be other ways to do so, including by forgetting things.

How can a person forget what’s right and wrong? McGrath notes that one can cease to know something because one’s justification for believing it becomes undermined by the presence of a defeater. Suppose, for instance, that you’ve correctly calculated the sum of a large set of numbers. But then you also realize that two other people working on the same problem arrived at a different answer, that you’re tired, and that you’ve been distracted lately, and now your justification for believing that your answer is indeed the sum has been weakened. So, although once you knew what the sum was, now you don’t. Mathematical knowledge can thus be lost, and something very similar can hold true for moral knowledge (although, I think, it would still be odd to call this an instance of forgetting).

Space doesn’t permit me to summarize McGrath’s critical discussion of Ronald Dworkin’s view that our moral beliefs are relatively immune to being undermined by discoveries about their etiologies. Her reply to Dworkin, however, is one of the most compelling arguments of the entire book, and I’ll leave it as a teaser for readers to check out for themselves.

I’ll close with a final thought about Moral Knowledge, the best book on its topic of which I am aware. Some have ambitiously thought that moral knowledge is distinctively first-personal: you grasp moral knowledge in virtue of being self-conscious. A complete account of moral knowledge would investigate this thought too. That book, however, remains to be written, but I hope whoever writes it does so as lucidly and cogently as McGrath here investigates the empirical sources of moral knowledge.