Sep 28, 2010

New complications for adult stem cell ethics

Adult stem cells don't raise the same moral questions that embryonic stem cells do, since embryos need not be destroyed to use adult stem cells for research. But this simple picture is changing rapidly as stem cell science advances: according to a recent article in Scientific American, it may be possible within a decade to use adult stem cells to create gametes.
  • Verifying that the pluripotent adult stem cells are safe for therapeutic use requires comparing them to embryonic stem cells. So, at least in the near future, there will be significant scientific demand for embryonic stem cells.
  • It's cheap and easy to collect tissue (like skin cells) and produce adult stem cells. Once it's possible to create gametes from adult stem cells, this means that tissue donation can amount to donating sperm or eggs; we need a good regulatory regime to deal with the obvious ethical concerns that this possibility raises. For example, if I donate some skin tissue, do I have any right to refuse that gametes be produced from that tissue, even if my anonymity would be protected?
  • With gametes (and thus, potentially, children) being produced from adult stem cells, the barriers to parenthood are dramatically lowered. Conceivably, the very elderly or very young, as well as the deceased (who have donated tissue), could become parents. What policies should we develop to deal with these possibilities?
  • It will be easier than ever for parents to select for particular traits in their children. This raises important questions about the moral constraints on such choices. For instance, are parents morally obliged to create the "best" possible child? There are also related legal questions. Should children be able to bring "wrongful life" suits against their parents for bringing them into the world with a severe disability?
These may be some of the most important issues in applied ethics and public policy in the coming decades.

Sep 26, 2010

Reasons for Love and Particularism

Jack loves Jill, and we'll assume that he has some reasons for doing so, reasons that are based on properties that Jill has in herself; she's smart, funny, likes climbing hills, etc. But now suppose Jill has a twin sister, Jane, who shares all the intrinsic properties that Jill has, and so shares all the properties that Jack takes as reasons to love Jill. Now it looks like Jack has as much reason to love Jane as he has to love Jill. And he'd have more reason to love Jane than Jill if Jane were to become, for example, smarter than Jill. [1]

This 'twin paradox' is an odd result. The rationality of loving someone should be relatively stable in the face of changes outside of the relationship, such as facts about who else shares some of the properties of the beloved. (This is especially clear in cases of familial love, though I'm focusing here on romantic love.) We are led, it seems, to have to reject the view that the reasons for love are intrinsic properties of the beloved, on pain of admitting an unacceptable instability to our loving relationships.

One way to avoid this result is to refine our conception of how the properties of the beloved give rise to reasons to love. We might propose that Jill's beauty, for example, gives Jack a reason to love Jill, while Jane's beauty does not. How could this be? One explanation would be particularism or holism about reasons for love. On such a view, a property that Jill has may be a reason for Jack to love Jill, whereas if Jane has that property, it would not provide Jack with a reason to love Jane. There is something about the beauty being Jill's that matters for Jack's reasons; it's not beauty itself, which happens to be instantiated in Jill, that gives Jack a reason, but rather Jill's beauty. 

If we're attracted to particularism about practical reasons in general, we might find this a plausible move. There's another tack we can take here, however.

We can grant that the intrinsic properties of someone provide reasons to enter a romantic relationship with that person, so long as one is not already in a romantic relationship. But things might be more complicated when someone is already in a relationship. While Jack is in a relationship with Jill, for example, I'm supposing that Jill's beauty gives Jack reason to love Jill. What about Jane's beauty? Here's a proposal: In virtue of Jack's being in a relationship with Jill, Jane's intrinsic properties ought not give Jack any reason to love Jane (in a romantic way). This proposal is different from the particularist reply because we're not requiring a particularist construal of the reasons for love. Instead, we're considering that relationships put normative constraints on how we take things to be reasons. We can put it this way: being in an exclusive relationship means excluding the intrinsic properties of third parties from being reasons for love.

We saw that if we take a property view of reasons for love, the 'twin paradox' results. I proposed two solutions to this problem on behalf of the proponent of the property view, one giving a particularist account of the reasons, and one focusing on the normative demands of romantic relationships.

I'm just getting started in some of this literature, so what I'm saying here is very tentative. I don't yet know whether I want to endorse a property view, but I thought it worth pointing out some dialectical strategies for the property theorist that seem to have some merit.



[1] The example is based on a similar scenario in Niko Kolodny's "Love as Valuing a Relationship."

Sep 21, 2010

Should the government try to undermine conspiracy theories?

Here's a question in applied political philosophy. Would the U.S. government be justified in "cognitively infiltrating" 9-11 conspiracy theory circles in order to undermine the credibility of those theories? Cass Sunstein and Adrian Vermeule think so, arguing in the Journal of Political Philosophy last year for "developing and disseminating arguments against conspiracy theories, governments hiring others to develop and disseminate arguments against conspiracy theories and governments encouraging others informally to develop and disseminate arguments against conspiracy theories." This might involve entering "chat rooms and online social networks to raise doubts about conspiracy theories and generally introduce ‘cognitive diversity’ into those chat rooms and social networks."[1]

Steve Clarke is right to point out that these measures are unlikely to have the intended result, and may even backfire. But that strikes me as beside the point. Even if we knew it would have great results, it still seems like we ought to be suspicious of this kind of government action. For one thing, while I have no sympathy for any of these conspiracy theories, it's not as if it's a crime, or even particularly subversive with respect to law and order, to believe some absurd theory about 9-11. So there's clearly not a national security justification here, like there would be if we were talking about spying on an Al Qaeda cell, for instance. What other good justification could there be here?

Also, a good way to think about this is to consider other possible targets for cognitive infiltration. For example, suppose it came to light that in the run-up to the U.S. invasion of Iraq the Bush administration was cognitively infiltrating anti-war groups; or suppose the Obama administration was secretly hiring people to write blogs aimed at destroying (what's left of) the reputation of Sarah Palin, or some other GOP figure. While liberals might be more upset at the former, and conservatives at the latter, clearly almost everyone would be upset about one or the other. What principled basis could we have for thinking that targeting conspiracy theories is fine, but that it's not fine to target other kinds of social or political beliefs? I think the obvious answer is: none.

[1] These quotes are from Steve Clarke's summary at Practical Ethics.

Sep 20, 2010

Sinnott-Armstrong on Divine Command Theory

Walter Sinnott-Armstrong argues that if divine command theory is true, we can't know something is wrong unless we know that it has been forbidden by God. Matthew Flannagan objects to this. On the divine command theory of, say, Robert Adams, wrongness is constituted by being forbidden by God, just as water is constituted by H2O. So, just as we can know that something is water without knowing that it is H2O, we can know that something is wrong without knowing that it is forbidden by God. I think this is on the right track, but there are some issues I'm not yet clear about.

I wonder if the Sinnott-Armstrong worry reappears if we flesh out the divine command theory a bit. Suppose wrongness is constituted by being forbidden by God. It might be that not every token wrong action is such that it is directly forbidden by God; rather, we can imagine a DCT on which some basic action types are directly forbidden by God, and the wrongness of token actions of each type follows from, but is not constituted by, the wrongness of the basic action types. On this kind of view, it looks like Sinnott-Armstrong’s objection goes through, since the wrongness of (say) stealing will not be constituted by being forbidden by God; rather, it will be more indirectly forbidden. I’m not sure yet whether this kind of view is plausible, but maybe it’s something like what Sinnott-Armstrong had in mind when making his argument.

I also wonder if the worry arises at the level of second-order knowledge, i.e. knowing that we know a moral truth. Water is constituted by H2O, but we can know all kinds of things about water without knowing that it’s H2O. Similarly, wrongness is constituted by being forbidden by God, but we can know all kinds of things about what is wrong without knowing about God’s commandments. Now, here’s the worry I have in mind. Can we know that we know that stealing is wrong? You might think this requires more than just knowing that stealing is wrong. It might even require knowing that wrongness is constituted by being forbidden by God. If it does, then Sinnott-Armstrong’s worry reappears. This might commit you to thinking that we can’t know that we know anything about water unless we know that it’s H2O; I’m not sure yet that that’s a bad result.

Finally, and perhaps most forcefully: even if our knowledge that this or that action is wrong were safe from Sinnott-Armstrong’s objection, it still seems that we couldn’t know why an action is wrong at the most basic level unless we knew that it was forbidden by God. Is that bad? Well, it certainly is if knowing that the action is forbidden by God is an important part of the reason for not doing it. Divine command theorists need our reasons for acting to be secure in the face of uncertainty about divine commands, as well as our knowledge of which actions are right and wrong.

Is our world better off without carnivores?

Jeff McMahan has a nice piece in the NYT philosophy blog arguing that we’d have a better world if we could eliminate (in some suitably humane fashion, like clever genetic engineering) the carnivores. I’ve heard arguments like this before, from non-philosophers, but this is the first time I’ve heard a philosopher making it. I think the line of thought has some intuitive appeal. If eating meat is morally bad because of the animal suffering that goes along with it, then isn’t that true whether or not it’s humans doing the eating? Isn’t a lion eating a gazelle morally on par with my eating a hamburger, since both involve causing an animal to suffer? If we grant this parity, moral arguments for vegetarianism amount to moral arguments against the existence of carnivores. We should take steps to reduce meat-eating by humans, as well as by non-human animals, because doing so means reducing animal suffering. That, anyway, is the line of argument McMahan is proposing.

I’m not quite sure that the argument goes through. It seems to require as a premise that a world with no carnivores at all contains less animal suffering than a world that has non-human carnivores but no human carnivores. That is, eliminating carnivores from the animal kingdom would result in a world with less animal suffering.

I find this implausible. There isn’t much difference in suffering, and thus, in moral badness, between an herbivore being eaten by a carnivore, and an herbivore being out-competed by another herbivore, or dying a natural death due to disease, injury, etc. In fact, we might think there’s less suffering involved in being eaten than in slowly starving by being out-competed, or in freezing to death in a cold winter. And, if the suffering from predation isn’t greater than the suffering without predation for a given animal, then McMahan has to show that fewer animals suffer in a world without predation. But that seems false, given plausible assumptions about population dynamics.

Let me end on a positive note. McMahan’s piece is rich and challenging, raising a lot of important issues. He questions, for example, whether ‘species’ is a morally relevant category: “The claim that existing animal species are sacred or irreplaceable is subverted by the moral irrelevance of the criteria for individuating animal species.” With this, I agree; we talk about animal species as a convenient way of carving up nature and organizing our biological knowledge, but it doesn’t seem terribly important to morality. (This isn’t to deny anything about human exceptionalism; I would think that if anything makes humans special morally, it’s not something that depends essentially on our species membership. That is, you can still think humans have intrinsic dignity even if you don’t think that that dignity depends on our being a member of Homo sapiens.) I heartily recommend reading McMahan's rewarding article. 

Sep 19, 2010

Charles Taylor on Irreducibly Social Goods

Charles Taylor has a neat essay called "Irreducibly Social Goods." [1] He wants to know whether there are any irreducibly social goods, i.e. social goods (e.g. friendship) that cannot be decomposed into what's good for individuals. His argument that there are such goods appeals to Wittgensteinian considerations about the 'background,' the implicit array of meanings that makes possible our language practices. For Taylor, I can think of myself as sophisticated, whereas that thought simply isn't available to a medieval samurai, due to deep differences in our cultural backgrounds. Without a background of the relevant sort, certain thoughts are impossible. The same holds for values. On Taylor's view, being in a certain culture makes it possible for certain things to be values. Sophistication can be a value for us, though it can't be for the samurai, according to Taylor.

Taylor then wants to apply this Wittgensteinian point about values to the question of irreducible social goods. The argument goes like this. Suppose a culture makes possible certain individual goods. It will follow, all else equal, that the culture that makes those goods possible is a good. But what kind of good is it? It is closely linked to various individual goods, as we've seen. But this linkage isn't causal; the culture isn't one among many things that could have brought it about that the individual goods exist. Rather, those goods are unintelligible apart from the culture. Because of this intricate connection between the culture and the goods it makes intelligible, Taylor thinks that the culture is an intrinsic, rather than merely instrumental, good. Further, the culture is an irreducibly social good, because there is in principle no way to make a value intelligible just to some individuals in the culture and not to others. (Cf. Wittgenstein's rejection of private language.)

This is an intriguing argument, but I don't think it quite works. For one thing, why should it be that whatever makes it possible for something to be understood as a good is itself a good? As far as I can tell, Taylor doesn't answer this question, and it's certainly not obvious why we would want to answer in the affirmative. It's not clear that it's relevant, for example, whether the making possible is merely causal or not.

Another worry is that Taylor's argument equivocates about the concept of cultural background. We can agree, for the sake of argument, that the connection between there being a cultural background at all and there being values at all is essential, in a non-causal sense; but it doesn't follow that the connection between any particular cultural background and the particular values it makes possible is also a non-causal connection. For presumably any number of different cultures could provide a background against which the values that are intelligible in our culture would also be intelligible. We need a way to individuate cultural backgrounds to spell out this worry in detail, but it at least seems like there might be an equivocation here.

Taylor has a second argument later in the article, but it's geared toward a somewhat different set of social goods. So, it matters a good deal for his overall purpose whether this first argument in fact succeeds.

[1] Reprinted in Charles Taylor, Philosophical Arguments.

Cheating and Deception in Sports

This is like picking low-hanging fruit, of course, but I can't help commenting on this piece by Bruce Weber in the Saturday NYT concerning cheating in sports. Apparently Yankees shortstop Derek Jeter faked getting hit by a pitch. Though the ball hit only the bottom of his bat, he dramatically acted as if it had struck his hand. Weber wants to argue that it's puritanical to find something objectionable, morally or otherwise, about Jeter's histrionics; unfortunately, he strikes out when it comes to giving good reasons for his view.

The proposal is that we can distinguish licit deception from cheating proper in terms of spontaneity. If a player is responding spontaneously to the situation of the game, like Jeter supposedly was, that's fine; if the action is more deliberate and planned, that's more likely to be cheating. But why should degree of spontaneity have anything to do with whether pretending to get hit by a pitch, or taking a fall in soccer, is to be considered cheating? Typically, how premeditated an action is will matter only for the blameworthiness of an actor, not for the rightness of the action; here, it's supposed to matter for rightness, and that's just perplexing.

Weber also claims that deception is inherent to competition. But that's obviously false. How could I deceive my opponent in tic-tac-toe, for example? And even when a game does involve some deception (like chess, arguably), there's a difference between deception that's built into the structure of the game and deception that falls outside that structure. (For example, it would be objectionable to promise not to look at your opponent's Scrabble rack while he grabs a sandwich, but then sneak a peek. It's simply not part of the game of Scrabble to look at your opponent's tiles!)

I think it's difficult to spell out what a good view of this question is, but I'd suggest a few basic points to start with.

1. We respect honest players more than those who capitalize on opportunities to gain through dishonesty. Moreover, we ought to do so.
Weber disagrees, saying that we don't expect players to argue calls that it's not in their interest to reverse, even if they know they were wrongly made. I guess he doesn't expect much of athletes morally; that's a shame.

2. What counts as cheating is going to depend on specific facts about the sport. (Weber agrees with this.) In American football, for example, a quarterback's hard count is designed to trick the defense into jumping before the snap and drawing an offside penalty. I would say this is fine, perhaps partly because this practice is common knowledge, so it's not as deeply deceptive as feigning injury. What's common knowledge among participants in an activity will vary depending on what the activity is; it's different in baseball than soccer, and might be different across token games of the same sport.
(Common knowledge has a technical meaning in philosophy. It's common knowledge that something is the case if everyone knows it, and everyone knows that everyone knows it, and so on.)
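The recursion in that definition can be made explicit. As a sketch, writing $E_G\,p$ for "everyone in group $G$ knows that $p$," one standard formalization from the epistemic logic literature (the notation here is mine, not the post's) is:

```latex
% Common knowledge as the infinite conjunction of iterated "everyone knows"
% E_G p : everyone in group G knows that p
% C_G p : p is common knowledge in group G
C_G\, p \;\equiv\; E_G\, p \,\wedge\, E_G E_G\, p \,\wedge\, E_G E_G E_G\, p \,\wedge\, \cdots
```

Equivalently, common knowledge is often characterized as a fixed point: $C_G\,p \equiv E_G(p \wedge C_G\,p)$, i.e. everyone knows both that $p$ and that $p$ is common knowledge.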

3. This is a bit more adventurous: what it means to be a good x while y-ing is sensitive to what it means to be a good x. So, what it means to be a good person while playing some particular sport is sensitive to what it means to be a good person. I don't know if we can say much in general about the relation between being a good person and being a good athlete, because of point (2). But Weber's article tends to deny any kind of relationship here, which I think is dangerous. The demands of morality and propriety don't vanish when we step on the field.

4. This gets pretty messy, since it involves a doing/allowing distinction, but here goes: There's a difference between actively deceiving an umpire (like Jeter seems to have done), and merely failing to correct the mistakes of an umpire. Weber disagrees with this, but doesn't give any reason for doing so.

That's all I want to say for now. My hunch is that there are interesting parallels between the right account of cheating in sports and the right account of lying in conversation.

Sep 14, 2010

A puzzle for moral norms governing thought

It's tempting to think that morality concerns not only actions- donating to charity, for example- but also mental states like intentions, beliefs, desires, and emotions. Applying moral norms to mental states, however, raises some interesting problems.

A commonly discussed problem is whether we have the kind of control over our mental states that would justify thinking that they could be subject to moral norms. This is a version of 'ought implies can': if we ought to have this or that mental state, it must be the case that we can bring it about that we have that state, or so one might think.

But there's another problem that's less commonly discussed. It's plausible that if I care about following a moral norm, I will try to check from time to time whether I am in fact following it: a conscientious moral agent monitors his own adherence to moral norms. The problem, though, is that for moral norms governing our thoughts, self-monitoring can be counterproductive. For instance, suppose there's a norm that I ought not think about cookies while on a diet. To monitor my adherence to this norm, I might think, "Ah, now let me see: have I thought about cookies lately?" And this very question may either itself involve thinking about cookies, or, as an empirical matter, very likely lead me to think about cookies. (For a more serious example, from Christian ethics, consider the claim that "every one who looks at a woman lustfully has already committed adultery with her in his heart." [1]) So, we see how self-monitoring can be counterproductive. And if it is deeply counterproductive, it would seem that the morally conscientious agent should not engage in this kind of self-monitoring, at least not in a rather wide range of cases. That's a surprising result.

There's a lot going on here. We need to spell out what moral norms on thought content might look like in more detail. And we need an account of the content of a thought. These are tall orders. At the very least, though, I think we've seen that self-monitoring of mental content raises important issues for any moral theory that proposes to bring our mental states into the moral domain.

[1] Matthew 5:28, RSV

Welcome!

The title of this blog is a nod to Boethius, a Roman philosopher of the 5th and 6th centuries. Lady Philosophy has her consolation, to be sure, but never without consternation.

I will use this space to consider any philosophical topics I find interesting at the time; usually not at the level of depth of a seminar paper or similar, but I will try to show why one might find the question interesting and important. Given my own interests, the posts will tend to focus on questions in moral philosophy, including meta-ethics, normative ethics, and applied ethics.