Over at Marginal Revolution, Tyler and Alex have of late been pondering the role of animals in the social calculus -- namely, the question of whether, and to what extent, animals matter.
Tyler believes they do, though he's not quite certain how to account for them:
Surely it seems reasonable to count the welfare of animals -- or at least selected high-cognition animals -- for something rather than nothing. But this throws moral calculations into a funk. Even if you count individual animals for very little, there are many billions of them.
For my part, I'm less than troubled by my own speciesist instincts. The reason, I suppose, is that, from a social contractarian point of view, animals just generally aren't parties to the negotiation. Besides their unfamiliarity with the language and the lack of opposable thumbs making signatures difficult to obtain, I just don't trust their sense of reciprocity, either with us, or with one another. I love and am eternally fascinated by bears, but the experience of those unique souls who have chosen to commune with them suggests they aren't prone to hold up their end of the bargain. Nor do they particularly trust you to hold up yours.
Which is not to say that some form of reciprocity is impossible to maintain with an animal, but there remains in such relationships a significant asymmetry in the balance of power. Master provides Fido with Puppy Chow, health care, and a warm doghouse. Fido provides master with his slippers, a frisbee partner, and a babe magnet down at the local walking trail. Despite the language barrier, both seem to implicitly proceed from the Schelling point that outward aggression on either's part will jeopardize the terms of this arrangement as a going concern.
Yet, despite this micro relationship, Fido has no place in the macro compact among humans to which his master belongs, not even the three-fifths of a person accorded under chattel slavery. Perhaps this is inequitable, but I don't find it at all troubling, and don't share Tyler's desire to justify a place for Fido at the table. Down, boy...this is for people.
Alex, on the other hand -- betraying his background as a teenage Randroid -- thinks Tyler's problem is his unwillingness to throw caution to the wind and allow the white wings of REASON to whisk him away to whatever logical conclusion lies waiting at the end of the Rationality Rainbow, no matter how intuitively ludicrous it might seem:
Tyler wants to find a theory that both rationalizes and is consistent with our intuitions. But that is a fool's game. Our intuitions are inconsistent. Our moral intuitions are heuristics produced by blind evolution operating in a world totally different than our own. Why would we expect them to be consistent? Our intuitions provide no more guidance to sound ethics than our tastes provide guidance to sound nutrition. (Which is to say, they are not without function but don't expect to be healthy on a yummy diet of sugar and fat.)
The reason to think deeply about ethical matters is the same reason we should think deeply about nutrition - so that we can overcome our intuitions. Tyler argues that we don't have a good approach to animal welfare only because he is not willing to give up on intuition.
Tyler asks (I paraphrase) 'Would you kill your good friend for the lives of a million cats? What about a billion cats?' He answers, No, but says "Yet I still wish to count cats for something positive."
My answer is not only Yes it is that we do this routinely today. The introduction of "your good friend" (or "children" in Larry's example) engages our primitive intuitions and feelings and that is why Tyler's answer goes awry. But consider, last year Americans spent more than 34 billion dollars on their pets. That money could have saved human lives had it gone to starving Africans.
I think Alex is wrong on several counts, but more importantly, I think he's copping out on directly addressing Tyler's inquiry. Alex says that many people already value animals -- at least, they value the animals they own -- and demonstrate their preferences by spending money on those animals' upkeep. This is true, but unremarkable. Many people also value their homes, and similarly spend many billions not only purchasing, but improving and maintaining those homes. Any broadly utilitarian set of ethics would have no trouble incorporating both sets of human preferences, and acknowledging that people value some things more than the well-being of their fellow humans. In fact, people often seem to value most things more than they do the well-being of their fellow humans.
But it strikes me this was not at all what Tyler was on about, as he seems to have no concomitant concern about how to account for the welfare of one's home. He seems to proceed from the notion that animals have intrinsic value, not just the subjective value different people ascribe to them. Or, perhaps, he is just unwilling to accept a utilitarian approach to moral theory, in which respect he is hardly alone, though I don't see why the issue of animal welfare should confound the matter any more than a host of other concerns.
I also think Alex's comparison of moral instincts to nutritional instincts is faulty. It is not that our predisposition to crave sugar and fat is wrong, but that it isn't perfectly applicable to all scenarios in which modern man often finds himself -- namely, inhabiting a world in which sugar and fat are far more plentiful than our ancestors found them to be, while many other nutrients are necessary to sustain long-term health. But if one's evolutionary predisposition to feel varying degrees of empathy toward concentric circles consisting of kin, tribe, race and species is similarly ill-suited to the particulars of modern life, then pure reason would seem to suggest abandoning such empathetic inclinations wherever practical. Sociopaths everywhere are vindicated.
What Alex wants to say is that, where moral reasoning and moral intuition prove incompatible, we should abandon moral intuition and stick with moral reasoning, much as reason forced us to abandon the more intuitive Newtonian framework for the strange world of relativity, and the even stranger world of quantum mechanics. But that argument is valid only if moral concepts, like the concepts of physics, have empirical truth value. If they do not -- and I remain, well, skeptical -- then attempting to apply reason to move past intuition toward a fuller elaboration of moral theory amounts to an updated version of theological debates about how many angels can dance on the head of a pin. It's spinning the straw of intuition into golden nonsense.
I agree with Alex on this much: our moral intuitions are certainly inconsistent, and frequently incompatible. As evolutionary adaptations, they've proved a stable enough paradigm to ensure the survival of the species, but that stability obviously has its limits. Struggling to reconcile incompatible intuitions raises the costs of social bargaining considerably. But by moving away from those intuitions toward abstractions that few would accept, Alex misses the main value that moral intuitions still hold -- not that they're true, but that, at the very least, they're common.