An Islamic Sharia law court has been established in Antwerp, the second-largest city in Belgium.

The Sharia court is the initiative of a radical Muslim group called Sharia4Belgium. Leaders of the group say the purpose of the court is to create a parallel Islamic legal system in Belgium in order to challenge the state’s authority as enforcer of the civil law protections guaranteed by the Belgian constitution.

The Sharia court, which is located in Antwerp’s Borgerhout district, is “mediating” family law disputes for Muslim immigrants in Belgium.

The self-appointed Muslim judges running the court are applying Islamic law, rather than the secular Belgian Family Law system, to resolve disputes involving questions of marriage and divorce, child custody and child support, as well as all inheritance-related matters.

The tell-tale phrase here is self-appointed. In what kind of world should we care what self-appointed judges think is fair? Only in one where those self-appointed judges leave us no other choice.

People vary in their beliefs about God, Heaven, and so on. Why is this? Surely not because they have had different experiences with respect to God, Heaven, and so on.

Usually, different beliefs imply different experiences. Ava lives in Trenton and works in Manhattan; she believes that commuting to work is the worst lifestyle ever invented. Ben lives in Hilo, and bikes a couple miles to work; he believes that commuting to work is kind of fun. So these are beliefs based on experience. But if Ava tells us that God is the personification of kindness, while Ben describes him as terrifyingly judgmental, this cannot be because they have had differing experiences of God. Neither person has ever talked to God or seen God doing anything. So, what can be the reason for their different opinions? Only that they have been told different things about God. Their beliefs are based not on experience but only on rote learning. And this learning is of course not about God (since there’s no such thing), but only about what you’re supposed to say about God.

A belief about what you’re supposed to say about God is not the same thing as a belief about God. So the truth is that, although Ava and Ben offer us different verbal descriptions of God, they do not have different beliefs about God. They don’t have any beliefs about God; they’ve just learned to recite different slogans.

I am serious.

You cannot mean a statement that you do not understand. Religious statements cannot be understood. Therefore, people who make religious statements do not mean them.

Take the sentence, “God is perfect goodness.” No one really understands this. Which means that when they say it, they don’t mean it. Which is the same as saying that they don’t believe it.

Imagine saying it to yourself, and then asking yourself, “Do you agree?” If you don’t know what it means, you can’t honestly agree with it. And if you can’t honestly agree with it, then you don’t believe it.

Someone might say, “Well, I don’t understand it, but greater minds than mine do understand it, and I trust those people to tell me the truth.” Putting aside the question of why you trust these authorities, let me remind you that very few priests or theologians claim to understand the nature of God. In fact, such people are practically unanimous that such understanding is not accessible to mere mortals, including themselves. It has always been so. But this means that, when they say “God is perfect goodness,” they do not understand it, and therefore they do not believe it. Like everyone else in the business, they are merely repeating a string of words that neither they nor anyone else has ever truly believed.

People frequently profess beliefs about what God wants them to do or not to do. But then they don’t always follow God’s recommendations – which is pretty strange. Wouldn’t you figure that if a person really believes in God, and really believes that God wants them to do something, then they would absolutely do it (or die trying)?

Conceptually, let’s divide all the beliefs that people can have into two types. Type A beliefs have an observable effect on the believer’s behavior. Type B beliefs do not.

Now that type B is identified, it can immediately be ignored. If it doesn’t affect what you do, it’s not sociologically interesting. Type B beliefs might include, for example, the Catholic notion of the Trinity. Whether you believe that God is “One” or “Three” or “pi” is not going to make a detectable difference in anything you do (except for trivial cases such as what you’re likely to say when asked about the number of God). The vast majority of propositions about God fall into this category. What you believe about the nature of the deity will make little or no difference to the rest of us. In fact, this may be a misuse of the word “belief.” We don’t see you believing stuff so much as just saying stuff.

Type A beliefs are more interesting. They make a difference. For example, if you believe that God does not want you to eat pork, you probably won’t eat pork.

But it’s not as simple as that.

First, many of the beliefs that supposedly constitute religious participation are not followed. Catholics, by definition, “believe” that contraception is wrong. But most of them use it anyway; therefore, they don’t actually believe that it’s wrong. So this would be a type B belief – that is, not a belief at all.

Still, many people really don’t eat pork, for “religious reasons.” But what does this mean? If we ask such a person why he doesn’t eat pork, he says (approximately) “Because I believe that God doesn’t want me to.” But it is well established that people can be wrong – sometimes very, very wrong – about their own reasons for doing things. So, let’s take a look at this person’s environment. He is surrounded by people who are telling him that God does not want him to eat pork. That is, eating pork is frowned on by his community. The people around him are telling him that it’s not OK to eat pork, and if he eats pork he is going to have to answer to them.

Which is more likely: that I don’t eat pork because I believe that it is deprecated by God, or that I don’t eat pork because I know full well that it is deprecated by my friends, neighbors, family, and local law enforcement? Which is more likely: that I worry about God punishing me in some unimaginable way, or that I worry about people punishing me, in ways I can imagine all too well?

So, the real reason that some people don’t eat pork is obvious: social pressure.

The idea that people have religious beliefs that dictate their behavior falls apart in at least three ways. First, some beliefs are too abstract to affect any practical decision anyone makes. Second, some rules do not affect people’s behavior because they are not followed. Third, and this is my point today: even when the behavior does match the rule, it is probably caused by something other than the belief.

Even if some people say that they believe that God doesn’t want them to eat pork, and if those same people indeed do not eat pork, it still could be—in fact, it’s very likely—that this belief is not the reason that they don’t eat pork. Therefore, it’s a type B belief: the kind that doesn’t affect behavior. The kind that’s not really a belief at all.

If you doubt my characterization of President Bush as “delighted” by the events of September 11th, 2001, take a look at the eight-minute speech he delivered three days later as part of a two-hour National Prayer Service. For those eight minutes he looks like the happiest man in the world. He’s practically giggling with delight.

[This is an updated version of a post I made five years ago.]

Ten years ago a terrorist organization executed an attack inside the United States. They sent hijacked airliners crashing into two colossal office towers in downtown Manhattan; another plane hit the Pentagon building; a fourth, possibly on its way to Washington, D.C., crashed in Pennsylvania. Everyone on all four planes died (including, of course, the hijackers). The World Trade Center towers were utterly destroyed. The Pentagon was damaged. About three thousand people died.

The contemptible men who planned this attack wanted to be seen as having injured, not “just” thousands of people, not “just” several extremely expensive buildings, but the United States of America, or even Western civilization. Of course such megalomania is ludicrous. Oh, it was a tragedy. Absolutely. But you cannot bring down these United States with such a feeble gesture. This is a country of 300 million people. We have by a huge margin the world’s most robust economy and the world’s most powerful military. Considered as an act of war, 9/11 was not significant.

We could have said to those self-styled revolutionaries: Fuck you. You could do a thing like that every month and we would still survive it. You cannot really hurt us. But you have made us angry, and we are going to find you, and we are going to wipe you out. A few years from now, no one will even remember your names.

That’s what we should have said. It would have been proud and brave. And it would have limited the hurt. It would have made 9/11 a small wound, painful but small, like the welt from a rubber bullet, that stings like a motherfucker but will certainly heal by itself.

But that did not happen. Luckily for the terrorists, the 9/11 attacks became an excuse for our elected leaders to impose their private agenda on this country. The Bush administration moved quickly and delightedly to turn our fear to their advantage. And thus the 9/11 attacks became, at the hands of those filthy, despicable traitors, a devastating attack on this country.

The terrorists wanted to hurt us deeply and permanently. Their plan should not have worked, would not have harmed us terribly, except that 9/11 was just what the Republicans needed for their own plans. Bush wanted the same thing bin Laden did: a world ruled by the wealthiest, the most violent, the most religious, the biggest liars. Bush’s wars in Iraq and Afghanistan have killed more than twice as many Americans as did the 9/11 attacks – not to mention hundreds of thousands of Iraqis and Afghans. They also cost us over three trillion dollars, which crippled the economy, causing even more death and suffering.

President Obama, seemingly a very different kind of person, could have ended these wars. Yet, he chose to prolong them.

There is nothing to celebrate here, nothing to be proud of, nothing to commemorate. The sad, ugly story should have been over in a few years – but there is reason to fear that 9/11 was a tipping point from which we may never recover. Now we have a Department of Homeland Security (just as the Nazis did), and no politician dares vote to dissolve it. Now we have indefinite detention, in hideous prisons where they hold innocent people for no reason and torture them just for fun, and the president says he will close them, and he does not close them, and no one is even surprised. This is the legacy of 9/11, which for this immense and powerful country should have been nothing but a sting, but instead is crushing its very soul, because that’s what the men who hold the power want.

So you’re traveling, and in your motel room you discover a hard-cover Bible, deposited there by the obnoxious “Gideons”. Presumably the Gideons don’t go room to room putting Bibles in drawers. I reckon they go to the motel manager and make him an offer he can’t refuse.

Anyway, what should a weary traveler do with that darned thing? You can’t just leave it there like a ticking bomb waiting to blow away innocent people’s rational minds. A couple of times I did this: tore the covers off and threw it in the trash. But then I thought: Some poor cleaner is going to see this and have a heart attack.

Now I just put it in my suitcase. When I get home, it goes in the recycling bin. I’m all about saving the planet, you know.

In Darwin’s Cathedral: Evolution, Religion, and the Nature of Society (2002), David Sloan Wilson presents a case for the utility of religion. He claims that religious beliefs and practices arose and are maintained in human societies because such beliefs and practices are adaptive.

Something as elaborate—as time-, energy-, and thought-consuming—as religion would not exist if it didn’t have secular utility. Religions exist primarily for people to achieve together what they cannot achieve alone. The mechanisms that enable religious groups to function as adaptive units include the very beliefs and practices that make religion appear enigmatic to so many people who stand outside them. (p. 159–160)

Wilson’s argument depends on a controversial version of Darwinian natural selection, operating at the level of groups. He calls it multilevel selection, and quotes Darwin himself to vouch for its applicability to human cultural practices.

It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an advancement in the standard of morality and an increase in the number of well-endowed men will certainly give an immense advantage to one tribe over another. There can be no doubt that a tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to give aid to each other and to sacrifice themselves for the common good, would be victorious over most other tribes; and this would be natural selection. At all times throughout the world tribes have supplanted other tribes; and as morality is one element in their success, the standard of morality and the number of well-endowed men will thus everywhere tend to rise and increase.

Thus did Darwin speculate, in The Descent of Man (1871), that natural selection, which generally operates through the differential survival of individuals, could operate on the level of groups, with fitter groups out-competing the others. And in the same, widely quoted paragraph, he reckoned that one way for a group to boost its fitness is for its members to be good. Tribes with a “high standard of morality” will be “victorious” over other tribes and therefore “supplant” them. This implies that the proportion of moral to immoral tribes in the world will increase over time, because moral tribes have superior evolutionary fitness.

It’s an appealing idea, but is this prediction true? Do the good guys tend to out-compete the bad guys? Has the average standard of morality in the world’s “tribes” risen over the long term? To test Darwin’s model we would have to understand, even quantify, what he intends by the word morality. On careful reading, the paragraph seems much too vague for this. It runs together altruism (giving aid to others) and self-sacrifice (“for the common good”), which are very different impulses. It seeks the source of these noble impulses in “patriotism, fidelity, obedience, courage, and sympathy”: a motley collection of other noble impulses. (Are they all good? Which of them leads to which? Do we have any idea?) It speculates that the combination of those two (or seven) traits will cause the tribe to be “victorious” over other tribes. Is it really true that more “moral” communities tend to wipe out the less “moral” ones? Why? What is the mechanism? And what’s so moral about slaughtering your neighbors?

This all seems especially weak in the light of the theoretical work done a hundred years later by George C. Williams and John Maynard Smith, who showed mathematically that group selection can only work under implausibly constrained conditions.
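
To get a feel for why the conditions are so constrained, here is a minimal toy simulation, using parameter values I have made up for illustration; it is not Maynard Smith’s actual haystack model or Williams’s argument, just a sketch of the underlying tension. Groups are founded at random from a common pool, “altruists” raise every group-mate’s fitness at a cost to themselves, and the groups are re-mixed each generation.

```python
# A toy group-selection sketch (illustrative parameters of my own choosing,
# not Maynard Smith's equations). Altruists boost their whole group's output
# but lose out to free-riders within the group unless the benefit they
# confer is implausibly large.
import numpy as np

rng = np.random.default_rng(1)

def next_altruist_frequency(p, n_groups=1000, group_size=10, b=3.0, c=1.0):
    """One generation: found groups from a common pool, select, re-pool."""
    # Number of altruists in each newly founded group.
    k = rng.binomial(group_size, p, size=n_groups)
    # Every group member shares the benefit b*k/group_size; only altruists pay c.
    altruist_fitness = 1.0 + b * k / group_size - c
    defector_fitness = 1.0 + b * k / group_size
    # Offspring contributed back to the common pool.
    altruist_offspring = (k * altruist_fitness).sum()
    defector_offspring = ((group_size - k) * defector_fitness).sum()
    return altruist_offspring / (altruist_offspring + defector_offspring)

p = 0.5
for generation in range(50):
    p = next_altruist_frequency(p)
print(f"altruist frequency after 50 generations: {p:.4f}")
# With groups re-formed at random every generation, altruism spreads only
# when b/group_size > c, i.e. when the "altruistic" act pays for itself.
# With b=3, c=1, group_size=10, the altruists are driven toward extinction.
```

Under these assumptions the altruists vanish. To keep them you have to stack the deck, for example by founding the groups from close kin, which is exactly the kind of restrictive condition Williams and Maynard Smith identified.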

Also note that Darwin’s account does not mention religion. Darwin himself was an agnostic, and certainly did not take for granted the idea that religion is, in general, beneficial. By quoting this passage in the context of an argument for the utility of religion, Wilson is implicitly claiming that if we replaced Darwin’s phrase a high standard of morality with the word religion, the passage would still have the meaning that Darwin intended. Darwin’s model—actually, it’s just a thought experiment, not a model—predicts a positive role for religion only if religion is responsible for the winning groups’ high standard of morality—that is, only if religion makes people good. To assume that it does is to assume what Wilson wants to convince us of. Thus, his argument is circular.

This isn’t the only problem with Wilson’s argument. There are lots more.

First. That a process is natural does not make it a good idea. Maybe religion does help some tribes wipe out other tribes. Is that good? Is that something we want? Only if we have an independent reason for preferring the religious tribes over the secular-humanist tribes. (I wonder what David Sloan Wilson’s independent reason is.)

Second. Even if communities with lots of morality (or religion or parochial altruism or whatever) tend to defeat communities with less of those things, this does not tell us whether the people in the winning communities are happy. It could be that when it comes to warfare the most effective organization is a totalitarian misery-state where only the people who are not in the army are happy, and only twelve guys are not in the army. There is a difference between a moral society and a happy society, especially if morality is taken to mean following the rules, as it is in so many places—especially the religious ones.

Third. In the most-religious communities in this world, one’s participation is not voluntary; it is required on penalty of expulsion or death. Wilson assumes a quasi-economic model in which people are free to choose their affiliations, but religion is in direct conflict with such freedom. The more powerful the religion, the less choice its “adherents” have. If such conditions do not violate Wilson’s assumptions, do we even care whether his model is predictive?

Finally, the model does not distinguish between “religion” and any other kind of strongly normative social structure based on persuasive falsehoods. How does religion come into it? Where does the religion part of his hypothesis come from? I will tell you. First, through the assumption mentioned a moment ago: that people are free to choose; therefore, they choose the religion that most benefits them. (But when religion is in the picture, they are not free to choose.) Second, through the assumption that “belief in God” is probably, in general, a good thing; the assumption that religion fosters both social cohesion and positive morality. This is circular logic, assuming what was supposed to be proved.

The brutal fact, D.S. Wilson’s own “problem of evil” if you will, is that what we see in vivo is nothing like this. We don’t see religion bringing out the best in everyone, or drawing people together in joyful brotherhood. What we see is amoral, charismatic leaders who leverage specialized memes and raw violence to control large populations for selfish reasons. Such leaders benefit from religion; no one else does.

Seemingly unaware of religion’s well-known (and not yet ended!) history of violence and injustice, within groups as well as between them, David Sloan Wilson has carefully built a case for the idea that religion is a Good Thing; that it is Good because it brings folks together; and that it brings them together because it is Good. I’m sure his funders at the Templeton Foundation are delighted.


Last year I heard some atheists say that Christians are more charitable than secularists. My gut told me that this could not be accurate, so I investigated. This investigation resulted in two blog posts: The Myth of Christian Charity, part 1 and part 2. After these were published, Gregory Paul alerted me to a book published in 2006, Who Really Cares, in which Arthur C. Brooks makes extremely strong claims about religion and generosity. Late in 2010 there appeared another book, Robert Putnam and David Campbell’s American Grace, which makes similar claims. I decided that, since the findings in these high-profile books were supposedly based on the statistical analysis of large-scale survey data—that is, they looked like science—they should be rebutted (if they are false) in the scientific literature. I did a ton of research, verified that they are false, and wrote a paper, which is now under review by a scientific journal. While we wait to hear about the paper, here is a layperson’s summary.

***

Introduction

Religious representatives have always claimed that religion is a good thing, and that its many benefits include an improvement in morality. Religious people, they say, are kinder than the unchurched. Repetitions of this claim have embedded the phrase “Christian charity” in our language.

In recent years, professional scholars have reported finding empirical support for this traditional claim. Most prominent among these are Arthur C. Brooks (Who Really Cares? Who Gives, Who Doesn’t, and Why It Matters, 2006), and Robert Putnam and David Campbell (American Grace: How Religion Divides and Unites Us, 2010). In this post I will call these three men “the traditionalists.”

“When it comes to charity,” says Brooks, “America is two nations—one charitable, the other uncharitable”; compared to the non-religious, “religious people are, inarguably, more charitable in every measurable way.” (Emphasis in the original.) Putnam and Campbell vigorously agree:

Some Americans are more generous than others. … In particular, religiously observant Americans are more generous with time and treasure than demographically similar secular Americans. … The pattern is so robust that evidence of it can be found in virtually every major national survey of American religious and social behavior. Any way you slice it, religious people are simply more generous.

This would be an astonishing result, a stunning vindication for advocates of religion everywhere, if it were valid. But it is not.

Methodological Challenges

The traditional hypothesis is that religiosity fosters generosity. To support this claim scientifically, we would have to (1) measure many people’s religiosity and generosity, (2) show that, on average, those people who have more of the former also have more of the latter, and (3) show that the former causes the latter. (The claim is that being religious makes people generous, rather than that being generous makes people religious, or that some third factor causes the first two.) The traditionalists fail at all three of these tasks.

The business of sociology depends almost exclusively on surveys. Rather than observing people’s thoughts and feelings—which is impossible—the sociologist surveys them about their thoughts and feelings. Behavior, too, is generally inquired about rather than observed. But there is a problem: survey respondents tend to give answers that are flattering rather than true. This is called social desirability bias. In any community, behaviors considered good will be over-reported, and those considered bad will be under-reported. The problem is especially severe with behaviors to which strong norms are attached. Being generous and being religious are ideal exemplars of this category.

“Generosity can be measured most simply by measuring gifts of time and money,” write Putnam and Campbell. But surveys do not measure such gifts—they measure reports of such gifts. And these reports are anonymous, unverified, and subject to strong social pressures.

For a measure of religiosity, most surveys use frequency of church attendance. That one goes to services regularly is easy to say, hard to verify, and subject to strong community norms. In decades of surveys, 40 percent of Americans have reliably reported going to church pretty much every Sunday. It turns out that about half of them are liars. In the 90s, scientists found ways to count how many people were really attending. The number is much closer to 20 percent than to 40 percent. (See for example C. Kirk Hadaway and Penny Long Marler, “How Many Americans Attend Worship Each Week? An Alternative Approach to Measurement,” Journal for the Scientific Study of Religion 44(3):307-322 [2005].)

The traditionalists cite page after page of statistics showing a strong positive correlation between religiosity and generosity. But this tells us nothing of interest, because both attributes are likely to be over-reported, and in the case of generosity we don’t know by how much. Neither book mentions or addresses this enormous methodological problem.
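
To make the problem concrete, here is a minimal sketch in Python, using invented numbers rather than anything from the surveys in question. It shows how a shared urge to give flattering answers can, by itself, produce a healthy correlation between two self-reported measures whose underlying traits are, by construction, completely unrelated.

```python
# A toy illustration of social desirability bias (all numbers invented).
# Two traits are generated independently; both self-reports are then
# inflated by the same per-respondent "look good" tendency.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

true_religiosity = rng.normal(size=n)   # unobserved; independent by construction
true_generosity = rng.normal(size=n)
desirability = rng.normal(size=n)       # each respondent's urge to look good

reported_religiosity = true_religiosity + desirability + rng.normal(scale=0.5, size=n)
reported_generosity = true_generosity + desirability + rng.normal(scale=0.5, size=n)

print("correlation of true traits: ",
      round(np.corrcoef(true_religiosity, true_generosity)[0, 1], 3))
print("correlation of self-reports:",
      round(np.corrcoef(reported_religiosity, reported_generosity)[0, 1], 3))
# Typical output: the true traits correlate at about 0.0, the self-reports
# at about 0.44, a "strong positive correlation" created entirely by the
# shared desirability component.
```

The point of the sketch is not that the real effect must be zero, only that page after page of correlations between two norm-laden self-reports cannot, on its own, tell us anything about the underlying traits.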

In the measurement of generosity a more technical problem appears. Throughout both these books (and in the sociology-of-religion literature generally), the words generosity and charity are used interchangeably (as synonyms for altruism, benevolence, compassion, and so on). But charity has an additional sense. In the U.S. tax code, and in standard English, a charity is a nonprofit corporation; donations to such organizations are also called charity—a term easily confused with generosity.

Note, however, that generosity is not the same thing as donating to a nonprofit organization. These are different concepts. The first means voluntarily helping others at some cost to oneself. The second means giving money to an organization that qualifies as “not for profit” under the U.S. tax code.

Donations to one’s own church are tax-deductible. But that does not make them charitable, in the older sense of the word. They are membership dues for a social club. They do not benefit the wider community, as would, for example, donations to the Red Cross. They certainly should not be used as a proxy for the noble attribute we call generosity. Yet, this is exactly what Brooks and Putnam-and-Campbell do. In these books, the words charity and generosity are used to mean people’s (self-reported) charitable donations, including money given to their own church. Thus the measurement of generosity, which was already distorted by social desirability bias, is further distorted by a confusion of terminology.

Another technical issue relates to the measurement of religiosity. Is church attendance a good proxy by which to measure how religious people are? Perhaps not, if people report twice as much of it as they should. What else might we use? We could try frequency of prayer, or of Bible study, or how “certain” one is about the existence of God. And all these would be self-reports—but there is a deeper problem here. How could we tell which of these things is more appropriate? In other words, what is religiosity?

Well, it is a matter of opinion. To verify this, notice that for any behavior (or attitude or quality) one party chooses as the epitome of religiosity, another party can say, “But that’s not really being religious,” and name some other behavior (or attitude or quality). There is no independent standard to which such claims can be compared. If a man says, “I am highly religious,” nothing anyone else might say can prove him wrong. Even if they point out that he has previously described himself as an atheist, he can still say, for example, “I attend my wife’s church, and act as a deacon at the Sunday school”—or, “I have a very spiritual attitude toward life.” And no one can prove that these facts are less important to his religiosity than whether he professes to believe in God.

But if there is no evidence that can prove that a person is not religious, this means that we do not have a working definition of religiosity. And this means that the concept of religiosity is not useful in scientific research.

One finding is unimpeachable. People who go to church often give more money to churches than do people who go to church less often. But there is all the difference in the world between this finding and the claim that “Any way you slice it, religious people are simply more generous.”

Behavioral Observations

I mentioned that almost all sociological studies are based on data from surveys. There have been a few studies on religion and behavior where actual behavior was observed. (Brooks and Putnam-and-Campbell mention none of them.)

In the 1973 experiment of John M. Darley and C. Daniel Batson, the subjects (all students at Princeton Theological Seminary) “encountered a shabbily dressed person slumped by the side of the road.” Some were on their way to give a talk on the parable of the Good Samaritan; others had been assigned a topic unrelated to generosity. Those who (presumably) had generosity on their minds were not more likely than the others to stop and offer help to the slumped-over person. Also uncorrelated with their helping responses was their religiosity, as measured by a previous interview.

Some of the subjects were told, “Oh, you’re late. They were expecting you a few minutes ago. We’d better get moving.” This hurry condition had a significant effect on the subjects’ behavior. The authors conclude:

A person not in a hurry may stop and offer help to a person in distress. A person in a hurry is likely to keep going. Ironically, he is likely to keep going even if he is hurrying to speak on the parable of the Good Samaritan, thus inadvertently confirming the point of the parable. (Indeed, on several occasions, a seminary student going to give his talk on the parable of the Good Samaritan literally stepped over the victim as he hurried on his way!)

In a 1975 experiment by Ronald E. Smith, Gregory Wheeler, and Edward Diener, students in a large introductory psychology class were given an opportunity to cheat on a class test. On another, apparently unrelated occasion, they were asked to volunteer to help out some developmentally disabled children. Meanwhile, also seemingly unconnected with these events, a questionnaire was used to measure the strength of their religious affiliations. On the basis of this questionnaire, the subjects were divided into four groups: “Jesus people” (a term current in the 1970s, and not considered derogatory), religious, nonreligious, and atheists. No differences in either the rate of cheating or the rate of volunteering were observed between the four groups.

In another experiment (Lawrence V. Annis, Psychological Reports, 1976) subjects completed a questionnaire designed to measure “degree of commitment to traditional tenets of Western religion,” “location of religious values in the individual’s hierarchy of values,” and “frequency of religious behaviors like church attendance and private prayer.” Later, with no apparent connection to the questionnaire, each subject “happened” to see a woman carrying a ladder. The woman went into another room and closed the door; a few moments later there was an audible crash, designed to sound as if the woman had perhaps climbed the ladder and then fallen off. The subject then either opened the door or did not. None of Annis’s three measures of religious commitment bore any correlation with the likelihood of a subject’s opening the door.

Conclusion

Scientists who have taken the traditional hypothesis seriously and tested it experimentally have come up empty-handed. No evidence has been found for the proposition that religiosity fosters generosity. And that is not surprising, when we consider that no one even knows what religiosity is. People who describe themselves as religious tend also to describe themselves as generous. But this relation does not obtain in their actual behavior toward other human beings.

The rules governing the behavior of religious members don’t come from the “holy scriptures” they read. After all, most of the rules in the scriptures are not followed (don’t wear mixed fabrics; turn the other cheek); while many important norms (monogamy, democracy, kindness) are never mentioned.

The rules of the religion do not come from the scriptures. They come from the priests—a parasitic caste of old men.

“Fundamentalists,” we are told, “believe that every word of scripture is the word of God.” Some observers (Sam Harris was perhaps the first) have said that this makes “fanatical” religious observers more sensible about their faith (in a way) than “moderates.” The moderates pick and choose which parts of the book to take seriously, while the fanatics simply believe every word. They don’t back down from the idea that the entire thing has a divine warrant. Fundamentalism seems to have a moral clarity about it that “reformed” religions lack.

But this turns out to be wrong.

Take a look at what the fundamentalists do. Their actions clearly show that they believe that some parts of holy scripture are the word of God, and that others are not. For example, although scripture clearly says that if someone strikes you on the cheek you should turn the other cheek, fundamentalists generally advocate an immediate escalation in violence. The idea that they believe every word is simply not true. Fundamentalists don’t get their rules from a literal reading of scripture. Like every other kind of religious “believer,” they take their orders from a parasitic caste of old men.