Santa’s magic, children’s wisdom, and inequality (a timeless holiday classic essay!)

This is a preprint version of an essay in Enduring Bonds: Inequality, Marriage, Parenting, and Everything Else That Makes Families Great and Terrible, by Philip N. Cohen. Oakland, California: University of California Press. It is revised from previous essays about Santa. Read this one instead.

Eric Kaplan, channeling Francis Pharcellus Church, writes in favor of Santa Claus in the New York Times. The Church argument, written in 1897, is that (a) you can’t prove there is no Santa, so agnosticism is the strongest possible objection, and (b) Santa enriches our lives and promotes non-rationalized gift-giving, “so we might as well believe in him” (1). It’s a very common argument, identical to one employed against atheists in favor of belief in God, but more charming and whimsical when directed at killjoy Santa-deniers.

All harmless fun and existential comfort-food. But we have two problems that the Santa situation may exacerbate. First is science denial. And second is inequality. So, consider this an attempted joyicide.


From Pew Research comes this Christmas news:

“In total, 65% of U.S. adults believe that all of these aspects of the Christmas story – the virgin birth, the journey of the magi, the angel’s announcement to the shepherds and the manger story – reflect events that actually happened” (2).

On some specific items, the scores were even higher. The poll found 73% of Americans believe that Jesus was born to a virgin mother – a belief even shared by 60% of college graduates. (Among Catholics agreement was 86%, among Evangelical Protestants, 96%.)

So the Santa situation is not an isolated question. We’re talking about a population with a very strong tendency to express literal belief in fantastical accounts. This Christmas story may be the soft leading edge of a more hardcore Christian fundamentalism. For the past 20 years, the General Social Survey (GSS) has found that a third of American adults agree with the statement, “The Bible is the actual word of God and is to be taken literally, word for word,” versus two other options: “The Bible is the inspired word of God but not everything in it should be taken literally, word for word”; and, “The Bible is an ancient book of fables, legends, history, and moral precepts recorded by men.” (The “actual word of God” people are less numerous than the virgin-birth believers, but they’re related.)

Using the GSS, I analyzed people’s social attitudes according to their view of the Bible for the years 2010-2014 (see Figure 9). Controlling for their sex, age, race, education, and the year of the survey, those with more literal interpretations of the Bible are much more likely than the rest of the population to:

  • Oppose marriage rights for homosexuals
  • Agree that “people worry too much about human progress harming the environment”
  • Agree that “It is much better for everyone involved if the man is the achiever outside the home and the woman takes care of the home and family”

In addition, among non-Hispanic Whites, the literal-Bible people are more likely to rank Blacks as more lazy than hardworking, and to believe that Blacks “just don’t have the motivation or willpower to pull themselves up out of poverty” (3).
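A model of this general type can be sketched as follows. This is a hedged illustration, not the author’s actual GSS code: the data are simulated, and every variable name and coefficient here is hypothetical. It simply shows how an attitude can be regressed on Bible view with demographic controls.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated stand-ins for GSS variables (all hypothetical)
literal = rng.integers(0, 2, n).astype(float)  # 1 = "actual word of God"
female = rng.integers(0, 2, n).astype(float)
age_z = rng.normal(0, 1, n)    # standardized age
educ_z = rng.normal(0, 1, n)   # standardized education

# Simulate an attitude that is more common among literal-Bible respondents
true_logit = -1.0 + 1.2 * literal - 0.4 * educ_z
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Logistic regression with controls, fit by Newton-Raphson
X = np.column_stack([np.ones(n), literal, female, age_z, educ_z])
beta = np.zeros(X.shape[1])
for _ in range(20):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                       # score
    hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
    beta = beta + np.linalg.solve(hess, grad)

print(f"literal-Bible coefficient: {beta[1]:.2f}")  # positive = higher odds
```

In the real analysis the controls also include race and survey year, and the coefficients come from actual survey responses rather than a simulation.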

This isn’t the direction I’d like to push our culture. Of course, teaching children to believe in Santa doesn’t necessarily create “actual word of God” fundamentalists – but there’s some relationship there.

Children’s ways of knowing

Margaret Mead in 1932 reported on the notion that young children not only know less, but know differently, than adults, in a way that parallels the evolution of society over time. Children were thought to be “more closely related to the thought of the savage than to the thought of the civilized man,” with animism in “primitive” societies being similar to the spontaneous thought of young children. This goes along with the idea that believing in Santa is indicative of a state of innocence (4). In pursuit of empirical confirmation of the universality of childhood, Mead investigated the Manus tribe in Melanesia, who were pagans, looking for magical thinking in children: “animistic premise, anthropomorphic interpretation and faulty logic.”

Instead, she found “no evidence of spontaneous animistic thought in the uncontrolled sayings or games” over five months of continuous observation of a few dozen children. And while adults in the community attributed mysterious or random events to spirits and ghosts, children never did:

“I found no instance of a child’s personalizing a dog or a fish or a bird, of his personalizing the sun, the moon, the wind or stars. I found no evidence of a child’s attributing chance events, such as the drifting away of a canoe, the loss of an object, an unexplained noise, a sudden gust of wind, a strange deep-sea turtle, a falling seed from a tree, etc., to supernaturalistic causes.”

On the other hand, adults blamed spirits for hurricanes hitting the houses of people who behave badly, believed statues can talk, thought lost objects had been stolen by spirits, and said people who are insane are possessed by spirits. The grown men all thought they had personal ghosts looking out for them – with whom they communicated – but the children dismissed the reality of the ghosts that were assigned to them. They didn’t play ghost games.

Does this mean magical thinking is not inherent to childhood? Mead wrote:

“The Manus child is less spontaneously animistic and less traditionally animistic than is the Manus adult [‘traditionally’ here referring to the adoption of ritual superstitious behavior]. This result is a direct contradiction of findings in our own society, in which the child has been found to be more animistic, in both traditional and spontaneous fashions, than are his elders. When such a reversal is found in two contrasting societies, the explanation must be sought in terms of the culture; a purely psychological explanation is inadequate.”

Maybe people have the natural capacity for both animistic and realistic thinking, and societies differ in which trait they nurture and develop through children’s education and socialization. Mead speculated that the pattern she found had to do with the self-sufficiency required of Manus children. A Manus child must…

“…make correct physical adjustments to his environment, so that his entire attention is focused upon cause and effect relationships, the neglect of which would result in immediate disaster. … Manus children are taught the properties of fire and water, taught to estimate distance, to allow for illusion when objects are seen under water, to allow for obstacles and judge possible clearage for canoes, etc., at the age of two or three.”

Plus, perhaps unlike in industrialized society, their simple technology is understandable to children without the invocation of magic. And she observed that parents didn’t tell the children imaginary stories, myths, and legends.

I should note here that I’m not saying we have to choose between religious fundamentalism and a society without art and literature. The question is about believing things that aren’t true, and can’t be true. I’d like to think we can cultivate imagination without launching people down the path of blind credulity.

Modern credulity

For evidence that culture produces credulity, consider the results of a study that showed most four-year-old children understood that Old Testament stories are not factual. Six-year-olds, however, tended to believe the stories were factual, if their impossible events were attributed to God rather than rewritten in secular terms (e.g., “Matthew and the Green Sea” instead of “Moses and the Red Sea”) (5). Why? Belief in supernatural or superstitious things, contrary to what you might assume, requires a higher level of cognitive sophistication than does disbelief, which is why five-year-olds are more likely to believe in fairies than three-year-olds (6). These studies suggest children have to be taught to believe in magic. (Adults use persuasion to do that, but teaching with rewards – like presents under a tree or money under a pillow – is of course more effective.)

Children can know things either from direct observation or experience, or from being taught. So they can know dinosaurs are real if they believe books and teachers and museums, even if they can’t observe them living (true reality detection). And they can know that Santa Claus and imaginary friends are not real if they believe either authorities or their own senses (true baloney detection). Similarly, children also have two kinds of reality-assessment errors: false positive and false negative. Believing in Santa Claus is a false positive. Refusing to believe in dinosaurs is a false negative. In Figure 10, which I adapted from a paper by Jacqueline Woolley and Maliki Ghossainy, true judgments are in regular type and errors are in italics (7).
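The two-by-two scheme just described can be written out explicitly. A minimal sketch (the function is my own illustration; the cell labels come from the text):

```python
def judge(is_real: bool, believes: bool) -> str:
    """Classify a child's reality judgment, using the labels from the text."""
    if believes and is_real:
        return "true reality detection"   # e.g., believing in dinosaurs
    if believes and not is_real:
        return "false positive"           # e.g., believing in Santa Claus
    if not believes and is_real:
        return "false negative"           # e.g., refusing to believe in dinosaurs
    return "true baloney detection"       # e.g., dismissing imaginary friends

print(judge(is_real=False, believes=True))  # Santa belief: false positive
```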

We know a lot about kids’ credulity (Santa Claus, tooth fairy, etc.). But, Woolley and Ghossainy write, their skepticism has been neglected:

“Development regarding beliefs about reality involves, in addition to decreased reliance on knowledge and experience, increased awareness of one’s own knowledge and its limitations for assessing reality status. This realization that one’s own knowledge is limited gradually inspires a waning reliance on it alone for making reality status decisions and a concomitant increase in the use of a wider range of strategies for assessing reality status, including, for example, seeking more information, assessing contextual cues, and evaluating the quality of the new information” (8).

The “realization that one’s own knowledge is limited” is a vital development, ultimately necessary for being able to tell fact from fiction. But, sadly, it need not lead to real understanding – under some conditions, such as, apparently, the USA today, it often leads instead to reliance on misguided or dishonest authorities who compete with science to fill the void beyond what we can directly observe or deduce. Believing in Santa because we can’t disprove his existence is a developmental dead end, a backward-looking reliance on authority for determining truth. But so is failure to believe in vaccines or evolution or climate change just because we can’t see them working.

We have to learn how to avoid the italics boxes without giving up our love for things imaginary, and that seems impossible without education in both science and art.

Rationalizing gifts

What is the essence of Santa, anyway? In Kaplan’s New York Times essay it’s all about non-rationalized giving, for the sake of giving. The latest craze in Santa culture, however, says otherwise: Elf on the Shelf, which exploded on the Christmas scene after 2008, selling in the millions. In case you’ve missed it, the idea is to put a cute little elf somewhere on a shelf in the house. You tell your kids it’s watching them, and that every night it goes back to the North Pole to report to Santa on their nice/naughty ratio. While the kids are sleeping, you move it to another shelf in the house, and the kids delight in finding it again each morning.

In other words, it’s the latest development of Michel Foucault’s panopticon (9). Consider the Elf on the Shelf aftermarket accessories, like the handy warning labels, which threaten children with “no toys” if they aren’t on their “best behavior” from now on. So is this non-rationalized gift giving? Quite the opposite. In fact, rather than cultivating a whimsical love of magic, this is closer to a dystopian fantasy in which the conjured enforcers of arbitrary moral codes leap out of their fictional realm to impose harsh consequences in the real lives of innocent children.


My developmental question regarding inequality is this: What is the relationship between belief in Santa and social class awareness over the early life course? How long after kids realize there is class inequality do they go on believing in Santa? This is where rationalization meets fantasy. Beyond worrying about how Santa rewards or punishes them individually, if children are to believe that Christmas gifts are doled out according to moral merit, then what are they to make of the obvious fact that rich kids get more than poor kids? Rich or poor, the message seems the same: children deserve what they get.

I can’t demonstrate that believing in Santa causes children to believe that economic inequality is justified by character differences between social classes. Or that Santa belief undermines future openness to science and logic. But those are hypotheses. Between the anti-science epidemic and the pervasive assumption that poor people deserve what they get, this whole Santa enterprise seems risky. Would it be so bad, so destructive to the wonder that is childhood, if instead of attributing gifts to supernatural beings we instead told children that we just buy them gifts because we love them unconditionally and want them — and all other children — to be happy?


1. Kaplan, Eric. 2014. “Should We Believe in Santa Claus?” New York Times Opinionator, December 20.

2. Pew Research Center. 2014. “Most Say Religious Holiday Displays on Public Property Are OK.” Religion & Public Life Project, December 15.

3. The GSS asked if “people in the group [African Americans] tend to be hard-working or if they tend to be lazy,” on a scale from 1 (hardworking) to 7 (lazy). I coded them as favoring lazy if they gave scores of 5 or above. The motivation question was a yes-or-no question: “On the average African-Americans have worse jobs, income, and housing than white people. Do you think these differences are because most African-Americans just don’t have the motivation or willpower to pull themselves up out of poverty?”

4. Mead, Margaret. 1932. “An Investigation of the Thought of Primitive Children, with Special Reference to Animism.” Journal of the Royal Anthropological Institute of Great Britain and Ireland 62: 173–90.

5. Vaden, Victoria Cox, and Jacqueline D. Woolley. 2011. “Does God Make It Real? Children’s Belief in Religious Stories from the Judeo-Christian Tradition.” Child Development 82 (4): 1120–35.

6. Woolley, Jacqueline D., Elizabeth A. Boerger, and Arthur B. Markman. 2004. “A Visit from the Candy Witch: Factors Influencing Young Children’s Belief in a Novel Fantastical Being.” Developmental Science 7 (4): 456–68.

7. Woolley, Jacqueline D., and Maliki Ghossainy. 2013. “Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics.” Child Development 84 (5): 1496–1510.

8. Woolley, Jacqueline D., and Maliki Ghossainy. 2013. “Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics.” Child Development 84 (5): 1496–1510.

9. Pinto, Laura. 2016. “Elf et Michelf.” YouTube.

Against the generations, with video

I had the opportunity to make a presentation at the National Academies to the “Committee on the Consideration of Generational Issues in Workforce Management and Employment Practices.” If you’ve followed my posts about the “generation” terms and their use in the public sphere you understand how happy this made me.

The committee is considering a wide array of issues related to the changing workforce — under a contract from the Army — and I used the time to address the uses and misuses of cohort concepts and analysis in analyzing social change.

In the introduction, I said generational labels, e.g., “Millennials”:

encourage what’s bad about social science. It drives people toward broad generalizations, stereotyping, click bait, character judgment, and echo chamber thinking. … When we give them names and characters we start imposing qualities onto populations with absolutely no basis, or worse, on the basis of stereotyping, and then it becomes just a snowball of clickbait confirmation bias. … And no one’s really assessing whether these categories are doing us any good, but everyone’s getting a lot of clicks.

The slides I used are here in PDF. The whole presentation was captured on video, including the Q&A.

From my answer to the last question:

Cohort analysis is really important. And the life course perspective, especially on demographic things, has been very important. And as we look at changes over time in the society and the culture, things like how many times you change jobs, did you have health insurance at a certain point in your life, how crowded were your schools, what was the racial composition of your neighborhood or school when you were younger — we want to think about the shadow of these events across people’s lives and at a cultural level, not just an individual level. So it absolutely is important. … That’s a powerful way of thinking and a good opportunity to apply social science and learn from it. So I don’t want to discourage cohort thinking at all. I just want to improve it… Nothing I said should be taken to be critical of the idea of using cohorts and life course analysis in general at all.

You know, this is not my most important work. We have bigger problems in society. But understanding demographic change, how it relates to inequality, and communicating that in ways that allow us to make smarter decisions about it is my most important work. That’s why I consider this to be part of it.

Intermarriage rates relative to diversity

Addendum: Metro-area analysis added at the end.

The Pew Research Center has a new report out on race/ethnic intermarriage, which I recommend, by Gretchen Livingston and Anna Brown. This is mostly a methodological note, which also nods at some other issues.

How do you judge the amount of intermarriage? For example, in the U.S., smaller groups — Asians and American Indians — marry exogamously at higher rates. Is that because they have fewer same-race people to choose from? Or is it because Whites shun them less than they shun Blacks, who are also a larger group? To answer this, you can look at intermarriage rates relative to group size in various ways.

The Pew report gives some detail about different groups marrying each other, but the topline number is the total intermarriage rate:

In 2015, 17% of all U.S. newlyweds had a spouse of a different race or ethnicity, marking more than a fivefold increase since 1967, when 3% of newlyweds were intermarried, according to a new Pew Research Center analysis of U.S. Census Bureau data.

Here’s one way to assess that topline number, which I’ll do by state just to illustrate the variation in the U.S. (and then I repeat this by metro area below, by popular request).*

Using the American Community Survey, I identified people who married within the previous 12 months, whom I’ll call newlyweds. I use the 2011-2015 combined data file to increase the sample size in small states. I define intermarriage a little differently than Pew does (for convenience, not because it’s better). I call a couple intermarried if they don’t match each other in a five-category scheme: White, Black, Asian/Pacific Islander, American Indian, Hispanic. I discard those newlyweds (about 2%) who are multiracial, or who specified another race and are not Hispanic. I include only different-sex couples.

The Herfindahl index is used by economists to measure market concentration. It looks like this:

H = \sum_{i=1}^{N} s_i^2

where s_i is the market share of firm i in the market, and N is the number of firms. It’s the sum of the squared proportions held by each firm (or race/ethnicity). The higher the score, the greater the concentration. In race/ethnic terms, if you subtract the Herfindahl index from 1, you get the probability that two randomly selected people are in a different race/ethnic group, which I call diversity.
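As a quick sketch, here are the index and the diversity measure in code (the group shares below are invented for illustration):

```python
def diversity(shares):
    """1 minus the Herfindahl index: the chance that two randomly chosen
    people belong to different race/ethnic groups."""
    return 1 - sum(s ** 2 for s in shares)

# Invented newlywed group shares (White, Black, Asian/PI, American Indian, Hispanic)
shares = [0.90, 0.04, 0.03, 0.01, 0.02]
print(round(diversity(shares), 3))  # 0.187
```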

Consider Maine. In my analysis of newlyweds in 2011-2015, 4.55% were intermarried as defined above. The diversity calculation for Maine looks like this (ignore the scale):


So in Maine two newlyweds have a 5.2% chance of being intermarried if you scramble up the marriage applications, compared with 4.6% who are actually intermarried. (A very important decision here is to use the newlywed population to calculate diversity, instead of the single population or the total population; it’s easy to change that.) Taking the ratio of these, I calculate that Maine is operating at 87% of its intermarriage potential (4.55 / 5.23). Maybe call it a diversity-adjusted intermarriage propensity. So here are all the states (and D.C.), showing diversity and intermarriage. (The diagonal line shows what you’d get if people married at random; the two illegible clusters are DC+NY and WA+KS; click to enlarge.)
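Using the Maine numbers from the text, the diversity-adjusted propensity is just the ratio of observed to expected intermarriage:

```python
# Numbers from the text: 4.55% of Maine newlyweds are intermarried, versus a
# 5.23% chance under random matching (1 minus the Herfindahl index of
# newlywed group shares).
observed = 4.55
expected = 5.23

propensity = observed / expected
print(f"diversity-adjusted intermarriage propensity: {propensity:.2f}")  # 0.87
```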

State intermarriage

How far each state sits off the line is the diversity-adjusted intermarriage propensity (intermarriage divided by diversity). Here it is in map form (using maptile):


And here are the same calculations for the top 50 metro areas, chosen by the sample size of newlyweds; the smallest is Tucson, with a sample of 478. First, the figure (click to enlarge):

Metro-area intermarriage

And here’s the list of metro areas, sorted by diversity-adjusted intermarriage propensity:

Diversity-adjusted intermarriage propensity
Birmingham-Hoover, AL .083
Memphis, TN-MS-AR .127
Richmond, VA .133
Atlanta-Sandy Springs-Roswell, GA .147
Detroit-Warren-Dearborn, MI .155
Philadelphia-Camden-Wilmington, PA-NJ-D .157
Louisville/Jefferson County, KY-IN .170
Columbus, OH .188
Baltimore-Columbia-Towson, MD .197
St. Louis, MO-IL .204
Nashville-Davidson–Murfreesboro–Frank .206
Cleveland-Elyria, OH .213
Pittsburgh, PA .215
Dallas-Fort Worth-Arlington, TX .219
New York-Newark-Jersey City, NY-NJ-PA .220
Virginia Beach-Norfolk-Newport News, VA .224
Washington-Arlington-Alexandria, DC-VA- .224
New Orleans-Metairie, LA .229
Jacksonville, FL .234
Houston-The Woodlands-Sugar Land, TX .235
Los Angeles-Long Beach-Anaheim, CA .239
Indianapolis-Carmel-Anderson, IN .246
Chicago-Naperville-Elgin, IL-IN-WI .249
Charlotte-Concord-Gastonia, NC-SC .253
Raleigh, NC .264
Cincinnati, OH-KY-IN .266
Providence-Warwick, RI-MA .278
Milwaukee-Waukesha-West Allis, WI .284
Tampa-St. Petersburg-Clearwater, FL .286
San Francisco-Oakland-Hayward, CA .287
Orlando-Kissimmee-Sanford, FL .295
Boston-Cambridge-Newton, MA-NH .305
Buffalo-Cheektowaga-Niagara Falls, NY .305
Riverside-San Bernardino-Ontario, CA .311
Miami-Fort Lauderdale-West Palm Beach, .312
San Jose-Sunnyvale-Santa Clara, CA .316
Austin-Round Rock, TX .318
Kansas City, MO-KS .342
San Diego-Carlsbad, CA .343
Sacramento–Roseville–Arden-Arcade, CA .345
Minneapolis-St. Paul-Bloomington, MN-WI .345
Seattle-Tacoma-Bellevue, WA .346
Phoenix-Mesa-Scottsdale, AZ .362
Tucson, AZ .363
Portland-Vancouver-Hillsboro, OR-WA .378
San Antonio-New Braunfels, TX .388
Denver-Aurora-Lakewood, CO .396
Las Vegas-Henderson-Paradise, NV .406
Provo-Orem, UT .421
Salt Lake City, UT .473

At a glance, there are no big surprises compared to the state list. Feel free to draw your own conclusions in the comments.

* I put the data, codebook, code, and spreadsheet files on the Open Science Framework here, for both states and metro areas.

Now-you-know data graphic series

As I go about my day, revising my textbook, arguing with Trump supporters online, and looking at data, I keep an eye out for easily-told data short stories. I’ve been putting them on Twitter under the label Now You Know, and people seem to appreciate it, so here are some of them. Happy to discuss implications or data issues in the comments.

1. The percentage of women with a child under age 1 who are employed rose rapidly until the late 1990s and then stalled out. The difference between these two lines is the percentage of such women who have a job but were not at work the week of the survey, which may mean they are on leave. That gap is also not growing much anymore, which might or might not be good.

2. In the long run both the dramatic rise and complete stall of women’s employment rates are striking. I’m not as agitated about the decline in employment rates for men as some are, but it’s there, too.

3. What looked in 2007 like a big shift among mothers away from paid work as an ideal — greater desire for part-time work among employed mothers, more desire for no work among at-home mothers — hasn’t held up. From a repeated Pew survey. Maybe people have looked at this with other sources, too, so we can tell whether these are sample fluctuations or more durable swings.

4. Over age 50 or so divorce is dominated by people who’ve been married more than once, especially in the range 65-74 — Baby Boomers, mostly — where 60% of divorcers have been married more than once.


5. People with higher levels of education receive more of the child support they are supposed to get.


Fertility trends and the myth of Millennials

The other day I showed trends in employment and marriage rates, and made the argument that the generational term “Millennial” and others are not useful: they are imposed before analyzing data and then trends are shoe-horned into the categories. When you look closely you see that the delineation of “generations” is arbitrary and usually wrong.

Here’s another example: fertility patterns. By the definition of “Millennial” used by Pew and others, the generation is supposed to have begun with those born after 1980. When you look at birth rates, however, you see a dramatic disruption within that group, possibly triggered by the timing of the 2009 recession in their formative years.

I do this by using the American Community Survey, conducted annually from 2001 to 2015, which asks women if they have had a birth in the previous year. The samples are very large, with all the data points shown including at least 8,000 women and most including more than 60,000.
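The computation can be sketched with a toy table shaped like the ACS extract. The records below are invented and tiny; the real analysis pools thousands of women per data point:

```python
import pandas as pd

# A toy table shaped like the ACS extract: survey year, woman's age, and
# whether she reports a birth in the previous year
df = pd.DataFrame({
    "year": [2001, 2001, 2009, 2009, 2015, 2015],
    "age":  [25,   25,   25,   30,   25,   30],
    "birth_last_year": [1, 0, 0, 1, 1, 0],
})

df["birth_cohort"] = df["year"] - df["age"]
df["cohort_group"] = (df["birth_cohort"] // 5) * 5  # five-year cohorts, e.g. 1980-84

# Birth rate = share of women at each (cohort, age) reporting a recent birth
rates = df.groupby(["cohort_group", "age"])["birth_last_year"].mean()
print(rates)
```

Plotting those rates by age, one line per cohort group, reproduces the structure of the figure described below.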

The figure below shows the birth rates by age for women across six five-year birth cohorts. The dots on each line mark the age at which the midpoint of each cohort reached 2009. The oldest three groups are supposed to be “Generation X.” The three youngest groups shown in yellow, blue, and green — those born 1980-84, 1985-89, and 1990-94 — are all Millennials according to the common myth. But look how their experience differs!


Most of the fertility effect of the recession was felt at young ages, as women postponed births. The oldest Millennial group was in their late twenties when the recession hit, and it appears their fertility was not dramatically affected. The 1985-89 group clearly took a big hit before rebounding. And the youngest group started their childbearing years under the burden of the economic crisis, and if that curve at 25 holds they will not recover. Within this arbitrarily-constructed “generation” is a great divergence of experience driven by the timing of the great recession within their early childbearing years.

You could collapse these six arbitrary birth cohorts into two arbitrary “generations,” and you would see some of the difference I describe. I did that for you in the next figure, which is made from the same data. And you could make up some story about the character and personality of Millennials versus previous generations to fit that data, but you would be losing a lot of information to do that.


Of course, any categories reduce information — even single years of age — so that’s OK. The problem is when you treat the boundaries between categories as meaningful before you look at the data — in the absence of evidence that they are real with regard to the question at hand.

Two examples of why “Millennials” is wrong

When you make up “generation” labels for arbitrary groups based on year of birth, and start attributing personality traits, behaviors, and experiences to them as if they are an actual group, you add more noise than light to our understanding of social trends.

According to generation-guru Pew Research, “millennials” are born during the years 1981-1997. A Pew essay explaining the generations carefully notes that the divisions are arbitrary, and then proceeds to analyze data according to those divisions as if they are already real. (In fact, in the one place the essay talks about differences within generations, with regard to political attitudes, it’s clear that there is no political consistency within them, as they have to differentiate between “early” and “late” members of each “generation.”)

Amazingly, despite countless media reports on these “generations,” especially millennials, in a 2015 Pew survey only 40% of people who are supposed to be millennials could pick the name out of a lineup — that is, asked, “These are some commonly used names for generations. Which of these, if any, do you consider yourself to be?”, and then given the generation names (silent, baby boom, X, millennial), 40% of people born after 1980 picked “millennial.”

“What do they know?” you’re saying. “Millennials.”

Two examples

The generational labels we’re currently saddled with create false divisions between groups that aren’t really groups, and then obscure important variation within the groups that are arbitrarily lumped together. Here is the first example: the employment experience of young men around the 2009 recession.

In this figure, I’ve taken three birth cohorts: men born four years apart in 1983, 1987, and 1991 — all “millennials” by the Pew definition. Using data from the 2001-2015 American Community Surveys, the figure shows their employment rates by age, with 2009 marked for each cohort, coming at ages 26, 22, and 18 respectively.


Each group took a big hit, but their recoveries look pretty different, with the oldest (1983) cohort not recovered as of 2015, while the youngest 1991 group bounced up to surpass the employment rates of the 1987s by age 24. Timing matters. I reckon the year they hit that great recession matters more in their lives than the arbitrary lumping of them all together compared with some other older “generations.”

Next, marriage rates. Here I use the Current Population Survey and analyze the percentage of young adults married by year of birth for people ages 18-29. This is from a regression that controls for year of age and sex, so it can be interpreted as marriage rates for young adults (click to enlarge).


From the beginning of the Baby Boom generation to those born through 1987 (who turned 29 in 2016, the last year of CPS data), the marriage rate fell from 57% to 21%, or 36 percentage points. Most of that change, 22 points, occurred within the Baby Boom. The marriage experience of the “early” and “late” Baby Boomers is not comparable at all. The subsequent “generations” are also marked by continuously falling marriage rates, with no clear demarcation between the groups. (There is probably some fancy math someone could do to confirm that, with regard to marriage experience, group membership by these arbitrary criteria doesn’t tell you more than any other arbitrary grouping would.)
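A regression of this general type can be sketched on simulated data. This is not the author’s CPS code: the linear probability model below simply illustrates how birth-year effects, net of age and sex, can be read as adjusted marriage rates. All values here are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 20000

# Simulated CPS-like records for 18-29-year-olds (everything here is invented)
df = pd.DataFrame({
    "birth_year": rng.integers(1950, 1988, n),  # cohorts 1950-1987
    "age": rng.integers(18, 30, n),
    "female": rng.integers(0, 2, n),
})
# Build in a marriage rate that falls across cohorts and rises with age
p = 0.57 - 0.009 * (df["birth_year"] - 1950) + 0.02 * (df["age"] - 18)
df["married"] = (rng.random(n) < p.clip(0, 1)).astype(float)

# Linear probability model: married ~ birth-year dummies + age + sex.
# Intercept + each birth-year coefficient = that cohort's adjusted rate
# (at the mean age, for the omitted sex category).
X = pd.get_dummies(df["birth_year"], prefix="by", drop_first=True).astype(float)
X["age"] = df["age"] - df["age"].mean()
X["female"] = df["female"]
X.insert(0, "const", 1.0)
beta, *_ = np.linalg.lstsq(X.to_numpy(), df["married"].to_numpy(), rcond=None)

adj_1950 = beta[0]                                    # omitted baseline cohort
adj_1987 = beta[0] + beta[X.columns.get_loc("by_1987")]
print(f"adjusted rate, 1950 cohort: {adj_1950:.2f}; 1987 cohort: {adj_1987:.2f}")
```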

Anyway, there are lots of fascinating and important ways that birth cohort — or other cohort identifiers — matter in people’s lives. And we could learn more about them if we looked at the data before imposing the categories.

Couple fact patterns about sexuality and attitudes

Working on the second edition of my book, The Family, involves updating facts as well as rethinking their presentation, and the choice of what to include. The only way I can do that is by making figures to look at myself. Here are some things I’ve worked up recently; they might not end up in the book, but I think they’re useful anyway.

1. Attitudes on sexuality and related family matters continue to grow more accepting or tolerant, but acceptance of homosexuality is growing faster than the others – at least those measured in the repeated Gallup surveys:


2. Not surprisingly, there is wide divergence in the acceptance of homosexuality across religious groups. This uses the Pew Religious Landscape Study, which includes breakouts for atheists, agnostics, and two kinds of “nones,” or unaffiliated people — those for whom religion is important and those for whom it’s not:


3. Updated same-sex behavior and attraction figures from the National Survey of Family Growth. For some reason the NSFG reports don’t include the rates of same-sex partner behavior in the previous 12 months for women anymore, so I analyzed the data myself, and found a much lower rate of last-year behavior among women than they reported before (which, when I think about it, was unreasonably high – almost as high as the ever-had-same-sex-partner rates for women). Anyway, here it is:


FYI, people who follow me on Twitter get some of this stuff quicker; people who follow on Instagram get it later or not at all.

On Asian-American earnings

In a previous post I showed that generalizations about Asian-American incomes often are misleading, as some groups have above-average incomes and others below-average incomes (the same goes for divorce rates), and that inequality within Asian-American groups is large as well. In this post I briefly expand that to show breakdowns in individual earnings by gender and national-origin group.

The point is basically the same: the “Asian” category is usually not useful for economic statistics, and should usually be replaced with data on specific national-origin groups when possible.

Today’s news

What’s new is a Pew report by Eileen Patten showing trends in race and gender wage gaps. The report isn’t focused on Asian-American earnings, but they stand out in their charts. This led Charles Murray, who is fixated on what he believes is the genetic origin of Asian cognitive superiority, to tweet sarcastically, “Oppose Asian male privilege!” Here is one of Pew’s charts:


The figure, using the Current Population Survey (CPS), shows Asian men earning about 14.5% more per hour than White men, and Asian women earning 11% more than White women. This is not wrong, exactly, but it’s not good information either, as I’ll argue below.

First a note on data

The CPS data is better for some labor force questions (including wages) than the American Community Survey, which is a much larger survey. However, the CPS sample is too small to get into detail on Asian subgroups (notice the Pew report doesn’t mention American Indians, an even smaller group). To do that I will switch to the ACS, which is better for race/ethnic detail.

As a reminder, this is the “race” question on the 2014 American Community Survey, which I use for this post:


There is no “Asian” or “Pacific Islander” box to check. So what do you do if you are thinking, “I’m Asian, what do I check?” The question is premised on the assumption that that is not what you’re thinking. Instead, you choose from a list of national origins, which the Census Bureau then combines to make “Asian” (the first 7 boxes) and “Pacific Islander” (the last 3) categories. And you can check as many as you like, which is good because there’s a lot of intermarriage among Asians, and between Asians and other groups (mostly Whites). This is a lot like the Hispanic origin question, which also lists national origins — except that question is prefaced by the unifying phrase, “Is Person 1 of Hispanic, Latino, or Spanish origin?” before listing the options, each beginning with “Yes”, as in “Yes, Cuban.”
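The combining step the Census Bureau performs can be sketched in code. A minimal illustration, with checkbox labels standing in for the actual ACS response categories (the recode logic, not the exact labels, is the point):

```python
# Checkbox labels standing in for the ACS race question (illustrative list;
# the actual form wording and variable names differ)
ASIAN = {"Asian Indian", "Chinese", "Filipino", "Japanese", "Korean",
         "Vietnamese", "Other Asian"}
PACIFIC = {"Native Hawaiian", "Guamanian or Chamorro", "Samoan"}

def umbrella(boxes_checked):
    """Collapse checked national-origin boxes into the combined categories,
    keeping any multiple-race responses (e.g., White + Chinese)."""
    cats = set()
    if boxes_checked & ASIAN:
        cats.add("Asian")
    if boxes_checked & PACIFIC:
        cats.add("Pacific Islander")
    # Pass through boxes that aren't part of either umbrella (e.g., White)
    return cats | (boxes_checked - ASIAN - PACIFIC)

print(sorted(umbrella({"Chinese", "White"})))  # ['Asian', 'White']
```

Because respondents can check multiple boxes, a person can land in both the “Asian” and “White” categories at once, which is how the White+Asian multiple-race group mentioned below arises.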

Although changes have not been announced, it is likely that future questions will combine the race and Hispanic-origin questions, and also preface the Asian categories with the umbrella term. This may mark the progress of getting Asian immigrants to internalize the American racial classification system, so that descendants from groups that in some cases have centuries-old cultural differentiation start to identify and label themselves as from the same racial group (who would have put Pakistanis and Japanese in the same “race” group 100 years ago?). It’s hard to make this progress, naturally, when so many people from these groups are immigrants — in my sample below, for example, 75% of the full-time, year-round workers are foreign-born.


The problem with the earnings chart Pew posted, and which Charles Murray loved, is that it lumps all the different Asian-origin groups together. That is not crazy but it’s not really good. Of course every group has diversity within it, so any category masks differences, but in my opinion this Asian grouping is worse in that regard than most. If someone argued that all these groups see themselves as united under a common identity that would push me in the direction of dropping this complaint. In any event, the diversity is interesting even if you don’t object to the Pew/Census grouping.

Here are two breakouts. The first is immigration. As I noted, 75% of the full-time, year-round workers (excluding self-employed people, like Pew does) with an Asian/Pacific Islander (Asian for short) racial identification are foreign born. That ranges from less than 4% for Hawaiians, to around 20% for the White+Asian multiple-race people, to more than 90% for Asian Indian men. It turns out that the wage advantage is mostly concentrated among these immigrants. Here is a replication of the Pew chart using the ACS data (a little different because I had to use FTFY workers), using the same colors. On the left is their chart, on the right is the same data limited to US-born workers.


Among the US-born workers the Asian male advantage is reduced from 14.5% to 4.2% (the women’s advantage is not much changed; as in Pew’s chart, Hispanics are a mutually exclusive category.) There are some very high-earning Asian immigrants, especially Indians. Here are the breakdowns, by gender, comparing each of the larger Asian-American groups to Whites:
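The immigrant versus US-born comparison amounts to computing each group’s median wage relative to Whites, with and without restricting to the US-born. A toy sketch with made-up wage records (these are not real ACS numbers, just an illustration of the calculation):

```python
from statistics import median

# Made-up wage records (group, foreign_born, hourly wage) standing in for
# ACS full-time, year-round workers; the numbers are illustrative only
workers = [
    ("White", False, 24), ("White", False, 26), ("White", True, 25),
    ("Asian Indian", True, 40), ("Asian Indian", True, 38), ("Asian Indian", False, 27),
    ("Hmong", False, 16), ("Hmong", True, 15), ("Hmong", False, 17),
]

def relative_wage(group, us_born_only=False):
    """Median wage for a group as a ratio to the White median."""
    def med(g):
        return median(w for grp, fb, w in workers
                      if grp == g and not (us_born_only and fb))
    return med(group) / med("White")

# Pooling immigrants with the US-born can tell a different story
print(round(relative_wage("Asian Indian"), 2))                     # 1.52
print(round(relative_wage("Asian Indian", us_born_only=True), 2))  # 1.08
```

In this toy data the pooled Asian Indian advantage shrinks sharply once the comparison is limited to US-born workers, which is the shape of the real result described above.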


Seven groups of men and nine groups of women have hourly earnings higher than Whites’, while nine groups of men and seven groups of women have lower earnings. In fact, among Laotians, Hawaiians, and Hmong, even the men earn less than White women. (Note, in my old post, I showed that Asian household incomes are not as high as they look when they are compared instead with those of their local peers, because they are concentrated in expensive metropolitan markets.)

Sometimes when I have a situation like this I just drop the relatively small, complex group, which leads some people to accuse me of trying to skew results. (For example, I might show a chart that has Blacks in the worst position, even though American Indians have it even worse.)

But generalization has consequences, so we should use it judiciously. In most cases “Asian” doesn’t work well. It may make more sense to group people by regions, such as East-, South-, and Southeast Asia, and/or according to immigrant status.

Old people are getting older and younger

The Pew Research Center recently put out a report on the share of U.S. older women living alone. The main finding they reported was a reversal in the long trend toward old women living alone after 1990. After rising to a peak of 38% in 1990, the share of women age 65+ living alone fell to 32% by 2014. It’s a big turnaround. The report attributes it in part to the rising life expectancy of men, so fewer old women are widowed.


The tricky thing about this is the changing age distribution of the old population (the Pew report breaks the group down into 65-84 versus 85+, but doesn’t dwell on the changing relative size of those two groups). Here’s an additional breakdown, from the same Census data Pew used, showing the percent living alone by age for women:


Two things in this figure: the percent living alone is much lower for the 65-69s, and the decline in living alone is much sharper in the older women.

The age distribution in the 65+ population has changed in two ways: in the long run it’s getting older as life expectancy at old age increases. However, the Baby Boom (born 1946-1964) started hitting age 65 in 2010, resulting in a big wave of 65-69s pouring into the 65+ population. You can see both trends in the following figure, which shows the age distribution of the 65+ women (the lines sum to 100%). The representation of 80+ women has doubled since 1960, showing longer life expectancy, but look at that spike in the 65-69s!


Given this change in the trends, you can see that the decrease in living alone in the 65+ population partly reflects greater representation of young-old women in the population. These women are less likely to live alone because they’re more likely to still be married.
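One way to check how much of the aggregate change reflects the shifting age mix is direct standardization: apply the later age-specific rates to the earlier age distribution and see how much change remains. A sketch with invented rates and shares (illustrative only, not the Census figures):

```python
# Invented age-specific living-alone rates and age shares for 65+ women
# (illustrative only, not the Census figures)
rates_1990  = {"65-69": 0.30, "70-79": 0.40, "80+": 0.50}
rates_2014  = {"65-69": 0.27, "70-79": 0.36, "80+": 0.40}
shares_1990 = {"65-69": 0.30, "70-79": 0.45, "80+": 0.25}
shares_2014 = {"65-69": 0.40, "70-79": 0.35, "80+": 0.25}  # Baby Boom spike

def aggregate(rates, shares):
    return sum(rates[a] * shares[a] for a in rates)

observed = aggregate(rates_2014, shares_2014) - aggregate(rates_1990, shares_1990)
# Counterfactual: 2014 rates weighted by the 1990 age distribution
standardized = aggregate(rates_2014, shares_1990) - aggregate(rates_1990, shares_1990)
composition = observed - standardized
print(round(observed, 3), round(standardized, 3), round(composition, 3))
```

The gap between the observed and standardized changes is the part of the aggregate decline attributable purely to the population getting younger at the bottom of the 65+ range.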

On the other hand, why is there such a steep drop in living alone among 80+ women? Some of this is the decline in widowhood as men live longer. But it’s an uphill climb, because among this group there is no Baby Boom spike of young-olds (yet) — the 80+ population is still just getting older and older. Here’s the age distribution among 80+ women (these sum to 100 again):


You can see the falling share of 80-84s as the population ages. If this group is nevertheless increasingly less likely to live alone because their husbands are living longer, that’s pretty impressive, because the group is aging fast. One boost the not-alones get is that they are increasingly likely to live in extended households — since 1990 there has been a 5 percentage-point increase in the share living in households of at least 3 people, from 13% to 18%. Finally, at this age you also have to look at the share living in nursing homes (some of whom seem to be counted as living alone and some not).

In addition to the interesting gerontological questions this all raises, it’s a good reminder that the Baby Boom can have sudden effects on within-group age distributions (as I discussed previously in this post on changing White mortality patterns). Everyone should check their within-group distributions when assessing trends over time.

Millennial, save thyself

When you see a tweet like this, you have to think, “What could go wrong?”


Ironically, the National Review blog post in question, by Brad Wilcox, was called, “What Could Go Wrong? Millennials are underemployed, unhitched, and unchurched at record rates.” In it he riffs off of the new Pew Research Center report, “Millennials in Adulthood.” His thesis is this:

Millennial ties to the core human institutions that have sustained the American experiment — work, marriage, and civil society — are worryingly weak.

Just a couple of completely wrong things about this. Apart from the marriage issue, about which we’ve long since learned Wilcox does not know what he’s talking about, look at what he says about work:

 In fact, full-time employment for young men remains at or near record lows. This matters because full-time work remains the best way to avoid poverty and to chart a path into the middle class for ordinary Americans. Work also affords most Americans an important sense of dignity and meaning — the psychological boost provided by what American Enterprise Institute president Arthur Brooks calls a sense of “earned success.”

After that big setup to a link to his boss at AEI, Wilcox shows this figure, the source for which is not revealed, but it’s presumably drawn from the Current Population Survey (though I didn’t realize CPS already goes three clicks beyond 2013):


Anyway, the scary line downward there is for 20-24 year-olds. How awful that they are so disconnected from the labor force these days, not developing their sense of “earned success.” I attempted to recreate that trend here, using the IPUMS extractor:

That’s some drop in labor force participation since the peak at 77% in 2001, all the way down to 69% in 2013. So, what are they doing instead? Oh, right:

The percentage of 20-24 year-olds attending school increased from 29% in 1990 to 41% in 2013. Altogether, the percentage in either school or the labor force (and some are doing both) has increased slightly. How bad is that? (I suspect this pattern would hold for the other age groups in Wilcox’s figure as well, but the CPS question on school enrollment was only asked of people under age 25. Note also that the CPS excludes incarcerated people, which includes a lot of young people.)
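The school-or-labor-force point reduces to computing the share doing either, which can rise even when the labor force share alone falls. A toy sketch with hypothetical person records:

```python
# Hypothetical records: (in labor force, enrolled in school) for 20-24 year-olds
people = [
    (True, False),   # working only
    (True, True),    # doing both
    (False, True),   # in school only
    (False, False),  # neither
    (True, False),
]

in_lf = sum(lf for lf, s in people) / len(people)
in_school = sum(s for lf, s in people) / len(people)
either = sum(lf or s for lf, s in people) / len(people)
# "either" can rise even while labor force participation alone falls,
# which is the pattern for 20-24 year-olds described above
print(in_lf, in_school, either)  # 0.6 0.4 0.8
```

Because some people do both, the “either” share is not the sum of the two shares; it has to be computed at the person level, as here.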

So, unless you think education is bad for ties to “core human institutions,” that’s just wrong.

Happy yet?

After marriage, Wilcox moves to civil society, “measured here by religion” (don’t get me started). Obviously, religion is down. And then his conclusion about work, marriage and religion together:

Why does this matter? Historically, these core institutions have furnished meaning, money, and social support to generation after generation of Americans. Even today, data from the 2006–2012 General Social Survey suggest that, taken together, these institutions remain strongly linked to a sense of happiness among today’s Millennials. For instance, 58 percent of Millennial men who were married, employed full-time, and regular religious attendees reported that they are very happy in life; by contrast, only 25 percent of Millennial men who were unmarried, not working full-time, and religiously disengaged reported that they are very happy in life.

What is this, “taken together”? What if I told you that people who are millionaires, love hot dogs, and have blue eyes are much richer than people who are not millionaires, hate hot dogs, and have brown eyes? Would that mean that, “taken together,” these factors “remain strongly linked”?

This is easily tested with the publicly available GSS data. I used Pew’s definition of Millennial (age 18-33 in 2014, so born in the years 1981-1995) and found 676 men in the pooled sample for 2006-2012. There is a strong relationship with “happiness” here, but it is not with all three of these American-dream elements — it’s just with marriage.

I used ordinary least squares regression to predict being “very happy” according to whether the men report attending religious services twice per month or more, being employed full-time, and being married (logistic regression gives the same pattern but is harder to interpret). Then, for the “strongly linked” concept, I created a dummy variable indicating those men who had the Wilcox trifecta — all three good things (there were all of 34 such men in the sample). Wilcox’s claim is that these elements are “strongly linked,” implying that having all three is worth more than the sum of the three separately.

Here are the results:

Predicting “Very Happy” among Millennial men: General Social Survey, 2006-2012 (OLS; N=676)

                              Model 1         Model 2         Model 3
                              Coef   P>|t|    Coef   P>|t|    Coef   P>|t|
Religious service 2x+/month    .07    .08      .02    .61      .03    .46
Employed full-time             .06    .08      .01    .69      .02    .62
Married                        .29   <.001     .28   <.001     .30   <.001
Wilcox trifecta (all three)     —      —        —      —      -.07    .48

However you slice it, married men born between 1981 and 1995 are more likely to say they are “very happy” than those who aren’t married. Cheerful bastards. On the other hand, going to church and having a full-time job aren’t significantly associated with very happiness. And the greater-than-the-sum hypothesis fails.

It’s also the case that having a full-time job, being married, and going to church aren’t highly correlated — especially work and church, which aren’t correlated at all (.001). I don’t think you can say these three elements are “strongly linked” to very happiness, or to each other.
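The test described above, adding a trifecta dummy to an OLS model of being “very happy,” can be sketched like this. The data are simulated (not the GSS), built so that only marriage matters, mirroring the reported pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 676
# Simulated stand-ins for the three predictors (prevalences are guesses)
church = rng.random(n) < 0.25
fulltime = rng.random(n) < 0.60
married = rng.random(n) < 0.30
# "Very happy" depends only on marriage, mirroring the reported pattern
very_happy = (rng.random(n) < 0.20 + 0.29 * married).astype(float)

trifecta = (church & fulltime & married).astype(float)
X = np.column_stack([np.ones(n), church, fulltime, married, trifecta])
beta, *_ = np.linalg.lstsq(X, very_happy, rcond=None)
# beta[3] recovers the marriage effect; beta[4], the "greater than the
# sum" trifecta bonus, comes out near zero in this setup
print(np.round(beta, 2))
```

If the three elements really were “strongly linked,” the trifecta coefficient would be large and positive; a near-zero (or negative) estimate, as in the table above, says having all three adds nothing beyond the separate effects.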

Kids these days

But the details don’t matter when the kids-these-days, moral-sky-is-falling story is so firmly dug in. This is his final point:

Perhaps more worrisome, however, is the erosion of trust documented among the Millennial generation in the new Pew report. Only 19 percent of Millennials say that “most people can be trusted” — a response rate that marks them as much less trusting of their fellow citizens than were earlier generations of Americans, as the figure below shows.

But that’s actually not what the figure shows:


The Gen X folks in the Pew survey are ages 34-49, the Millennials are 18-33, or 16 years younger. So in fact the figure shows that Millennials are almost exactly where Gen X was when they were 18-33, in the mid-1990s — about 20% trusting. No (recent) generational change.

So, back to the Charles Murray tweet. Isn’t it shocking that when someone agrees with him on the conclusions, he thinks they’re brilliant in the analysis?