Pew response and attempted clarification

First, a response from Pew, then a partial data clarification on generations. In response to my Washington Post Op-Ed on generations, "Generation labels mean nothing. It's time to retire them," Kim Parker, the director of social trends research at the Pew Research Center, published a letter that read:

Philip N. Cohen criticized the use of generation labels. Generations are one of many analytical lenses researchers use to understand societal change and differences across groups. While there are limitations to generational analysis, it can be a useful tool for understanding demographic trends and shifting public attitudes. For example, a generational look at public opinion on a wide range of social and political issues shows that cohort differences have widened over time on some issues, which could have important implications for the future of American politics.

In addition, looking at how a new generation of young adults experiences key milestones such as educational attainment, marriage or homeownership, compared with previous generations in their youth, can lend important insights into changes in American society.

To be sure, these labels can be misused and lead to stereotyping, and it’s important to stress and highlight diversity within generations. At Pew Research Center, we consistently endeavor to refine and improve our research methods. Therefore, we are having ongoing conversations around the best way to approach generational research. We look forward to engaging with Mr. Cohen and other scholars as we continue to explore this complex and important issue.

Kim Parker, Washington

I was happy to see this, and look forward to what they come up with. I am also glad to see that there has been no substantial defense of the current "generations" research regime. Some people on social media said they kind of like the categories, but no researcher has said they make sense, or pointed to any research justifying the current categories. As for her point that generations research is useful: we made that point in our open letter, and in my Op-Ed. Cohorts (and, if you want to call a bunch of cohorts a generation, generations) matter a lot, and should be studied. They just shouldn't be studied with fixed categories imposed regardless of the data involved, and given names with stereotyped qualities that are presumed to extend across spheres of social life.

Several people have asked me for suggestions. My basic suggestion is to do like you learned in social science class, and use categories that make sense for a good reason. If you have no reason to use a set of categories, don’t use them. Instead, use an empty measure of time, like years or decades, as a first pass, and look at the data. As I argued here, there is not likely to be a set of birth years that cohere across time and social space into meaningful generational identities.

Data question

In the Op-Ed, I wrote this: “Generation labels, although widely adopted by the public, have no basis in social reality. In fact, in one of Pew’s own surveys, most people did not identify the correct generation for themselves — even when they were shown a list of options.” The link was to this 2015 report titled, “Most Millennials Resist the ‘Millennial’ Label” (which of course confirms a stereotype about this supposed generation). I was looking in particular at this graphic, which I have shown often:

It doesn't exactly show what portion of people "correctly" identify their category, but I eyeballed it and decided that if only 18% of Silents and 40% of Millennials were right, there was no way Gen X and Boomers were bringing the average over 50%. Also, people could choose multiple labels, so those "correct" numbers were presumably inflated to some degree by double-clickers. Anyway, the figure doesn't exactly answer the question.

The data for that figure come from Pew’s American Trends Panel Wave 10, from 2015. The cool thing is you can download the data here. So I figured I could do a little analysis of who “correctly” identifies their category. Unfortunately, the microdata file they share doesn’t include exact age, just age in four categories that don’t line up with the generations — so you can’t replicate their analysis.

However, they do provide a little more detail in the topline report, here, including reporting the percentage of people in each “generation” who identified with each category. Using those numbers, I figure that 57% selected the correct category, 26% selected an incorrect category, 9% selected “other” (unspecified in the report), and 8% are unaccounted for. So, keeping in mind that people can be in more than one of these groups, I can’t say how many were completely “correct,” but I can say that (according to the report, not the data, which I can’t analyze for this) 57% at least selected the category that matched their birth year, possibly in combination with other categories.
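Spelled out, the 57% is just a weighted average across the "generations" (a sketch of the arithmetic; the notation here is mine, not Pew's):

p_{correct} = \sum_{g} w_g p_g

where w_g is each group's share of the sample and p_g is the percentage within group g that selected the label matching its birth years; the incorrect, "other," and unaccounted-for shares add up the same way.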

The survey also asked people "how well would you say the term [generation you chose] applies to you?" If you combine "very well" and "fairly well," you learn, for example, that actual "Silents" are more likely to say "Greatest Generation" applies well to them (32%) than to say "Silent" does (14%). Anyway, if I did this right, based on the total sample, 46% of people both "correctly" identified their generation title and said the term describes them "well." I honestly don't know what to make of this, but thought I'd share it, since it could be read as me misstating the case in the Op-Ed.

Now in the Washington Post: Generation labels mean nothing. It's time to retire them

The Washington Post has published my opinion piece on generation labels, "Generation labels mean nothing. It's time to retire them." They even commissioned art, which moves!

by Tara Jacoby, for The Washington Post

This follows a series of posts on this blog, going back a few years, which you can read under the generations tag.

You can read or sign the open letter to the Pew Research Center here.

Why you’ll never establish the existence of distinct “generations” in American society

An update from Pew, today’s thoughts, and then another data exercise.

Pew response

After sending it to the folks in charge at the Pew Research Center, I received a very friendly email response to our open letter on generation labels. They thanked me and reported that they already had plans to begin an internal discussion about "generational research" and will be consulting with experts as they do, although the timeline was not given. I take this to mean we have a bona fide opportunity to change course on this issue, both with Pew (which has outsized influence) and more widely in the coming months. But the outcome is not assured. If you agree that the "generations" labels and surrounding discourse are causing more harm than good, for researchers and the public, I hope you will join me and the 140+ social scientists who have signed so far, by signing and sharing the letter (especially with people who aren't on Twitter). Thanks!


Why “generations” won’t work

Never say never, but I don’t see how it will be possible to identify coherent, identifiable, stable, collectively recognized and popularly understood “generation” categories, based on year of birth, that reliably map onto a diverse set of measurable social indicators. If I’m right about that, which is an empirical question, then whether Pew’s “generations” are correctly defined will never be resolved, because the goal is unattainable. Some other set of birth-year cutoffs might work better for one question or another, but we’re not going to find a set of fixed divisions that works across arenas — such as social attitudes, family behavior, and economic status. So we should instead work on weaning the clicking public from its dependence on the concept and get down to the business of researching social trends (including cohort patterns), and communicating about that research in ways that are intelligible and useful.

Here are some reasons why we don’t find a good set of “generation” boundaries.

1. Mass media and social media mean there are no unique collective experiences

When something “happens” to a particular cohort, lots of other people are affected, too. Adjacent people react, discuss, buy stuff, and define themselves in ways that are affected by these historical events. Gradations emerge. The lines between who is and is not affected can’t be sharply drawn by age.

2. Experiences may be unique, but they don’t map neatly onto attitudes or adjacent behaviors

Even if you can identify something that happened to a specific age group at a specific point in time, the effects of such an experience will be diffuse. To name a few prominent examples: some people grew up in the era of mass incarceration and faced higher risks of being imprisoned, some people entered the job market in 2009 and suffered long-term consequences for their career trajectories, and some people came of age with the Pill. But these experiences don’t mark those people for distinct attitudes or behaviors. Having been incarcerated, unemployed, or in control of your pregnancy may influence attitudes and behaviors, but it won’t set people categorically apart. People whose friends or parents were incarcerated are affected, too; grandparents with unemployed people sleeping on their couches are affected by recessions; people who work in daycare centers are affected by birth trends. And, of course, African Americans have a unique experience with mass incarceration, rich people can ride out recessions, and the Pill is for women. When it comes to indicators of the kind we can measure, effects of these experiences will usually be marginal, not discrete, and not universal. (Plus, as cool new research shows, most people don’t change their minds much after they reach adulthood, so any effects of life experience on attitudes are swimming upstream to be observable at scale.)

3. It’s global now, too

Local experiences don’t translate directly to local attitudes and behavior because we share culture instantly around the world. So, 9/11 happened in the US but everyone knew about it (and there was also March 11 in Spain, and 7/7 in London). There are unique things about them that some people experienced — like having schools closed if you were a kid living in New York — but also general things that affected large swaths of the world, like heightened airline security. The idea of a uniquely affected age group is implausible.

4. Reflexivity

Once word gets out (through research or other means) about a particular trait or practice associated with a "generation," like avocado toast or student debt, it gets processed and reprocessed reflexively by people who do, or don't, want to embody a stereotype or trend attributed to their supposed group. This includes identification with the group itself: some people avoid it, some people embrace it, and others react to those reactions in turn, until the category falls irretrievably into a vortex of cultural pastiche. The discussion of the categories, in other words, probably undermines the categories as much as it reinforces them.

If all this is true, then insisting on stable, labeled "generations" just boxes people into useless fixed categories. As the open letter puts it:

Predetermined cohort categories also impede scientific discovery by artificially imposing categories used in research rather than encouraging researchers to make well justified decisions for data analysis and description. We don’t want to discourage cohort and life course thinking, we want to improve it.

Mapping social change

OK, here’s today’s data exercise. There is some technical statistical content here not described in the most friendly way, I’m sorry to say. The Stata code for what follows is here, and the GSS 1972-2018 Cross-Sectional Cumulative Data file is free, here (Stata version); help yourself.

This is just me pushing at my assumptions and supplementing my reading with some tactile data machinations to help it sink in. Following on the previous exercise, here I’ll try out an empirical method for identifying meaningful birth year groupings using attitude questions from the General Social Survey, and then see if they tell us anything, relative to “empty” categories (single years or decades) and the Pew “generations” scheme (Silent, Baby Boom, Generation X, Millennials, Generation Z).

I start with five things that are different about the cohorts of nowadays versus those of the olden days in the United States. These are things that often figure in conversations about generational change. For each of these items I use one or more questions to create a single variable with a mean of 0 and a standard deviation of 1; in each case a higher score is the more liberal or newfangled view (a minimal code sketch of the scale construction follows the list). As we'll see, all of these moved from lower to higher scores as you look at more recent cohorts.

  • Liberal spending: Believing “we’re spending too little money on…” seven things: welfare, the environment, health, big cities, drug addiction, education, and improving the conditions of black people. (For this scale, the measure of reliability [alpha] is .66, which is pretty good.)
  • Gender attitudes: Four questions on whether women are “suited for politics,” working mothers are bad for children, and breadwinner-homemaker roles are good. High scores mean more feminist (alpha = .70).
  • Confidence in institutions: Seven questions on organized religion, the Supreme Court, the military, major companies, Congress, the scientific community, and medicine. High scores mean less confidence (alpha = .68).
  • General political views from extremely conservative to extremely liberal (one question)
  • Never-none: People who never attend religious services and have no religious affiliation (together now up to about 16% of people).
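Here is a minimal Stata sketch of the scale construction for the first item, assuming standard GSS mnemonics for the seven spending questions (my actual recoding choices may differ):

    * Minimal sketch, assuming GSS mnemonics for the seven spending items.
    * Reverse-code so higher = "spending too little" (GSS: 1 = too little, 3 = too much).
    foreach v in natfare natenvir natheal natcity natdrug nateduc natrace {
        gen r_`v' = 4 - `v' if inrange(`v', 1, 3)
    }
    * Reliability check (reported alpha = .66), then average and standardize.
    alpha r_natfare r_natenvir r_natheal r_natcity r_natdrug r_nateduc r_natrace
    egen libspend_raw = rowmean(r_natfare r_natenvir r_natheal r_natcity r_natdrug r_nateduc r_natrace)
    egen libspend = std(libspend_raw)

The other scales follow the same pattern, and the megascale below is the same operation applied to the five standardized items.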

These variables span the survey years 1977 to 2018, with respondents born from 1910 to 1999 (I dropped a few born in 2000, who were just 18 years old in 2018, and those born before 1910). Because not all questions were asked of all the respondents in every year, I lost a lot of people, and I had to make some hard choices about what to include. The sample that answered all these questions is about 5,500 people (down from almost 62,000 altogether — ouch!). Still, what I do next seems to work anyway.

Clustering generations

Once I have these five items, I combine them into a megascale (alpha = .45) which I use to represent social change. You can see in the figure that successive cohorts of respondents are moving up this scale, on average. Note that these cohorts are interviewed at different points in time; for example, a 40-year-old in 1992 is in the same cohort as a 50-year-old in 2002, while the 1977 interviews cover people born all the way back to 1910. That’s how I get so many cohorts out of interviews from just 1977 to 2018 (and why the confidence intervals get bigger for recent cohorts).

The question raised by this figure is whether the cohort attitude trend would be well served by some strategic cutpoints to denote cohorts ("generations" not in the reproductive sense but in the sense of people born around the same time). Treating each birth year as separate is unwieldy, and the samples are small. We could just use decades of birth, or Pew's arbitrary "generations." Or make up new ones, which is what I'm testing out.

So I hit on a simple way to identify cutpoints using an exploratory technique known as k-means clustering. This is a simple (with computers) way to identify the most logical groups of people in a dataset. In this case I used two variables: the megascale and birth year. Stata's k-means clustering algorithm then tries to find a set of groups of cases such that the differences within them (how far each case is from the means of the two variables within the group) are as small as possible. (You tell it k, the number of groups you want.) Because cohort is a continuous variable, and the megascale rises over time, the algorithm happily puts people in clusters that don't have overlapping birth years, so I get nicely ordered cohorts. I guess for a U-shaped time pattern it would put young and old people in the same groups, which would mess this up, but that's not the case with this pattern.
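Here is a minimal sketch of that step in Stata; the variable names are mine, and k-means results depend on how the inputs are scaled, so treat this as illustrative:

    * Minimal sketch, assuming variables megascale and cohort (birth year) exist.
    * Note: k-means is sensitive to variable scaling; this setup is illustrative only.
    set seed 20210401
    cluster kmeans megascale cohort, k(6) start(krandom) name(coh6)
    * Inspect the birth-year boundaries of the resulting clusters.
    tabstat cohort, by(coh6) statistics(min max n)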

I tested 5, 6, and 7 groups, thinking more or fewer than that would not be worth it. It turns out 6 groups had the best explanatory power, so I used those. Then I did five linear regressions with the megascale as the dependent variable, a handful of control variables (age, sex, race, region, and education), and different cohort indicators. My basic check of fit is the adjusted R2, or the amount of variance explained adjusted for the number of variables. Here’s how the models did, in order from worst to best:

Cohort variable(s)            Adjusted R2
Pew generations               .1393
One linear cohort variable    .1400
My cluster categories         .1423
Decades of birth              .1424
Each year individually        .1486
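A minimal sketch of the model comparison, with hypothetical names for the cohort indicators and controls:

    * Minimal sketch; variable names are hypothetical.
    local ctrl "age i.sex i.race i.region i.educ"
    regress megascale i.pewgen `ctrl'     // Pew generations
    display "Pew generations: " %6.4f e(r2_a)
    regress megascale cohort `ctrl'       // one linear cohort variable
    regress megascale i.coh6 `ctrl'       // my k-means cluster categories
    regress megascale i.decade `ctrl'     // decades of birth
    regress megascale i.cohort `ctrl'     // each birth year individually
    display "Each year individually: " %6.4f e(r2_a)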

Each year is good for explaining variance, but too cumbersome, and the Pew “generations” were the worst (not surprising, since they weren’t concocted to answer this question — or any other question). My cluster categories were better than just entering birth cohort as a single continuous variable, and almost as good as plain decades of birth. My scheme is only six categories, which is more convenient than nine decades, so I prefer it in this case. Note I am not naming them, just reporting the birth-year clusters: 1910-1924, 1925-1937, 1938-1949, 1950-1960, 1961-1974, and 1975-1999. These are temporary and exploratory — if you used different variables you’d get different cohorts.

Here’s what they look like with my social change indicators:

Shown this way, you can see the different pace and timing of change for the different indicators — for example, gender attitudes changed most dramatically for cohorts born before 1950, the falling confidence in institutions was over by the end of the 1950s cohort, and the most recent cohort shows the greatest spike in religious never-nones. Social change is fascinating, complex, and uneven!

You can also see that the cuts I'm using here look nothing like Pew's, which, for example, pool the Baby Boomers from birth years 1946-1964, and Millennials from 1981 to 1996. And they don't fit some stereotypes you hear. For example, the group with the least confidence in major institutions is those born in the 1950s (a slice of Baby Boomers), not Millennials. Try to square these results with the ridiculousness that Chuck Todd recently offered up:

So the promise of American progress is something Millennials have heard a lot about, but they haven't always experienced it personally. … And in turn they have lost confidence in institutions. There have been plenty of scandals that have cost trust in religious institutions, the military, law enforcement, political parties, the banking system, all of it, trust eroded.

You could delve into the causes of trust erosion (I wrote a paper on confidence in science alone), but attributing a global decline in trust to a group called “Millennials,” one whose boundaries were declared arbitrarily, without empirical foundation, for a completely unrelated purpose, is uninformative at best. Worse, it promotes uncritical, determinist thinking, and — if it gets popular enough — encourages researchers to use the same meaningless categories to try to get in line with the pop culture pronouncements. You get lots of people using unscrutinized categories, compounding their errors. Social scientists have to do better, by showing how cohorts and life course events really are an important way to view and comprehend social change, rather than a shallow exercise in stereotyping.

Conclusion

The categories I came up with here, for which there is some (albeit slim) empirical justification, may or may not be useful. But it’s also clear from looking at the figures here, and the regression results, that there is no singularly apparent way to break down birth cohorts to understand these trends. In fact, a simple linear variable for year of birth does pretty well. These are sweeping social changes moving through a vast, interconnected population over a long time. Each birth cohort is riven with major disparities, along the stratifying lines of race/ethnicity, gender, and social class, as well as many others. There may be times when breaking people down into birth cohorts helps understand and explain these patterns, but I’m pretty sure we’re never going to find a single scheme that works best for different situations and trends. The best practice is probably to look at the trend in as much detail as possible, to check for obvious discontinuities, and then, if no breaks are apparent, use an “empty” category set, such as decades of birth, at least to start.

It will take a collective act of will by researchers, teachers, journalists, and others to break our social change trend industry of its "generations" habit. If you're a social scientist, I hope you'll help by signing the letter. (I'm also happy to support other efforts besides this experts' letter.)


Note on causes

Although I am talking about cohorts, and using regression models where cohort indicators are independent variables, I'm not assessing cohort effects in the sense of causality, but rather common experiences that might appear as patterns in the data. We often experience events through a cohort lens even if they are caused by our aging, or by historical factors that affect everyone. How to distinguish such age, period, or cohort effects in social change is an ongoing subject of tricky research (see this from Morgan and Lee for a recent take using the GSS), but it's not required to address the Pew "generations" question: are there meaningful cohorts that experience events in a discernibly collective way, making them useful groups for social analysis?

Draft: Open letter to the Pew Research Center on generation labels

This post has been updated with the final signing statement and a link to the form. Thanks for sharing!

I have objected to the use of “generation” divisions and names for years (here’s the tag). Then, the other day, I saw this introduction to an episode of Meet the Press Reports, which epitomized a lot of the gibberishy nature of generationspeak (sorry about the quality).

OK, it’s ridiculous political punditry — “So as their trust in institutions wanes, will they eventually coalesce behind a single party, or will they be the ones to simply transform our political system forever?” — but it’s also generations gobbledygook. And part of what struck me was this: “millennials are now the largest generation, they have officially overtaken the Baby Boom.” Well-educated people think these things are real things, official things. We have to get off this train.

If you know the generations discourse, you know a lot of it emanates from the Pew Research Center. They do a lot of excellent research — and make a lot of that research substantially worse by cramming it into the "generations" framework that they, more than anyone else, have popularized — have made "official."

After seeing that clip, I put this on Twitter, and was delighted by the positive response:

So I wrote a draft of an open letter to Pew, incorporating some of the comments from Twitter. But then I decided the letter was too long; to be more effective, maybe it should be more concise and less ranty. So here's the long version, which has more background information and examples, followed by a signing version, with a link to the form to sign it. Please feel free to sign if you are a demographer or other social scientist, and share the link to the form (or this post) in your networks.

Maybe if we got a lot of signatories to this, or something like it, they would take heed.


Preamble by me

Pew's generation labels — which are widely adopted by many other individuals and institutions — encourage unhelpful social science communication, driving people toward broad generalizations, stereotyping, click bait, sweeping character judgment, and echo chamber thinking. When people assign names to generations, they encourage anointing them with a character, and then imposing qualities onto whole populations without basis, or on the basis of crude stereotyping. This fuels a constant stream of myth-making and myth-busting, with circular debates about whether one generation or another fits better or worse with its various associated stereotypes. In the absence of research about whether the generation labels are useful either scientifically or in communicating science, we are left with a lot of headlines drawing a lot of clicks, to the detriment of public understanding.

Cohort analysis and the life course perspective are important tools for studying and communicating social science. We should study the shadow, or reflection, of life events across people’s lives at a cultural level, not just an individual level. In fact, the Pew Research Center’s surveys and publications make great contributions to that end. But the vast majority of popular survey research and reporting in the “generations” vein uses data analyzed by age, cross-sectionally, with generational labels applied after the fact — it’s not cohort research at all. We shouldn’t discourage cohort and life course thinking, rather we should improve it.

Pew’s own research provides a clear basis for scrapping the “generations.” “Most Millennials Resist the ‘Millennial’ Label” was the title of a report Pew published in 2015. This is when they should have stopped — based on their own science — but instead they plowed ahead as if the “generations” were social facts that the public merely failed to understand.

This figure shows that the majority of Americans cannot correctly identify the generational label Pew has applied to them.

The concept of "generations" as applied by Pew (and many others) defies the basic reality of generations as they relate to reproductive life cycles. Pew's "generations" are so short (now 16 years) that they bear no resemblance to reproductive generations. In 2019 the median age of a woman giving birth in the U.S. was 29. As a result, many multigenerational families include no members of some generations on Pew's chart. For example, the scheme asks siblings (like the tennis-champion Williams sisters, born one year apart) to identify as members of separate generations.

Perhaps due to their ubiquitous use, and Pew’s reputation as a trustworthy arbiter of social knowledge, many people think these “generations” are official facts. Chuck Todd reported on NBC News just this month, “Millennials are now the largest generation, they have officially overtaken the Baby Boom.” (NPR had already declared Millennials the largest generation seven years earlier, using a more expansive definition.) Pew has perhaps inadvertently encouraged these ill-informed perspectives, as when, for example, Richard Fry wrote for Pew, “Millennials have surpassed Baby Boomers as the nation’s largest living adult generation, according to population estimates from the U.S. Census Bureau” — despite the fact that the Census Bureau report referenced by the article made no mention of generations. Note that Chuck Todd’s meaningless graphic, which doesn’t even include ages, is also falsely attributed to the U.S. Census Bureau.

Generations are a beguiling and appealing vehicle for explaining social change, but one that is more often misleading than informative. The U.S. Army Research Institute commissioned a consensus study report from the National Academies, titled, Are Generational Categories Meaningful Distinctions for Workforce Management? The group of prominent social scientists concluded: “while dividing the workforce into generations may have appeal, doing so is not strongly supported by science and is not useful for workforce management. …many of the stereotypes about generations result from imprecise use of the terminology in the popular literature and recent research, and thus cannot adequately inform workforce management decisions.”

As one of many potential examples of such appealing, but ultimately misleading, uses of the “Millennial” generation label, consider a 2016 article by Paul Taylor, a former executive vice president of the Pew Research Center. He promised he would go beyond “clichés” to offer “observations” about Millennials — before describing them as “liberal lions…who might not roar,” “downwardly mobile,” “unlaunched,” “unmarried,” “gender role benders,” “upbeat,” “pre-Copernican,” and as an “unaffiliated, anti-hierarchical, distrustful” generation who nevertheless “get along well with their parents, respect their elders, and work well with colleagues” while being “open to different lifestyles, tolerant of different races, and first adopters of new technologies.” And their “idealism… may save the planet.”

In 2018 Pew announced that it would henceforth draw a line between “Millennials” and “Generation Z” at the year 1996. And yet they offered no substantive reason, just that “it became clear to us that it was time to determine a cutoff point between Millennials and the next generation [in] order to keep the Millennial generation analytically meaningful, and to begin looking at what might be unique about the next cohort.” In asserting that “their boundaries are not arbitrary,” the Pew announcement noted that they were assigning the same length to the Millennial Generation as they did to Generation X — both 16 years, a length that bears no relationship to reproductive generations, nor to the Baby Boom cohort, which is generally considered to be 19 years (1946-1964).

The essay that followed this announcement attempted to draw distinctions between Millennials and Generation Z, but it could not delineate a clear division, because none can be drawn. For example, it mentioned that "most Millennials came of age and entered the workforce facing the height of an economic recession," but in 2009, the trough year for that recession, Millennials by Pew's definition ranged from age 13 to 28. The other events mentioned — the 9/11 terrorist attacks, the election of Barack Obama, the launch of the iPhone, and the advent of social media — similarly find Millennials at a range of ages too wide to be automatically unifying in terms of experience. Why is being between 12 and 27 at the time of Obama's election more meaningful a cohort experience than being, say, 18 to 34? No answer to this is provided, because Pew has determined the cohort categories before the logical scientific questions can be asked.

Consider a few other hypothetical examples. In the future, we might hypothesize that those who were in K-12 school during the pandemic-afflicted 2020-2021 academic year constitute a meaningful cohort. That 13-year cohort was born between 2003 and 2015, which does not correspond to one of Pew's predetermined "generations." For some purposes, an even narrower range might be more appropriate, such as those who graduated high school in 2020-2021 alone. Under the Pew generational regime, too many researchers, marketers, journalists, and members of the general public will look at major events like these through a pre-formed prism that distorts their ability to pursue or understand the way cohort life course experiences affect social experience.

Unlike the other “generations” in Pew’s map, the Baby Boom corresponds to a unique demographic event, painstakingly, empirically demonstrated to have begun in July 1946 and ended in mid-1964. And being part of that group has turned out to be a meaningful experience for many people — one that in fact helped give rise to the popular understanding of birth cohorts as a concept. But it does not follow that any arbitrarily grouped set of birth dates would produce a sense of identity, especially one that can be named and described on the basis of its birth years alone. It is an accident of history that the Baby Boom lasted 18 years — as far as we know having nothing to do with the length of a reproductive generation, but perhaps leading subsequent analysts to use the term “generation” to describe both Baby Boomers and subsequent cohorts.

The good researchers at Pew are in a tough spot (as are others who rely on their categories). The generations concept is tremendously appealing and hugely popular. But where does it end? Are we going to keep arbitrarily dividing the population into generations and giving them names — after “Z”? On what scientific basis would the practice continue? One might be tempted to address these problems by formalizing the process, with a conference and a dramatic launch, to make it even more “official.” But there is no scientific rationale for dividing the population arbitrarily into cohorts of any particular length for purposes of analyzing social trends, and to fix their membership a priori. Pew would do a lot more to enhance its reputation, and contribute to the public good, by publicly pulling the plug on this project.


Open letter to the Pew Research Center on generation labels

Sign the letter here.

We are demographers and other social scientists, writing to urge the Pew Research Center to stop using its generation labels (currently: Silent, Baby Boom, X, Millennial, Z). We appreciate Pew’s surveys and other research, and urge them to bring this work into better alignment with scientific principles of social research.

  1. Pew’s “generations” cause confusion.

The groups Pew calls Silent, Baby Boom, X, Millennial, and Z are birth cohorts determined by year of birth, which are not related to reproductive generations. There is further confusion because their arbitrary lengths (18, 19, 16, 16, and 16 years, respectively) have grown shorter as the age difference between parents and their children has lengthened.

  2. The division between "generations" is arbitrary and has no scientific basis.

With the exception of the Baby Boom, which was a discrete demographic event, the other “generations” have been declared and named on an ad hoc basis without empirical or theoretical justification. Pew’s own research conclusively shows that the majority of Americans cannot identify the “generations” to which Pew claims they belong. Cohorts should be delineated by “empty” periods (such as individual years, equal numbers of years, or decades) unless research on a particular topic suggests more meaningful breakdowns.

  3. Naming "generations" and fixing their birth dates promotes pseudoscience, undermines public understanding, and impedes social science research.

The “generation” names encourage assigning them a distinct character, and then imposing qualities on diverse populations without basis, resulting in the current widespread problem of crude stereotyping. This fuels a stream of circular debates about whether the various “generations” fit their associated stereotypes, which does not advance public understanding.

  4. The popular "generations" and their labels undermine important cohort and life course research.

Cohort analysis and the life course perspective are important tools for studying and communicating social science. But the vast majority of popular survey research and reporting on the “generations” uses cross-sectional data, and is not cohort research at all. Predetermined cohort categories also impede scientific discovery by artificially imposing categories used in research rather than encouraging researchers to make well justified decisions for data analysis and description. We don’t want to discourage cohort and life course thinking, we want to improve it.

  5. The "generations" are widely misunderstood to be "official" categories and identities.

Pew’s reputation as a trustworthy social research institution has helped fuel the false belief that the “generations” definitions and labels are social facts and official statistics. Many other individuals and organizations use Pew’s definitions in order to fit within the paradigm, compounding the problem and digging us deeper into this hole with each passing day.

  6. The "generations" scheme has become a parody and should end.

With the identification of “Generation Z,” Pew has apparently reached the end of the alphabet. Will this continue forever, with arbitrarily defined, stereotypically labeled, “generation” names sequentially added to the list? Demographic and social analysis is too important to be subjected to such a fate. No one likes to be wrong, and admitting it is difficult. We sympathize. But the sooner Pew stops digging this hole, the easier it will be to escape. A public course correction from Pew would send an important signal and help steer research and popular discourse around demographic and social issues toward greater understanding. It would also greatly enhance Pew’s reputation in the research community. We urge Pew to end this as gracefully as possible — now.

As consumers of Pew Research Center research, and experts who work in related fields ourselves, we urge the Pew Research Center to do the right thing and help put an end to the use of arbitrary and misleading “generation” labels and names.

Santa’s magic, children’s wisdom, and inequality (a timeless holiday classic essay!)

This is a preprint version of an essay in Enduring Bonds: Inequality, Marriage, Parenting, and Everything Else That Makes Families Great and Terrible, by Philip N. Cohen. Oakland, California: University of California Press. It is revised from previous essays about Santa. Read this one instead.

Eric Kaplan, channeling Francis Pharcellus Church, writes in favor of Santa Claus in the New York Times. The Church argument, written in 1897, is that (a) you can’t prove there is no Santa, so agnosticism is the strongest possible objection, and (b) Santa enriches our lives and promotes non-rationalized gift-giving, “so we might as well believe in him” (1). It’s a very common argument, identical to one employed against atheists in favor of belief in God, but more charming and whimsical when directed at killjoy Santa-deniers.

All harmless fun and existential comfort-food. But we have two problems that the Santa situation may exacerbate. First is science denial. And second is inequality. So, consider this an attempted joyicide.

Science

From Pew Research comes this Christmas news:

“In total, 65% of U.S. adults believe that all of these aspects of the Christmas story – the virgin birth, the journey of the magi, the angel’s announcement to the shepherds and the manger story – reflect events that actually happened” (2).

On some specific items, the scores were even higher. The poll found 73% of Americans believe that Jesus was born to a virgin mother – a belief even shared by 60% of college graduates. (Among Catholics agreement was 86%, among Evangelical Protestants, 96%.)

So the Santa situation is not an isolated question. We’re talking about a population with a very strong tendency to express literal belief in fantastical accounts. This Christmas story may be the soft leading edge of a more hardcore Christian fundamentalism. For the past 20 years, the General Social Survey (GSS) has found that a third of American adults agrees with the statement, “The Bible is the actual word of God and is to be taken literally, word for word,” versus two other options: “The Bible is the inspired word of God but not everything in it should be taken literally, word for word”; and, “The Bible is an ancient book of fables, legends, history, and moral precepts recorded by men.” (The “actual word of God” people are less numerous than the virgin-birth believers, but they’re related.)

Using the GSS, I analyzed people’s social attitudes according to their view of the Bible for the years 2010-2014 (see Figure 9). Controlling for their sex, age, race, education, and the year of the survey, those with more literal interpretations of the Bible are much more likely than the rest of the population to:

  • Oppose marriage rights for homosexuals
  • Agree that “people worry too much about human progress harming the environment”
  • Agree that “It is much better for everyone involved if the man is the achiever outside the home and the woman takes care of the home and family”

In addition, among non-Hispanic Whites, the literal-Bible people are more likely to rank Blacks as more lazy than hardworking, and to believe that Blacks “just don’t have the motivation or willpower to pull themselves up out of poverty” (3).
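As a minimal sketch of the kind of model behind these comparisons, assuming standard GSS mnemonics (my actual coding choices are described in the notes):

    * Minimal sketch, assuming GSS mnemonics; the filename is hypothetical.
    use gss_cumulative, clear
    keep if inrange(year, 2010, 2014)
    * 1 = opposes marriage rights for homosexuals (marhomo: 4-5 = disagree).
    gen oppose = inrange(marhomo, 4, 5) if !missing(marhomo)
    * bible: 1 = actual word of God, 2 = inspired word, 3 = book of fables.
    logit oppose i.bible i.sex age i.race i.degree i.year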

This isn’t the direction I’d like to push our culture. Of course, teaching children to believe in Santa doesn’t necessarily create “actual word of God” fundamentalists – but there’s some relationship there.

Children’s ways of knowing

Margaret Mead in 1932 reported on the notion that young children not only know less, but know differently, than adults, in a way that parallels the evolution of society over time. Children were thought to be “more closely related to the thought of the savage than to the thought of the civilized man,” with animism in “primitive” societies being similar to the spontaneous thought of young children. This goes along with the idea that believing in Santa is indicative of a state of innocence (4). In pursuit of empirical confirmation of the universality of childhood, Mead investigated the Manus tribe in Melanesia, who were pagans, looking for magical thinking in children: “animistic premise, anthropomorphic interpretation and faulty logic.”

Instead, she found “no evidence of spontaneous animistic thought in the uncontrolled sayings or games” over five months of continuous observation of a few dozen children. And while adults in the community attributed mysterious or random events to spirits and ghosts, children never did:

“I found no instance of a child’s personalizing a dog or a fish or a bird, of his personalizing the sun, the moon, the wind or stars. I found no evidence of a child’s attributing chance events, such as the drifting away of a canoe, the loss of an object, an unexplained noise, a sudden gust of wind, a strange deep-sea turtle, a falling seed from a tree, etc., to supernaturalistic causes.”

On the other hand, adults blamed spirits for hurricanes hitting the houses of people who behave badly, believed statues can talk, thought lost objects had been stolen by spirits, and said people who are insane are possessed by spirits. The grown men all thought they had personal ghosts looking out for them – with whom they communicated – but the children dismissed the reality of the ghosts that were assigned to them. They didn’t play ghost games.

Does this mean magical thinking is not inherent to childhood? Mead wrote:

“The Manus child is less spontaneously animistic and less traditionally animistic than is the Manus adult [‘traditionally’ here referring to the adoption of ritual superstitious behavior]. This result is a direct contradiction of findings in our own society, in which the child has been found to be more animistic, in both traditional and spontaneous fashions, than are his elders. When such a reversal is found in two contrasting societies, the explanation must be sought in terms of the culture; a purely psychological explanation is inadequate.”

Maybe people have the natural capacity for both animistic and realistic thinking, and societies differ in which trait they nurture and develop through children’s education and socialization. Mead speculated that the pattern she found had to do with the self-sufficiency required of Manus children. A Manus child must…

“…make correct physical adjustments to his environment, so that his entire attention is focused upon cause and effect relationships, the neglect of which would result in immediate disaster. … Manus children are taught the properties of fire and water, taught to estimate distance, to allow for illusion when objects are seen under water, to allow for obstacles and judge possible clearage for canoes, etc., at the age of two or three.”

Plus, perhaps unlike in industrialized society, their simple technology is understandable to children without the invocation of magic. And she observed that parents didn’t tell the children imaginary stories, myths, and legends.

I should note here that I’m not saying we have to choose between religious fundamentalism and a society without art and literature. The question is about believing things that aren’t true, and can’t be true. I’d like to think we can cultivate imagination without launching people down the path of blind credulity.

Modern credulity

For evidence that culture produces credulity, consider the results of a study that showed most four-year-old children understood that Old Testament stories are not factual. Six-year-olds, however, tended to believe the stories were factual, if their impossible events were attributed to God rather than rewritten in secular terms (e.g., “Matthew and the Green Sea” instead of “Moses and the Red Sea”) (5). Why? Belief in supernatural or superstitious things, contrary to what you might assume, requires a higher level of cognitive sophistication than does disbelief, which is why five-year-olds are more likely to believe in fairies than three-year-olds (6). These studies suggest children have to be taught to believe in magic. (Adults use persuasion to do that, but teaching with rewards – like presents under a tree or money under a pillow – is of course more effective.)

Children can know things either from direct observation or experience, or from being taught. So they can know dinosaurs are real if they believe books and teachers and museums, even if they can't observe them living (true reality detection). And they can know that Santa Claus and imaginary friends are not real if they believe either authorities or their own senses (true baloney detection). Similarly, children also have two kinds of reality-assessment errors: false positive and false negative. Believing in Santa Claus is a false positive. Refusing to believe in dinosaurs is a false negative. In Figure 10, which I adapted from a paper by Jacqueline Woolley and Maliki Ghossainy, true judgment is in regular type, errors are in italics (7).
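Here is the Figure 10 logic in list form (my rendering; the errors are the ones the figure sets in italics):

  • Entity is real, child believes: true reality detection (believing in dinosaurs)
  • Entity is real, child does not believe: false negative (refusing to believe in dinosaurs)
  • Entity is not real, child believes: false positive (believing in Santa Claus)
  • Entity is not real, child does not believe: true baloney detection (disbelieving in Santa Claus)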

We know a lot about kids’ credulity (Santa Claus, tooth fairy, etc.). But, Woolley and Ghossainy write, their skepticism has been neglected:

“Development regarding beliefs about reality involves, in addition to decreased reliance on knowledge and experience, increased awareness of one’s own knowledge and its limitations for assessing reality status. This realization that one’s own knowledge is limited gradually inspires a waning reliance on it alone for making reality status decisions and a concomitant increase in the use of a wider range of strategies for assessing reality status, including, for example, seeking more information, assessing contextual cues, and evaluating the quality of the new information” (8).

The “realization that one’s own knowledge is limited” is a vital development, ultimately necessary for being able to tell fact from fiction. But, sadly, it need not lead to real understanding – under some conditions, such as, apparently, the USA today, it often leads instead to reliance on misguided or dishonest authorities who compete with science to fill the void beyond what we can directly observe or deduce. Believing in Santa because we can’t disprove his existence is a developmental dead end, a backward-looking reliance on authority for determining truth. But so is failure to believe in vaccines or evolution or climate change just because we can’t see them working.

We have to learn how to avoid the italics boxes without giving up our love for things imaginary, and that seems impossible without education in both science and art.

Rationalizing gifts

What is the essence of Santa, anyway? In Kaplan's New York Times essay it's all about non-rationalized giving, for the sake of giving. The latest craze in Santa culture, however, says otherwise: Elf on the Shelf, which exploded on the Christmas scene after 2008, selling in the millions. In case you've missed it, the idea is to put a cute little elf somewhere on a shelf in the house. You tell your kids it's watching them, and that every night it goes back to the North Pole to report to Santa on their nice/naughty ratio. While the kids are sleeping, you move it to another shelf in the house, and the kids delight in finding it again each morning.

In other words, it's the latest development of Michel Foucault's panopticon (9). Consider the Elf on the Shelf aftermarket accessories, like the handy warning labels, which threaten children with "no toys" if they aren't on their "best behavior" from now on. So is this non-rationalized gift giving? Quite the opposite. In fact, rather than cultivating a whimsical love of magic, this is closer to a dystopian fantasy in which the conjured enforcers of arbitrary moral codes leap out of their fictional realm to impose harsh consequences in the real life of innocent children.

Inequality

My developmental question regarding inequality is this: What is the relationship between belief in Santa and social class awareness over the early life course? How long after kids realize there is class inequality do they go on believing in Santa? This is where rationalization meets fantasy. Beyond worrying about how Santa rewards or punishes them individually, if children are to believe that Christmas gifts are doled out according to moral merit, then what are they to make of the obvious fact that rich kids get more than poor kids? Rich or poor, the message seems the same: children deserve what they get.

I can’t demonstrate that believing in Santa causes children to believe that economic inequality is justified by character differences between social classes. Or that Santa belief undermines future openness to science and logic. But those are hypotheses. Between the anti-science epidemic and the pervasive assumption that poor people deserve what they get, this whole Santa enterprise seems risky. Would it be so bad, so destructive to the wonder that is childhood, if instead of attributing gifts to supernatural beings we instead told children that we just buy them gifts because we love them unconditionally and want them — and all other children — to be happy?


Notes:

1. Kaplan, Eric. 2014. “Should We Believe in Santa Claus?” New York Times Opinionator, December 20.

2. Pew Research Center. 2014. “Most Say Religious Holiday Displays on Public Property Are OK.” Religion & Public Life Project, December 15.

3. The GSS asked if “people in the group [African Americans] tend to be hard-working or if they tend to be lazy,” on a scale from 1 (hardworking) to 7 (lazy). I coded them as favoring lazy if they gave scores of 5 or above. The motivation question was a yes-or-no question: “On the average African-Americans have worse jobs, income, and housing than white people. Do you think these differences are because most African-Americans just don’t have the motivation or willpower to pull themselves up out of poverty?”

4. Mead, Margaret. 1932. “An Investigation of the Thought of Primitive Children, with Special Reference to Animism.” Journal of the Royal Anthropological Institute of Great Britain and Ireland 62: 173–90.

5. Vaden, Victoria Cox, and Jacqueline D. Woolley. 2011. “Does God Make It Real? Children’s Belief in Religious Stories from the Judeo-Christian Tradition.” Child Development 82 (4): 1120–35.

6. Woolley, Jacqueline D., Elizabeth A. Boerger, and Arthur B. Markman. 2004. “A Visit from the Candy Witch: Factors Influencing Young Children’s Belief in a Novel Fantastical Being.” Developmental Science 7 (4): 456–68.

7. Woolley, Jacqueline D., and Maliki Ghossainy. 2013. “Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics.” Child Development 84 (5): 1496–1510.

8. Woolley, Jacqueline D., and Maliki Ghossainy. 2013. “Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics.” Child Development 84 (5): 1496–1510.

9. Pinto, Laura. 2016. “Elf et Michelf.” YouTube. https://www.youtube.com/watch?v=s9Pn16dCWIg.

Against the generations, with video

I had the opportunity to make a presentation at the National Academies to the “Committee on the Consideration of Generational Issues in Workforce Management and Employment Practices.” If you’ve followed my posts about the “generation” terms and their use in the public sphere you understand how happy this made me.

The committee is considering a wide array of issues related to the changing workforce — under a contract from the Army — and I used the time to address the uses and misuses of cohort concepts and analysis in analyzing social change.

In the introduction, I said generational labels, e.g., “Millennials”:

encourage what’s bad about social science. It drives people toward broad generalizations, stereotyping, click bait, character judgment, and echo chamber thinking. … When we give them names and characters we start imposing qualities onto populations with absolutely no basis, or worse, on the basis of stereotyping, and then it becomes just a snowball of clickbait confirmation bias. … And no one’s really assessing whether these categories are doing us any good, but everyone’s getting a lot of clicks.

The slides I used are here in PDF. The whole presentation was captured on video, including the Q&A.

From my answer to the last question:

Cohort analysis is really important. And the life course perspective, especially on demographic things, has been very important. And as we look at changes over time in the society and the culture, things like how many times you change jobs, did you have health insurance at a certain point in your life, how crowded were your schools, what was the racial composition of your neighborhood or school when you were younger — we want to think about the shadow of these events across people’s lives and at a cultural level, not just an individual level. So it absolutely is important. … That’s a powerful way of thinking and a good opportunity to apply social science and learn from it. So I don’t want to discourage cohort thinking at all. I just want to improve it… Nothing I said should be taken to be critical of the idea of using cohorts and life course analysis in general at all.

You know, this is not my most important work. We have bigger problems in society. But understanding demographic change, how it relates to inequality, and communicating that in ways that allow us to make smarter decisions about it is my most important work. That’s why I consider this to be part of it.

Intermarriage rates relative to diversity

Addendum: Metro-area analysis added at the end.

The Pew Research Center has a new report out on race/ethnic intermarriage, which I recommend, by Gretchen Livingston and Anna Brown. This is mostly a methodological note, which also nods at some other issues.

How do you judge the amount of intermarriage? For example, in the U.S., smaller groups — Asians and American Indians — marry exogamously at higher rates. Is that because they have fewer same-race people to choose from? Or is it because Whites shun them less than they do Blacks, who are also a larger group? To answer this, you can look at the intermarriage rates relative to group size in various ways.

The Pew report gives some detail about different groups marrying each other, but the topline number is the total intermarriage rate:

In 2015, 17% of all U.S. newlyweds had a spouse of a different race or ethnicity, marking more than a fivefold increase since 1967, when 3% of newlyweds were intermarried, according to a new Pew Research Center analysis of U.S. Census Bureau data.

Here’s one way to assess that topline number, which I’ll do by state just to illustrate the variation in the U.S. (and then I repeat this by metro area below, by popular request).*

The American Community Survey (which I download from IPUMS.org) identified people who married within the previous 12 months, whom I'll call newlyweds. I use the 2011-2015 combined data file to increase the sample size in small states. I define intermarriage a little differently than Pew does (for convenience, not because it's better). I call a couple intermarried if they don't match each other in a five-category scheme: White, Black, Asian/Pacific Islander, American Indian, Hispanic. I discard those newlyweds (about 2%) who are multiracial or specified other race and not Hispanic. I only include different-sex couples.
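Here is a minimal Stata sketch of that setup, assuming IPUMS variable names (marrinyr, hispan, race) with spouse characteristics already attached under an sp_ prefix (the prefix is my convention, not IPUMS's):

    * Minimal sketch, assuming IPUMS ACS variables; the sp_ prefix is hypothetical.
    keep if marrinyr == 2                               // married in past year (coding assumed: 2 = yes)
    gen byte re5 = .
    replace re5 = 5 if inrange(hispan, 1, 4)            // Hispanic
    replace re5 = 1 if re5 == . & race == 1             // White
    replace re5 = 2 if re5 == . & race == 2             // Black
    replace re5 = 4 if re5 == . & race == 3             // American Indian
    replace re5 = 3 if re5 == . & inrange(race, 4, 6)   // Asian/Pacific Islander
    * Multiracial and other-race, non-Hispanic cases stay missing and drop out.
    gen intermarried = re5 != sp_re5 if re5 < . & sp_re5 < .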

The Herfindahl index is used by economists to measure market concentration. It looks like this:

H = \sum_{i=1}^{N} s_i^2

where s_i is the market share of firm i in the market, and N is the number of firms. It's the sum of the squared proportions held by each firm (or race/ethnicity). The higher the score, the greater the concentration. In race/ethnic terms, if you subtract the Herfindahl index from 1, you get the probability that two randomly selected people are in a different race/ethnic group, which I call diversity.
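A minimal Stata sketch of the diversity calculation, unweighted for simplicity (re5 is the five-category variable from the sketch above):

    * Minimal sketch: diversity = 1 - Herfindahl index of group shares among newlyweds.
    local H = 0
    quietly count if re5 < .
    local N = r(N)
    levelsof re5, local(groups)
    foreach g of local groups {
        quietly count if re5 == `g'
        local H = `H' + (r(N)/`N')^2
    }
    display "Diversity = " %5.3f 1 - `H'
    * Adjusted propensity = observed intermarriage rate / diversity (for Maine below, 4.55/5.23).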

Consider Maine. In my analysis of newlyweds in 2011-2015, 4.55% were intermarried as defined above. The diversity calculation for Maine looks like this (ignore the scale):

[Figure: race/ethnic composition of Maine newlyweds, from which the diversity calculation yields about 5.2%]
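To make the arithmetic explicit, here is the whole adjustment as a trivial sketch, using the Maine numbers reported here and explained in the next paragraph:

```python
observed = 0.0455           # share of Maine newlyweds actually intermarried
expected = 0.0523           # diversity of the newlywed pool: 1 - Herfindahl
damp = observed / expected  # diversity-adjusted intermarriage propensity
print(round(damp, 2))       # 0.87: Maine is at 87% of its intermarriage potential
```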

So in Maine two newlyweds have a 5.2% chance of being intermarried if you scramble up the marriage applications, compared with 4.6% who are actually intermarried. (A very important decision here is to use the newlywed population to calculate diversity, instead of the single population or the total population; it’s easy to change that.) Taking the ratio of these, I calculate that Maine is operating at 87% of its intermarriage potential (4.55 / 5.23). Maybe call it a diversity-adjusted intermarriage propensity. So here are all the states (and D.C.), showing diversity and intermarriage. (The diagonal line shows what you’d get if people married at random; the two illegible clusters are DC+NY and WA+KS; click to enlarge.)

[Figure: diversity and intermarriage among newlyweds, by state]

How far each state is off the line is the diversity-adjusted intermarriage propensity (intermarriage divided by diversity). Here it is in map form (using maptile):

[Map: diversity-adjusted intermarriage propensity, by state]

And here are the same calculations for the top 50 metro areas, ranked by the number of newlyweds in the sample; the smallest is Tucson, with a sample of 478. First, the figure (click to enlarge):

[Figure: diversity and intermarriage among newlyweds, by metro area]

And here’s the list of metro areas, sorted by diversity-adjusted intermarriage propensity:

Diversity-adjusted intermarriage propensity
Birmingham-Hoover, AL .083
Memphis, TN-MS-AR .127
Richmond, VA .133
Atlanta-Sandy Springs-Roswell, GA .147
Detroit-Warren-Dearborn, MI .155
Philadelphia-Camden-Wilmington, PA-NJ-DE-MD .157
Louisville/Jefferson County, KY-IN .170
Columbus, OH .188
Baltimore-Columbia-Towson, MD .197
St. Louis, MO-IL .204
Nashville-Davidson–Murfreesboro–Franklin, TN .206
Cleveland-Elyria, OH .213
Pittsburgh, PA .215
Dallas-Fort Worth-Arlington, TX .219
New York-Newark-Jersey City, NY-NJ-PA .220
Virginia Beach-Norfolk-Newport News, VA .224
Washington-Arlington-Alexandria, DC-VA-MD-WV .224
New Orleans-Metairie, LA .229
Jacksonville, FL .234
Houston-The Woodlands-Sugar Land, TX .235
Los Angeles-Long Beach-Anaheim, CA .239
Indianapolis-Carmel-Anderson, IN .246
Chicago-Naperville-Elgin, IL-IN-WI .249
Charlotte-Concord-Gastonia, NC-SC .253
Raleigh, NC .264
Cincinnati, OH-KY-IN .266
Providence-Warwick, RI-MA .278
Milwaukee-Waukesha-West Allis, WI .284
Tampa-St. Petersburg-Clearwater, FL .286
San Francisco-Oakland-Hayward, CA .287
Orlando-Kissimmee-Sanford, FL .295
Boston-Cambridge-Newton, MA-NH .305
Buffalo-Cheektowaga-Niagara Falls, NY .305
Riverside-San Bernardino-Ontario, CA .311
Miami-Fort Lauderdale-West Palm Beach, FL .312
San Jose-Sunnyvale-Santa Clara, CA .316
Austin-Round Rock, TX .318
Kansas City, MO-KS .342
San Diego-Carlsbad, CA .343
Sacramento–Roseville–Arden-Arcade, CA .345
Minneapolis-St. Paul-Bloomington, MN-WI .345
Seattle-Tacoma-Bellevue, WA .346
Phoenix-Mesa-Scottsdale, AZ .362
Tucson, AZ .363
Portland-Vancouver-Hillsboro, OR-WA .378
San Antonio-New Braunfels, TX .388
Denver-Aurora-Lakewood, CO .396
Las Vegas-Henderson-Paradise, NV .406
Provo-Orem, UT .421
Salt Lake City, UT .473

At a glance, there are no big surprises compared with the state list. Feel free to draw your own conclusions in the comments.

* I put the data, codebook, code, and spreadsheet files on the Open Science Framework here, for both states and metro areas.

Now-you-know data graphic series

As I go about my day, revising my textbook, arguing with Trump supporters online, and looking at data, I keep an eye out for short, easily told data stories. I've been posting them on Twitter under the label Now You Know, and people seem to appreciate them, so here are some of them. Happy to discuss implications or data issues in the comments.

1. The percentage of women with a child under age 1 who are employed rose rapidly until the late 1990s and then stalled. The difference between the two lines is the percentage of such women who have a job but were not at work during the survey week, which may mean they were on leave. That gap is also no longer growing much, which might or might not be good.

2. In the long run both the dramatic rise and complete stall of women’s employment rates are striking. I’m not as agitated about the decline in employment rates for men as some are, but it’s there, too.

3. What looked in 2007 like a big shift among mothers away from paid work as an ideal — greater desire for part-time work among employed mothers, more desire for no work among at-home mothers — hasn't held up. From a repeated Pew survey. Maybe people have looked at this with other data sources, too, so we can tell whether these are sample fluctuations or more durable swings.

4. Over age 50 or so, divorce is dominated by people who've been married more than once, especially in the 65-74 range — Baby Boomers, mostly — where 60% of divorcers have been married more than once.


5. People with higher levels of education receive more of the child support they are supposed to get.


Fertility trends and the myth of Millennials

The other day I showed trends in employment and marriage rates, and made the argument that "Millennial" and other generational terms are not useful: they are imposed before the data are analyzed, and then trends are shoehorned into the categories. When you look closely, you see that the delineation of "generations" is arbitrary and usually wrong.

Here’s another example: fertility patterns. By the definition of “Millennial” used by Pew and others, the generation is supposed to have begun with those born after 1980. When you look at birth rates, however, you see a dramatic disruption within that group, possibly triggered by the timing of the 2009 recession in their formative years.

I do this using the American Community Survey, conducted annually from 2001 to 2015, which asks women whether they have had a birth in the previous year. The samples are very large: every data point shown includes at least 8,000 women, and most include more than 60,000.
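For those who want to replicate this, here is a rough sketch of the calculation in Python, assuming an IPUMS extract of women with YEAR, AGE, FERTYR (birth in the past year), and PERWT (the person weight). The FERTYR coding of 2 = yes is my reading of the IPUMS documentation, so check your own extract.

```python
import pandas as pd

def cohort_birth_rates(women: pd.DataFrame) -> pd.DataFrame:
    """Weighted birth rates by age for five-year birth cohorts."""
    w = women.copy()
    w["birth_year"] = w["YEAR"] - w["AGE"]       # approximate: the survey runs year-round
    w["cohort"] = (w["birth_year"] // 5) * 5     # 1985 means born 1985-89, etc.
    w["birth"] = (w["FERTYR"] == 2).astype(int)  # 2 = had a birth last year (assumed coding)
    return (w.groupby(["cohort", "AGE"])
             .apply(lambda g: (g["birth"] * g["PERWT"]).sum() / g["PERWT"].sum())
             .rename("birth_rate")
             .reset_index())
```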

The figure below shows the birth rates by age for women across six five-year birth cohorts. The dots on each line mark the age at which the midpoint of each cohort reached 2009. The oldest three groups are supposed to be “Generation X.” The three youngest groups shown in yellow, blue, and green — those born 1980-84, 1985-89, and 1990-94 — are all Millennials according to the common myth. But look how their experience differs!

[Figure: birth rates by age for six five-year birth cohorts, 1965-69 through 1990-94; dots mark each cohort's midpoint age in 2009]

Most of the recession’s effect on fertility was felt at young ages, as women postponed births. The oldest Millennial group was in their late twenties when the recession hit, and it appears their fertility was not dramatically affected. The 1985-89 group clearly took a big hit before rebounding. And the youngest group started their childbearing years under the burden of the economic crisis; if the curve they’re on at age 25 holds, they will not recover. Within this arbitrarily constructed “generation” is a great divergence of experience, driven by the timing of the Great Recession within members’ early childbearing years.

You could collapse these six arbitrary birth cohorts into two arbitrary “generations,” and you would see some of the difference I describe. I did that for you in the next figure, made from the same data. And you could make up some story about the character and personality of Millennials versus previous generations to fit that pattern, but you would lose a lot of information in doing so.

[Figure: birth rates by age for the same data collapsed into two groups, born 1965-79 and 1980-94]
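For completeness, here is a sketch of that collapse, reusing the same assumed variables as the cohort sketch above; the 1980 cut is just the boundary used in the figure, not a claim that it means anything.

```python
import pandas as pd

def generation_birth_rates(women: pd.DataFrame) -> pd.Series:
    """Weighted birth rates by age for two collapsed 'generations'."""
    w = women.copy()
    w["birth_year"] = w["YEAR"] - w["AGE"]
    w["generation"] = w["birth_year"].ge(1980).map({True: "Millennial", False: "Gen X"})
    w["birth"] = (w["FERTYR"] == 2).astype(int)  # assumed coding, as above
    return (w.groupby(["generation", "AGE"])
             .apply(lambda g: (g["birth"] * g["PERWT"]).sum() / g["PERWT"].sum()))
```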

Of course, any categories reduce information — even single years of age — so that’s OK. The problem is when you treat the boundaries between categories as meaningful before you look at the data — in the absence of evidence that they are real with regard to the question at hand.