Author meets critic: Margaret K. Nelson, Like Family

Like Family

These are notes for my discussion of Like Family: Narratives of Fictive Kinship, by Margaret K. Nelson, at an Author Meets Critics session of the Eastern Sociological Society meeting, February 21, 2021.

Like Family is a fascinating, enjoyable read, full of thought-provoking analysis and rich stories, with detailed scenarios that let the reader consider many possibilities, even those not mentioned in the text. It is written in "economical prose" that suggests plenty of subtext and brings to mind a wide range of questions (some of which are taken up in the wide-ranging footnotes).

The book is about people choosing relationships, and choosing to make them "like" family, and about how that means they both are and are not "like" family. In the process it tells us a lot about how people think of families altogether, in terms of bonds, obligations, language, and personal history.

In my textbook I use three definitions of family: the legal family, the personal family, and the family as an institutional arena. Nelson's subject is the personal family: the people one considers family, on the assumption or understanding that they feel the same way.

Why this matters, from a demographer's perspective: most research uses household definitions of family. That's partly a measurement convenience: households let us count each person only once (without a population registry or universal identification) and correctly attribute births to birth parents. But it comes at a cost – we assume household definitions of family too often.

We need formal, legal categories for things like incest laws and hospital rights, and the categories take on their own power. (Note there are young-adult semi-stepsiblings, with semi-together parents, living together some of the time or not, who wonder about the propriety of sexual relationships with each other.) Reality doesn't simply line up with demographic, legal, and bureaucratic categories – there is a dance between them. As the Census "relationship" categories proliferate – from 6 in 1960 to about 16 today – people both want to create new relationships (which Nelson calls a "creative" move) and want to make their relationships fit within acceptable categories (like same-sex marriage).


Methods and design

The categories investigated here – sibling-like relationships among adults, temporary adult-adolescent relationships, and informal adoptions – are so very different it’s hard to see what they have in common except some language. The book doesn’t give the formal selection criteria, so it’s hard to know exactly how the boundaries around the sample were drawn.

Nelson uses a very inductive process: "Having identified respondents and created a typology, I could refine both my specific and more general research questions" (p. 11). That's not how I think of designing research projects, which just shows the diversity of approaches among sociologists.

Spanning more than one chapter is an extended case study of Nicole and her erstwhile guardians Joyce and Don, whom she fell in with when her poorer family of origin essentially broke up. Fascinating story.

The book focuses on white, (mostly) straight, middle-class people. This is somewhat frustrating. The rationale is that they are understudied. So that's useful, but it would be more challenging – I guess a challenge for subsequent research – to more actively analyze their white, straight, middle-class position as part of the research.

Compared to what

A lot of insights in the book come from people comparing their fictive kin relationships to their other family or friend relationships. This raises a methodological issue: these are people with active fictive kin relationships, so it's a select sample from which to draw conclusions about non-fictive relationships. In an ideal world we would have a bigger, unrestricted sample, ask people about all their relationships, and then compare the fictive and non-fictive ones. It's understandable not to have that, but it needs to be wrestled with (by people doing future research).

Nelson establishes that the sibling-like relationships are neither like friendships nor like family, a third category. But that’s just for these people. Maybe people without fictive kin like this have family or friend relationships that look just like this in terms of reciprocity, obligation, closeness, etc. (Applies especially to the adult-sibling-like relationships.)

Modern contingency

Great insight with regard to adult "like-sibling" relationships: it's not just that they are not as close as "family," it's that they are not "like family" in the sense of carrying "baggage" – they don't have that "tarnished reality." And in that sense they resemble the direction family relationships are moving: more volitional, individualized, and contingent.

Does this research show that family relationships generally in a post-traditional era are fluid and ambiguous and subject to negotiation and choice? It’s hard to know how to read this without comparison families. But here’s a thought. John, who co-parents a teenage child named Ricky, says, “To me family means somebody is part of your life that you are committed to. You don’t have to like everything about them, but whatever they need, you’re willing to give them, and if you need something, you’re willing to ask them, and you’re willing to accept if they can or can’t give it to you” (p. 130). It’s an ideal. Is it a widespread ideal? What if non-fictive family members don’t meet that ideal? The implication may be they aren’t your family anymore. Which could be why we are seeing so many people rupturing their family of origin relationships, especially young adults breaking up with their parents.

It reminds me of what happened with marriage half a century ago, where people set a high standard, and defined relationships that didn’t meet it as “not a marriage.” Or when people say abusive families aren’t really families. Conservatives hate this, because it means you can “just” walk away from bad relationships. There are pros and cons to this view.

Nelson writes at the end of the chapter on informal parents, “The possibility is always there that either party will, at some point in the near or distant future, make a different choice. That is both the simple delight and the heartrending anxiety of these relationships” (p. 133). We can’t know, however, how unique such feelings are to these relationships – I suspect not that much. This sounds so much like Anthony Giddens and the “pure” relationships of late modernity.

This contingency comes up a few times, and I always have the same question. Nelson writes in the conclusion, “Those relationships feel lighter, more buoyant, more simply based in deep-seated affection than do those they experience with their ‘real’ kin.” But that tells us how these people feel about real kin, not how everyone does. It raises a question for future research. Maybe outside this population lots of people feel the same way about their “real” kin (ask the growing number of parents who have been “unfriended” by their adult children).

I definitely recommend this book, to read, teach, and use to think about future research.

Note: In the discussion Nelson replied that most people have active fictive-kin relationships, so this sample is not so select in that respect.

Basic self-promotion

Five years ago today I wrote a post here called "Basic self promotion." There has been a lot of work and advice on this subject in the intervening years (including books, some of which I reviewed here), so this is not as necessary as it was then. But it holds up pretty well, with some refreshing, so here is a lightly revised version. As always, I'm happy to have your feedback and suggestions in the comments, including other things to read.


Present yourself. PN Cohen photo: https://flic.kr/p/2hyYzqs.

If you won’t make the effort to promote your research, how can you expect others to?

These are some basic thoughts for academics promoting their research. You don’t have to be a full-time self-promoter to improve your reach and impact, but the options are daunting and I often hear people say they don’t have time to do things like run a Twitter account or write for blogs and other publications. Even a relatively small effort, if well directed, can help a lot. Don’t let the perfect be the enemy of the good. It’s fine to do some things pretty well even if you can’t do everything to your ideal standard.

It’s all about making your research better — better quality, better impact. You want more people to read and appreciate your work, not just because you want fame and fortune, but because that’s what the work is for. I welcome your comments and suggestions below.

Present yourself

Make a decent personal website and keep it up to date with information about your research, including links to freely available copies of your publications (see below). It doesn’t have to be fancy. I’m often surprised at how many people are sitting behind years-old websites. (I recently engaged Brigid Barrett, who specializes in academics’ websites, to redesign mine.)

Very often people who come across your research somewhere else will want to know more about you before they share, report on, or even cite it. Your website gives your work more credibility. Has this person published other work in this area? Taught related courses? Gotten grants? These are things people look for. It’s not vain or obnoxious to present this information, it’s your job. I recommend a good quality photo, updated at least every five years.

Make your work available

Let people read the actual research. For work not yet "published" in journals, post drafts when they are ready for readers (a good time is when you are ready to send a paper to a conference or journal, or earlier if you are comfortable sharing it). This helps you establish precedence (planting your flag), generate feedback, and attract readers. It's best to use a disciplinary archive such as SocArXiv (which, as the director, I highly recommend), your university repository, or both. This will improve how your papers show up in web searches (including Google Scholar), get them indexed for things like citation or grant analysis, and ensure they are archived. You can also get a digital object identifier (DOI), which allows them to enter the great stream of research metadata. (See the SocArXiv FAQ for more answers.)

When you do publish in journals, prefer open-access journals, because it's the right thing to do and more people can read your work there. If a paper is paywalled, share a preprint or postprint version. On your website or social media feeds, please don't just link to the paywalled versions of your papers: that's the click of death for someone just browsing around, and it's elitist and antisocial besides. You can almost always put up a preprint without violating your agreements (ideally you wouldn't publish anywhere that won't let you do this). To see the self-archiving policies of different journals, check out the simple database at SHERPA/RoMEO, or, of course, the agreement you signed with the journal.

I oppose private sites like Academia.edu, ResearchGate, and SSRN. These are just private companies making a profit from doing what your university and its library, and nonprofits like SocArXiv, are already doing for the public good. Your paper will not be discovered more if it is on one of these sites.

I’m not an open access purist, believe it or not. (If you got public money to develop a cure for cancer, that’s different, then I am a purist.) Not everything we write has to be open access (books, for example), but the more it is the better, especially original research. This is partly an equity issue for readers, and partly to establish trust and accountability in all of our work. Readers should be able to see our work product – our instruments, our code, our data – to evaluate its veracity (and to benefit their own work). And for the vast majority of readers who don’t want to get into those materials, the fact they are there increases our collective accountability and trustworthiness. I recommend using the Open Science Framework, a free, nonprofit platform for research sharing and collaboration.

Actively share your work

In the old days we used to order paper reprints of papers we published and literally mail them to the famous and important people we hoped would read and cite them. Nowadays you can email them a PDF. Sending a short note that says, “I thought you might be interested in this paper I wrote” is normal, reasonable, and may be considered flattering. (As long as you don’t follow up with repeated emails asking if they’ve read it yet.)

Social media

If you’re reading this, you probably use at least basic social media. If not, I recommend it. This does not require a massive time commitment and doesn’t mean you have to spend all day doomscrolling — you can always ignore them. Setting up a public profile on Twitter or a page on Facebook gives people who do use them all the time a way to link to you and share your profile. If someone wants to show their friends one of my papers on Twitter, this doesn’t require any effort on my part. They tweet, “Look at this awesome new paper @familyunequal wrote!” (I have some vague memory of this happening with my papers.) When people click on the link they go to my profile, which tells them who I am and links to my website.

Of course, a more active social media presence does help draw people into your work, which leads to exchanging information and perspectives, getting and giving feedback, supporting and learning from others, and so on. Ideally. But even low-level attention will help: posting or tweeting links to new papers, conference presentations, other writing, etc. No need to get into snarky chitchat and following hundreds of people if you don’t want to. To see how sociologists are using Twitter, you can visit the list I maintain, which has more than 1600 sociologists. This is useful for comparing profile and feed styles.

Other writing

People who write popular books go on book tours to promote them. People who write minor articles in sociology journals might send out some tweets, or share them with their friends on Facebook. In between are lots of other places you can write something to help people find and learn about your work. I still recommend a blog format, easily associated with your website, but this can be done different ways. As with publications themselves, there are public and private options, open and paywalled. Open is better, but some opportunities are too good to pass up – and it’s OK to support publications that charge subscription or access fees, if they deserve it.

There are also good organizations now that help people get their work out. In my area, for example, the Council on Contemporary Families is great (I'm a former board member), producing research briefs related to new publications and helping to bring them to the attention of journalists and editors. Others work with the Scholars Strategy Network, which helps people place op-eds, or with the university-affiliated site The Society Pages. In addition, there are blogs run by sections of the academic associations, and various group blogs. And there is Contexts (which I used to co-edit), the general-interest magazine of ASA, where they would love to hear proposals for how you can bring your research out into the open (for the magazine or their blog).


For more on the system we use to get our work evaluated, published, transmitted, and archived, I’ve written this report: Scholarly Communication in Sociology: An introduction to scholarly communication for sociology, intended to help sociologists in their careers, while advancing an inclusive, open, equitable, and sustainable scholarly knowledge ecosystem.

Pandemic Baby Bust situation update

[Update: California released revised birth numbers, which added a trivial number to previous months, except December, where they added a few thousand, so now the state has a 10% decline for the month, relative to 2019. I hadn’t seen a revision that large before.]

Lots of people are talking about falling birth rates — even more than they were before. First a data snapshot, then a link roundup.

For US states, we have numbers through December for Arizona, California, Florida, Hawaii, and Ohio. They are all showing substantial declines in birth rates from previous years. Most dramatically, California just posted December numbers, and revised the numbers from earlier months, showing a 19% drop in December relative to 2019 (see the update above for the subsequent revision to 10%). After adding about 500 births to November and a few to October, the drop in those two months is now 9%. The state's overall drop for the year is now 6.2%. These are, to put it mildly, very large declines in historical terms. Even if California adds 500 births to December later, it will still be down 18%. Yikes. One thing we don't yet know is how much of this is driven by people moving around, rather than by changes in birth rates. California already had more people leaving the state in 2019, before the pandemic, than in earlier years, and presumably there has been essentially no international immigration in 2020. Hawaii also has some "birth tourism," which probably didn't happen in 2020, and it has had a bad year for tourism generally. So much remains to be learned.
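
For readers who want to check or reproduce the year-over-year comparisons behind these figures, here is a minimal sketch of the calculation. It assumes a simple table of monthly birth counts by state; the file name and column names are hypothetical stand-ins, not the actual data files linked below.

```python
import pandas as pd

# Hypothetical input: one row per state-month, with columns
# state, year, month, births (compiled from state health department releases).
births = pd.read_csv("state_monthly_births.csv")

# Put each year's counts side by side for the same state and month.
wide = births.pivot_table(index=["state", "month"], columns="year", values="births")

# Percent change for each month of 2020 relative to the same month of 2019.
wide["pct_change"] = (wide[2020] - wide[2019]) / wide[2019] * 100

# Example: December declines by state, sorted from the largest drop.
december = wide.xs(12, level="month")["pct_change"].sort_values()
print(december.round(1))
```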

Here are the state trends (figure updated Feb 18):

[Figure: births by month, 2018–2020, small multiples by US state]

From the few non-US places I'm getting monthly data for so far, the trend is not as dramatic, although British Columbia posted a steep drop in December. I don't know why I keep hoping Scotland will settle down its numbers… (updated Feb 18):

[Figure: births by month, 2018–2020, small multiples by country or region]

Here are some recent items from elsewhere on this topic:

  • Some of that coverage led to local TV, including this piece from KARE11 in Minneapolis.

Good news / bad news clarification

There's an unfortunate piece of editing in the NBCLX piece, where I'm quoted like this: "Well, this is a bad situation. [cut] The declines we're seeing now are pretty substantial." To clarify — and I said this in the interview, but accidents happen — I am not saying the decline in births is a bad situation; I'm saying the pandemic is a bad situation, which is causing a decline in births. Unfortunately, that distinction has sometimes been lost, as when the Independent quoted the piece (without talking to me) and said, "Speaking to the outlet, Philip Cohen, a sociologist and demographer at the University of Maryland, called the decline a 'bad situation'."


The data for this project is available here: osf.io/pvz3g/. You’re free to use it.


For more on fertility decline, including whether it’s good or bad, and where it might be going, follow the fertility tag.


Acknowledgement: There has been a lot of good conversation about this on Twitter, where there is great demography going on. Also, Lisa Carlson, a graduate student at Bowling Green State University who works in the National Center for Family and Marriage Research, pointed me toward some of this state data, which I appreciate.

Host, parasite, and failure at the colony level: COVID-19 and the US information ecosystem

Trump campaign attempts to remove satirical cartoon from online retailer (The Guardian)

This cartoon is offensive. And yet.


A few months ago I did some reading about viruses and other parasites, inspired by the obvious, but also by those ants that get commandeered by cordyceps fungi, as seen in this awesome David Attenborough video:

Besides the incredible feat of programming ants to disseminate fungus spores, the video reveals two other astounding facts about this system. First, worker ants from afflicted colonies selflessly identify and remove infected ants and dump their bodies far away, reflecting intergenerational genetic training as well as the ability to gather and process the information necessary to make the diagnosis and act on it. And second, there are many, many cordyceps species, each evolved to prey upon only one species, reflecting a pattern of co-evolution between host and parasite.

This led me to read about colony defenses in general, including not just ants but also wasps and termites that leave chemical protection for future generations, and bees getting together to create hive fevers to ward off parasitic infections. I didn't find a video of a hive fever exactly, but this one is similar: bees using their collective body temperature to cook a predatory hornet to death:

Incredible. That got me thinking about how information management and dissemination is vital to colony-level defenses against parasites. They need to process and transmit information to work together in the arms race against parasites (especially viruses) that usually evolve much more rapidly than they do.

And you may know where this is going: how the US failed against SARS-CoV-2. In an information arms race – a life-and-death struggle against a parasitic virus that mutates far faster than we can react (who knows how many evolutionary trials it took to produce SARS-CoV-2?) – this kind of efficient information system is what we need. And it worked in some ways, as humanity identified the virus and shared the data and code necessary to take action against it. But clearly we failed in other ways: communicating with our fellow citizens, dislodging the disinformation and misinformation that clouded their understanding and led so many to sacrifice themselves at the behest of a corrupt political organization and its demented leader.

Is this social evolution, I asked (despairingly), in which the Chinese system of government proves its superiority for survival at the colony level, while the US democratic system chokes on its own infected lungs? Worse, is the virus programming us to exacerbate our own weaknesses – yanking our social media chains and our slavery-era political institutions – like the rabies virus, which infects the brain and then explodes out through the salivary glands of a zombified attack animal? Colonies of ants rise or fall based on how they respond to parasites, which themselves are evolving to control ant behavior, as the two evolve together. How exceptional are humans? Maybe we just do it faster, in social evolutionary time, rather than across many generations of breeding. Fascinating, but kind of dark. lol.

Anyway, naturally my concern is with information systems and scholarly communication. Human success against the virus has come from the rapid generation and dissemination of science and public health information (including preprints and data sharing), and failure came from disinformation and information corruption. Picture Dr. Birx in the role of the rabid raccoon, watching herself lose her grip on scientific reality as the authoritarian leader douses the public health information system with bleach and sets it on fire with an ultraviolet ray gun "inside the body."

So I wrote a short paper titled "Host, parasite, and failure at the colony level: COVID-19 and the US information ecosystem," and posted it on SocArXiv: socarxiv.org/4hgam.*



* I barely took high school biology. In college I took “Climate and Man,” and “Biology of Human Affairs.” That’s pretty much it for my life sciences training, so don’t take my word for it. Comments welcome.

Family Demography Seminar syllabus, 2021 edition

PN Cohen photo: https://flic.kr/p/2jw1ZhA.

This week it's back to teaching Family Demography, a graduate seminar in the sociology department. This year a majority of the students are from other departments around campus, and of course the whole thing will be online. So we'll see! I added a few weeks of pandemic-related readings, and some things I had never read before. Feel free to follow along. Feedback welcome.

This is the schedule, with readings. A lot of them are paywalled, I’m sorry to say, but you might have access to them. (You can always try sci-hub, which has stolen most academic articles for you, so you don’t have to steal them yourself.) 

Family Demography

January 27

Introduction

Cohen, Philip N. 2021. “The Pandemic and The Family.” Supplement to The Family: Diversity, Inequality, and Social Change (3e). New York: W. W. Norton & Company. 

February 3

Theoretical perspectives in demography

Bianchi, Suzanne M. 2014. “A Demographic Perspective on Family Change.” Journal of Family Theory & Review 6 (1): 35–44. https://doi.org/10.1111/jftr.12029. (preprint: http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC4465124&blobtype=pdf).

Sigle, Wendy. 2016. “Why Demography Needs (New) Theories.” In Changing Family Dynamics and Demographic Evolution: The Family Kaleidoscope, edited by Dimitri Mortelmans, Koenraad Matthijs, Elisabeth Alofs, and Barbara Segaert. Cheltenham, UK: Edward Elgar Publishing. http://eprints.lse.ac.uk/86429/1/Sigle_Demography%20needs%20theories_2018.pdf.

Cohen, Philip N. 2021. The Family: Diversity, Inequality, and Social Change (3e). New York: W. W. Norton & Company. Chapter 1, “A Sociology of the Family.”

February 10

Demographic transition

Thornton, Arland. 2001. “The Developmental Paradigm, Reading History Sideways, and Family Change.” Demography 38 (4): 449–65. https://doi.org/10.2307/3088311.

Bongaarts, John. 2009. “Human Population Growth and the Demographic Transition.” Philosophical Transactions of the Royal Society B-Biological Sciences 364(1532):2985–90. 10.1098/rstb.2009.0137.

Pande, Rohini Prabha, Sophie Namy, and Anju Malhotra. 2020. “The Demographic Transition and Women’s Economic Participation in Tamil Nadu, India: A Historical Case Study.” Feminist Economics 26(1):179–207. https://umd.instructure.com/files/60782517/

February 17

Second demographic transition

Sassler, Sharon, and Daniel T. Lichter. 2020. “Cohabitation and Marriage: Complexity and Diversity in Union-Formation Patterns.” Journal of Marriage and Family 82(1):35–61. https://doi.org/10.1111/jomf.12617.

Cohen, Philip N. 2011. “Homogamy Unmodified.” Journal of Family Theory & Review 3 (1): 47–51.

Schneider, Daniel, Kristen Harknett, and Matthew Stimpson. 2018. “What Explains the Decline in First Marriage in the United States? Evidence from the Panel Study of Income Dynamics, 1969 to 2013.” Journal of Marriage and Family 80(4):791–811. https://doi.org/10.1111/jomf.12481.

Zaidi, Batool, and S. Philip Morgan. 2017. “The Second Demographic Transition Theory: A Review and Appraisal.” Annual Review of Sociology 43(1):473–92. https://doi.org/10.1146/annurev-soc-060116-053442.

February 24

U.S. History

Ruggles, Steven. 2015. “Patriarchy, Power, and Pay: The Transformation of American Families, 1800–2015.” Demography 52:1797–1823. (His lecture version at PAA.)

Bloome, Deirdre, and Christopher Muller. 2015. “Tenancy and African American Marriage in the Postbellum South.” Demography 52 (5): 1409–30. https://doi.org/10.1007/s13524-015-0414-1.

Cherlin, Andrew J. 2020. “Degrees of Change: An Assessment of the Deinstitutionalization of Marriage Thesis.” Journal of Marriage and Family 82(1):62–80. https://doi.org/10.1111/jomf.12605.

Cohen, Philip N. 2021. The Family: Diversity, Inequality, and Social Change (3e). New York: W. W. Norton & Company. Chapter 2, “History.” 

March 3 [FIRST PAPER DUE]

U.S. Today

Guzzo, Karen Benjamin, and Sarah R. Hayford. 2020. “Pathways to Parenthood in Social and Family Contexts: Decade in Review, 2020.” Journal of Marriage and Family 82(1):117–44. https://doi.org/10.1111/jomf.12618.

Goldscheider, Frances, Eva Bernhardt, and Trude Lappegard. 2015. “The Gender Revolution: A Framework for Understanding Changing Family and Demographic Behavior.” Population and Development Review 41 (2): 207–39. doi:10.1111/j.1728-4457.2015.00045.x.

Smock, Pamela J., and Christine R. Schwartz. 2020. “The Demography of Families: A Review of Patterns and Change.” Journal of Marriage and Family 82(1):9–34. doi: 10.1111/jomf.12612. https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jomf.12612.

March 10

Pandemic fertility

Currie, Janet, and Hannes Schwandt. 2014. “Short- and Long-Term Effects of Unemployment on Fertility.” Proceedings of the National Academy of Sciences 111 (41): 14734–39. doi:10.1073/pnas.1408975111.

Luppi, Francesca, Bruno Arpino, and Alessandro Rosina. 2020. “The Impact of COVID-19 on Fertility Plans in Italy, Germany, France, Spain, and the United Kingdom.” Demographic Research 43(47):1399–1412. doi: 10.4054/DemRes.2020.43.47.

Wilde, Joshua, Wei Chen, and Sophie Lohmann. 2020. COVID-19 and the Future of US Fertility: What Can We Learn from Google? Working Paper. 13776. IZA Discussion Papers. https://www.econstor.eu/handle/10419/227303

Wagner, Sander, Felix C. Tropf, Nicolo Cavalli, and Melinda C. Mills. 2020. “Pandemics, Public Health Interventions and Fertility: Evidence from the 1918 Influenza.” https://osf.io/preprints/socarxiv/f3hv8/

March 17

Spring Break

March 24

COVID-19 and race/ethnic inequality

Sweeney, Megan M., and R. Kelly Raley. 2014. “Race, Ethnicity, and the Changing Context of Childbearing in the United States.” Annual Review of Sociology 40:539–58.

Hardy, Bradley L., and Trevon D. Logan. 2020. “Racial Economic Inequality amid the COVID-19 Crisis.” The Hamilton Project, Essay 17:2020. https://www.brookings.edu/wp-content/uploads/2020/08/EA_HardyLogan_LO_8.12.pdf

Vargas, Edward D., and Gabriel R. Sanchez. 2020. “COVID-19 Is Having a Devastating Impact on the Economic Well-Being of Latino Families.” Journal of Economics, Race, and Policy 3(4):262–69. 10.1007/s41996-020-00071-0.

Snowden, Lonnie R., and Genevieve Graaf. 2021. “COVID-19, Social Determinants Past, Present, and Future, and African Americans’ Health.” Journal of Racial and Ethnic Health Disparities 8(1):12–20. 10.1007/s40615-020-00923-3.

Reinhart, Eric, and Daniel L. Chen. 2020. “Incarceration and Its Disseminations: COVID-19 Pandemic Lessons From Chicago’s Cook County Jail.” Health Affairs 39(8):1412–18. 10.1377/hlthaff.2020.00652.

March 31

China and fertility policy

Bongaarts, John, and Christophe Z. Guilmoto. 2015. “How Many More Missing Women? Excess Female Mortality and Prenatal Sex Selection, 1970–2050.” Population and Development Review 41 (2): 241–69. doi:10.1111/j.1728-4457.2015.00046.x.

Wang Feng, Baochang Gu, and Yong Cai. 2016. “The End of China’s One-Child Policy.” Studies in Family Planning 47 (1): 83–86. doi:10.1111/j.1728-4465.2016.00052.x.

Shen, Ke, Feng Wang, and Yong Cai. 2020. “Government Policy and Global Fertility Change: A Reappraisal.” Asian Population Studies 16(2):145–66. https://umd.instructure.com/files/60812839/

Wang, Feng. 2017. “Is Rapid Fertility Decline Possible? Lessons from Asia and Emerging Countries.” Pp. 435–51 in Africa’s population: In search of a demographic dividend. Springer. https://umd.instructure.com/files/60848754/

April 7 [SECOND PAPER DUE]

Divorce

Kennedy, Sheela, and Steven Ruggles. 2014. “Breaking Up Is Hard to Count: The Rise of Divorce in the United States, 1980–2010.” Demography 51 (2): 587–98.

Cohen, Philip N. 2019. “The Coming Divorce Decline.” Socius 5:2378023119873497. 10.1177/2378023119873497.

Raley, R. Kelly, and Megan M. Sweeney. 2020. “Divorce, Repartnering, and Stepfamilies: A Decade in Review.” Journal of Marriage and Family 82(1):81–99. https://doi.org/10.1111/jomf.12651.

April 14

Policy, race, and nonmarital births

England, Paula. 2016. “Sometimes the Social Becomes Personal: Gender, Class, and Sexualities.” American Sociological Review 81 (1): 4–28.

Cohen, Philip N. 2015. “Maternal Age and Infant Mortality for White, Black, and Mexican Mothers in the United States.” Sociological Science 3 (January): 32–38.

Geronimus, Arline T. 2003. “Damned If You Do: Culture, Identity, Privilege, and Teenage Childbearing in the United States.” Social Science & Medicine 57 (5): 881–93.

Cohen, Philip N. 2018. Enduring Bonds: Families and Modern Inequality, Chapter: “Marriage promotion [Excerpts]” 24pp. [to be provided]

Smith, Imari Z., Keisha L. Bentley-Edwards, Salimah El-Amin, and William Darity, Jr. 2018. “Fighting at Birth: Eradicating the Black-White Infant Mortality Gap.” Samuel DuBois Cook Center on Social Equity and Insight Center for Community Economic Development. https://socialequity.duke.edu/wp-content/uploads/2019/12/Eradicating-Black-Infant-Mortality-March-2018.pdf

April 21

More U.S. inequality issues

Brady, David, Ryan M. Finnigan, and Sabine Hübgen. 2017. “Rethinking the Risks of Poverty: A Framework for Analyzing Prevalences and Penalties.” American Journal of Sociology 123 (3): 740–86. https://doi.org/10.1086/693678.

Enns, Peter K., Youngmin Yi, Megan Comfort, Alyssa W. Goldman, Hedwig Lee, Christopher Muller, Sara Wakefield, Emily A. Wang, and Christopher Wildeman. 2019. “What Percentage of Americans Have Ever Had a Family Member Incarcerated?: Evidence from the Family History of Incarceration Survey (FamHIS).” Socius 5:2378023119829332. doi: 10.1177/2378023119829332.

Cooper, Marianne, and Allison J. Pugh. 2020. “Families Across the Income Spectrum: A Decade in Review.” Journal of Marriage and Family 82(1):272–99. https://doi.org/10.1111/jomf.12623.

April 28

Family structure and child wellbeing

Regnerus, Mark. 2012. “How Different Are the Adult Children of Parents Who Have Same-Sex Relationships? Findings from the New Family Structures Study.” Social Science Research 41 (4): 752–70. doi:10.1016/j.ssresearch.2012.03.009.

Rosenfeld, Michael J. 2015. “Revisiting the Data from the New Family Structure Study: Taking Family Instability into Account.” Sociological Science 2 (September): 478–501. doi:10.15195/v2.a23.

Cohen, Philip N. 2018. Enduring Bonds: Families and Modern Inequality, Chapter: “Marriage equality in social science and the courts.” 19pp. [to be provided]

Perkins, Kristin L. 2019. “Changes in Household Composition and Children’s Educational Attainment.” Demography, January. https://doi.org/10.1007/s13524-018-0757-5.

May 5 [THIRD PAPER DUE]

Maternal mortality

MacDorman, Marian F., Eugene Declercq, and Marie E. Thoma. 2017. “Trends in Maternal Mortality by Socio-Demographic Characteristics and Cause of Death in 27 States and the District of Columbia.” Obstetrics and Gynecology 129 (5): 811–18. https://doi.org/10.1097/AOG.0000000000001968.

MacDorman, Marian F., Eugene Declercq, and Marie E. Thoma. 2018. “Trends in Texas Maternal Mortality by Maternal Age, Race/Ethnicity, and Cause of Death, 2006-2015.” Birth 45 (2): 169–77. https://doi.org/10.1111/birt.12330.

McMillan Cottom, Tressie. 2019. “Why Are Pregnant Black Women Viewed as Incompetent?” Time. January 8, 2019. http://time.com/5494404/tressie-mcmillan-cottom-thick-pregnancy-competent/.

Molina, Rose L., and Lydia E. Pace. 2017. “A Renewed Focus on Maternal Health in the United States.” New England Journal of Medicine 377 (18): 1705–7. https://doi.org/10.1056/NEJMp1709473.

For reference: World Health Organization. 2014. “Trends in maternal mortality: 1990 to 2013. Estimates by WHO, UNICEF, UNFPA, The World Bank and the United Nations Population Division.” http://documents.worldbank.org/curated/en/937281468338969369/pdf/879050PUB0Tren00Box385214B00PUBLIC0.pdf.

Policy implications are discussed (often to poor effect, in sociology journals)

Commentary, data, suggestions.

Watch where you’re going. (PNC photo: https://flic.kr/p/2gRHfd5.)

The ritualistic invocation of “policy implications” in sociology writing is puzzling. I don’t know its origin, but it appears to have come (like so much else that we cherish because we despise ourselves) from economists. The Quarterly Journal of Economics was the first (in the JSTOR database) to use the term in an abstract, including it 11 times over the 1950s and 1960s before the first sociology journal (Journal of Health and Social Behavior) finally followed suit in 1971.

That 1971 article projected a tone that persists to this day. In a paragraph tacked onto the end of the paper, Kohn and Mercer speculated that inflated claims about the dangers of marijuana “may actually contribute to dangerous forms of drug abuse among less well-educated youth” (although the paper was a survey of college students). “If this is the case,” they continued, “then the best corrective may be to revise law, social policy, and official information in line with the best current scientific knowledge about drugs and their effect.” The analysis in the paper had nothing to do with anti-drug policy, instead pursuing an interesting empirical examination of the relationship between ideology (rebellious versus authoritarian) and drug use. The “implications” are vague and unconnected to any actually existing policy debate (and none is cited). Because they are in this case both banal and hopelessly idealistic — intellectual bedfellows that find themselves miserably at home in the sociological space many in the public deride as “academic” — it’s hard to imagine the paper having any policy effect. Not that there’s anything wrong with that.

Fifty years later, “policy implications” has become an institution in academic sociology — by no means universal, but a fixed feature of the landscape, demanded by some editors, reviewers, advisors, and funders. The prevalence of this trope coincides with the imperative for “engagement” (which I’ve written about previously) driven both by our internal sense of mission and our capitulation to external pressure to justify the existence of our work. These are admirable impulses, but they’re poorly served by many of our current practices. I hope this discussion of “policy implications,” and the suggestions that follow, help push us toward more productive responses.

How it’s done

Most sociologists don’t do a lot of policy work. It’s not our language or social or professional milieu, and it’s often not part of our formal training. So what do we mean, in theory and practice, when we offer “policy implications” for our research? There is a very wide range of applications, from evaluations of specific local policies to critiques of state power itself. I collected a lot of examples, which I’ll describe, but first a very prominent one, from “Social Conditions as Fundamental Causes of Health Inequalities: Theory, Evidence, and Policy Implications,” by Phelan, Link, and Tehranifar (2010). Their promise of policy implications is right in the title. From the policy implications section, here is a list of policies intended to reduce inequality in social conditions:

“Policies relevant to fundamental causes of disease form a major part of the national agenda, whether this involves the minimum wage, housing for homeless and low-income people, capital-gains and estate taxes, parenting leave, social security, head-start programs and college-admission policies, regulation of lending practices, or other initiatives of this type.”

Then, in the conclusion, they explain that in addition to leveling inequalities in social condition, we need policies that “minimiz[e] the extent to which socioeconomic resources buy a health advantage” — in the U.S. context, this is interpretable as universal healthcare.

These are almost broad enough — considered together — to constitute a worldview (or perhaps a party platform) rather than a specific policy prescription. If this were actual policy analysis, we would have to be concerned with, for example, the extent to which policies to raise the minimum wage, raise taxes, house the homeless, and expand educational opportunity actually produce reductions in inequality, and which of these is most effective, or important, or feasible, and so on. But this is not policy analysis, and none is cited. These are one step down from documenting wage disparities and offering socialism as “policy implications.” This is a review paper, mostly theory and a summary of existing evidence — which makes it more suitable for such statements than the implications attached to many narrow empirical papers (see below). It has been very influential, reaching thousands of students and researchers, and maybe people in policy settings as well (one could try to assess that), by helping to establish the connection between health inequality and inequality on other dimensions. Important work. But the way I read the term, this is too broad to be reduced to “policy implications” — it’s more like social implications, or theoretical implications.

127 more examples

To generalize about the practice of “policy implications,” I collected some data. I used a “topic” search in Web of Science, which searches title, abstract, and keywords, for the phrase “policy implications,” in articles from 2010 to 2020. This tree map from WoS shows the disciplinary breakdown of the journals with the search term, which remains dominated by economics.

I chose the sociology category, then weeded out journals that were very interdisciplinary (like Journal of Marriage and Family), and some articles that turned out to be false positives, and ended up with 127 articles in these 52 journals.*

First I read all the abstracts and came up with a three-category code for abstracts that (1) had specific policy implications, (2) made general policy pronouncements, or (3) just promised policy implications. Here are some details.

Of the 127 abstracts, only two had what I read as specific policy implications. Martin (2018) wrote, “for dietary recommendations to transform eating practices it is necessary to take into account how, while cooking, actors draw on these various forms of valuation.” And Andersen and van de Werfhorst (2010) wrote, “strengthening the skill transparency of the education system by increasing secondary and tertiary-level differentiation may strengthen the relationship between education and occupation.” These aren’t as specific as particular pieces of legislation or policies, but close enough.

I put 29 papers in the general pronouncements category. For example, I put Phelan, Link, and Tehranifar (2010) in this category. In another, Wiborg and Hansen (2018) wrote that their findings implied that “increasing equal educational opportunities do not necessarily lead to greater opportunities in the labor market and accumulation of wealth” (reading inside the paper confirmed this is the extent of the discussion). This from Stoilova, Ilieva-Trichkova, and Bieri (2020) is archetypal: “The policy implications are to more closely consider education in the transformation of gender-sensitive norms during earlier stages of child socialization and to design more holistic policy measures which address the multitude of barriers individuals from poor families and ethnic/migrant background face” (reading inside the paper, there are several other statements at the same level). I read three other papers in this category and found similar general implications, e.g., “if the policy goal is to enhance the bargaining position of labour and increase its share of income, spending policy should prioritise the expenditures on the public sector employment” (Pensiero 2017).

“Policy implications are discussed”

The largest category, 97 papers (76%), offered no policy implications in the abstract, but rather offered some version of “policy implications are discussed.” It is an odd custom, to mention the existence of a section in the paper without divulging its contents. Anyway, to get a better sense of what “policy implications are discussed” means, I randomly sampled 10 of the papers in this category to read the relevant section. (I have no beef with these papers or their authors; they were selected randomly, and I’m only commenting on what may be the least important aspect of their contributions.)

The first category among these, with 5 of the 10 papers, is those without substantive policy contributions. Some have banal statements at the end, which the author and most readers probably already believed, such as, “If these results are replicated, programs should be implemented that will solicit the help of grandparents in addition to parents” (Liu 2018). I also include here Visser et al. (2013), who conclude that their “findings show general support for basic ecological perspectives of fear of crime and feelings of unsafety,” e.g., that reducing crime in the absence of better social protection will not improve levels of fear and feelings of unsafety. I coded this one as without substantive policy contribution because that’s a big claim about the entire state policy structure, which would require much more evidence to adjudicate, much less implement, and the paper offers only a small empirical nudge in one direction (which, again, is fine!).

Several in this category offered essentially no policy implications. This includes Wang (2010), who states at the outset that “the question of motives for private transfers is one with important policy implications” for public transfer programs like food stamps and social security, but never comes back to discuss policies relevant to the results. And Barrett and Pollack (2011), who recommend that health practitioners develop better understanding of the issues raised and that “contemporary sexual civil rights efforts” pay more attention to sexual discrimination. Finally, Lepianka (2015) reports on media depictions of poverty and related policy, but doesn’t offer any implications of the study itself for policy. So, half of these had abstracts that were overpromising in terms of policy.

The other 5 papers do include substantive policy implications, explored to varying degrees. One is hard-hitting but brief: Shwed et al. (2018), whose analysis has direct implications which they do not thoroughly discuss. Their “unequivocal” result is that “multicultural schools, with their celebration of difference, entail a cost in terms of social integration compared to assimilationist schools—they enhance ethnic segregation in friendship networks. … While empowering minorities, it enhances social closure between groups.” The empirical analysis they did could no doubt be used as part of a policy analysis on the question of cultural orientation of schools in Israel.

Three offer sustained policy discussions, including the very specific: an endorsement of prison-based dog training programs (Antonio et al. 2017); a critique of sow-housing policy in the European Union (de Krom 2015); and recommendations for environmental lending practices at the World Bank (Sommer et al. 2017). The last one qualifies, albeit at a very macro level: Gauchat et al.’s (2011) analysis of economic dependence on military spending in metropolitan areas, the implications of which surpass everyday policy debates but are of course relevant.

To summarize my reading, with percentages based on extrapolating from my subsample (so, wide confidence intervals): 23% of papers promising policy implications had none, 38% had either vague statements or general statements that did not rely on empirical findings in the paper, and the remaining 40% had substantive policy discussion and/or specific recommendations.
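
For anyone who wants to check that arithmetic, here is a minimal sketch of the extrapolation, using the category counts reported above; the whole-percentage rounding is mine, and the wide uncertainty from the 10-paper subsample still applies.

```python
# Counts from the coding described above.
total = 127              # articles promising "policy implications" in the abstract
specific = 2             # abstracts with specific policy implications
general = 29             # abstracts with general policy pronouncements
discussed = 97           # abstracts that only said implications "are discussed"

# Breakdown of the 10 randomly sampled "discussed" papers.
sample_none = 3          # essentially no policy implications in the paper
sample_banal = 2         # only banal or vague statements
sample_substantive = 5   # substantive discussion and/or specific recommendations

# Extrapolate the sample proportions to all 97 "discussed" papers.
none = discussed * sample_none / 10
vague_or_general = general + discussed * sample_banal / 10
substantive = specific + discussed * sample_substantive / 10

for label, count in [("none", none),
                     ("vague or general", vague_or_general),
                     ("substantive", substantive)]:
    print(f"{label}: {count / total:.0%}")
# Prints roughly 23%, 38%, and 40%; rounding means they don't sum to exactly 100%.
```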

This is a quick coding and not validated. Others might treat differently papers that report an effect and then recommend changing the prevalence of the independent variable — e.g., poverty causes poor health; policies should reduce poverty — which I coded as not substantive. For example, I coded this from Thoits (2010) as not substantive or specific: “policies and programs should target children who are at long-term health risk due to early exposure to poverty, inadequate schools, and stressful family circumstances.” You could say, “policies should attempt to make life better,” but it’s not clear you need research for that. Anyway, my own implications (below) don’t depend on a precise accounting.

Implications

I am really, really not saying these are bad papers, or wrong to do what they did. I am not criticizing them, but rather the institutional convention that classifies the attempt to make our research relevant as “policy implications,” even when we have nothing specific to say about real policies, and then rewards sociologists for shoehorning their conclusions into such a frame.

Let me give an example of an interesting and valuable paper that is burdened by its policy implications. “The impact of parental migration on depression of children: new evidence from rural China,” by Yue et al. (2020) used a survey of families in China to assess the relationship between parental migration, remittances, household labor burdens, parent-child communication, and children’s symptoms of depression. After regression models with direct and indirect effects on children’s depression, including both children who were “left behind” by migrating parents and those who weren’t, they conclude: “non-material resources (parent-child communication, parental responsiveness, and self-esteem) play a much more important role in child’s depression than material resources (remittances).” Interesting result. Seems well done, with good data. The policy suggestions that follow are to encourage parent-child communication (e.g., through local government programs) and teach children in school that they are not abandoned by parents who migrate.

What is wrong with this? First, Yue et al. (2020) is an example of a common model that amounts to, “based on our regressions, more of this variable would be good.” It seems logical, but a serious approach to the question would have to be based on evidence that such programs actually have their intended effect, and that they would be better than directing the money or other resources toward something else. That would be an unreasonable burden for the authors, and would slow the production of useful empirical results. So we’re left with something superficial that distracts more than it adds. Further (and here I hope to win some converts to my view), these policy implication sections are a major source of peer review friction — reviewers demanding them, reviewers hating them, authors contorting themselves, and so on. Much better, in my view, would be to just add the knowledge produced by papers like this to the great hopper of knowledge, and let it contribute to a real policy analysis down the road.

Empirical peer-reviewed sociology articles should be shorter, removing non-essential parts of the paper that are major sources of peer review bog-down. Having different kinds of work reviewed and approved together in a single paper — a lengthy literature review, a theoretical claim, an empirical analysis, and a set of policy implications — creates inefficiencies in the peer review process. Why should a whole 60-page paper be rejected because one part of it (the policy implications, say) is rejected by one out of three reviewers? This is very wasteful. It puts reviewers in a position to review aspects of the work they aren’t qualified to judge. And it skews incentives by rewarding the less important parts of our work. Of course it’s reasonable to spend a few paragraphs stating the relevance of the question in the paper, but not a whole treatise (in the front and back) of every paper.

Advice for sociologists

1. Don’t try to pin big conclusions on a single piece of peer-reviewed empirical research. That’s a sad legacy of a time when publishing was hard, sociologists had few opportunities to do it, and peer-reviewed journals were the source of validation we were expected to rely on. So you devoted years of your life to a small number of “publications,” and those were the sum total of your intellectual production. We have a lot of other ways to express our social and political views now, and we should use them. Your PhD, your job, and your record of published peer-reviewed research are all sources of legitimacy you can draw on to get people to pay attention to your writing.

2. Write for the right audience. If you are serious about influencing policy, write for staffers doing research for advocacy organizations, activists, or campaigns. If you want to influence the public, write in lay terms in venues that draw regular people as readers. If you want to set the agenda for funding agencies, write review pieces that synthesize research and make the case for moving in the right direction. These are all different kinds of writing, published in different venues. Crucially, none of them rely only on the empirical results of a single analysis, nor should they. The last three paragraphs of your narrow empirical research paper — excellent, important, and cutting-edge as it is — will not reach these different audiences.

3. Stop asking researchers to tack superficial policy implications sections onto the end of their papers. If you are a reviewer or an editor, stop demanding longer literature reviews and conclusions. Start rewarding the most important part of the work, the part you are qualified to evaluate.

4. If you are in an academic department, on a hiring committee, or on a promotion and tenure committee, look at the whole body of work, including the writing outside peer-reviewed journals. No one expects to get tenure from writing an op-ed, but people who work to reach different audiences may be building a successful career in which peer-reviewed research is a foundational building block. Look for the connections, and reward the people who make them.


*The full sample (metadata and abstracts) is available on Zotero. Some are open access, some I got through my library, but all are available from Sci-Hub (which steals them so you don’t have to).


Don’t both-sides the war on truth

Glad to see political obituaries for Trump appearing. But don't let them both-sides it. Case in point is George Packer's "The Legacy of Donald Trump" in the Atlantic (online version titled "A Political Obituary for Donald Trump").

Packer is partly right in his comparison of Trump’s lies to those of previous presidents:

Trump’s lies were different. They belonged to the postmodern era. They were assaults against not this or that fact, but reality itself. They spread beyond public policy to invade private life, clouding the mental faculties of everyone who had to breathe his air, dissolving the very distinction between truth and falsehood. Their purpose was never the conventional desire to conceal something shameful from the public.

He’s right that the target is truth itself, but wrong to attribute this to postmodernism. Trump is well-grounded in modernist authoritarianism, albeit with contemporary cultural flourishes. This ground was well covered by Michiko Kakutani, Jason Stanley, and Adam Gopnik, who wrote the week before Trump’s inauguration:

there is nothing in the least “postmodern” about Trump. The machinery of demagogic authoritarianism may shift from decade to decade and century to century, taking us from the scroll to the newsreel to the tweet, but its content is always the same. Nero gave dictates; Idi Amin was mercurial. Instruments of communication may change; demagogic instincts don’t.

This distinction, between Trump the modern authoritarian and Trump the victim of a world gone mad, matters. You can see why later in Packer's piece, when he both-sides it:

Monopoly of public policy by experts—trade negotiators, government bureaucrats, think tankers, professors, journalists—helped create the populist backlash that empowered Trump. His reign of lies drove educated Americans to place their faith, and even their identity, all the more certainly in experts, who didn’t always deserve it (the Centers for Disease Control and Prevention, election pollsters). The war between populists and experts relieved both sides of the democratic imperative to persuade. The standoff turned them into caricatures.

Disagree. Public health scientists and political pollsters are sometimes wrong, and even corrupt, including during the Trump era, but their failures are not an assault on truth itself (I don't know what failure of the CDC he's referring to, but except for some behavior by Trump appointees the same applies). We in the rational knowledge business have not been relieved of our democratic imperatives by the machinations of authoritarians. No matter how we are seen by Trump's followers, we are not caricatures. The rise of authoritarianism and its populist armies can't be laid at the feet of the reign of experts. In one sense, of course, anti-vaxxers only exist because there are vaccines. But that's not a both-sides story. Everyone alive today is alive because of the reign of experts, more or less.

This reminds me of Jonah Goldberg’s ridiculous (but very common, among conservatives) attempt to blame anti-racists for racism: “The grave danger, already materializing, is that whites and Christians respond to this bigotry [i.e., being called racist, homophobic, and Islamophobic] and create their own tribal identity politics.” If Packer objects to the comparison, that’s on him.

That said, the know-nothing movement that Trump now leads obviously creates direct challenges that the forces of truth must rise to meet. The imperative for “engagement” among social scientists — the need to communicate our research and its implications, which I’ve discussed before — is partly driven by this reality. In the social sciences we have an additional burden because our scholarship is directly relevant to politics, so compared with the other sciences we are subject to heightened scrutiny and suspicion — our accomplishments are less the invisible infrastructure of daily survival and more the contested terrain of social and cultural conflict.

And, judging by our falling social science enrollments (except economics), we’re not winning.

So we have a lot of work to do, but we’re not responsible for the war on truth.

Data analysis shows Journal Impact Factors in sociology are pretty worthless

The impact of Impact Factors

Some of this first section is lifted from my blockbuster report, Scholarly Communication in Sociology, where you can also find the references.

When a piece of scholarship is first published it’s not possible to gauge its importance immediately unless you are already familiar with its specific research field. One of the functions of journals is to alert potential readers to good new research, and the placement of articles in prestigious journals is a key indicator.

Since at least 1927, librarians have been using the number of citations to the articles in a journal as a way to decide whether to subscribe to that journal. More recently, bibliographers introduced a standard method for comparing journals, known as the journal impact factor (JIF). This requires data for three years, and is calculated as the number of citations in the third year to articles published over the two prior years, divided by the total number of articles published in those two years.

For example, in American Sociological Review there were 86 articles published in the years 2017-18, and those articles were cited 548 times in 2019 by journals indexed in Web of Science, so the JIF of ASR is 548/86 = 6.37. This allows for a comparison of impact across journals. Thus, the comparable calculation for Social Science Research is 531/271 = 1.96, and it's clear that ASR is a more widely-cited journal. However, comparing journals in different fields using JIFs is less helpful. For example, the JIF for the top medical journal, New England Journal of Medicine, is currently 75, because there are many more medical journals publishing and citing more articles at higher rates, and more quickly, than do sociology journals. (Or maybe NEJM is just that much more important.)
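To make the arithmetic concrete, here is a minimal sketch of that calculation (in Python rather than the Stata I actually use; the function name is just for illustration), plugging in the numbers above:

```python
def impact_factor(citations_in_year3, articles_in_years1_2):
    """Two-year JIF: citations received in year 3 to items published in
    years 1 and 2, divided by the number of those items."""
    return citations_in_year3 / articles_in_years1_2

# 2019 citations to 2017-18 articles, from the examples above
print(round(impact_factor(548, 86), 2))   # ASR: 6.37
print(round(impact_factor(531, 271), 2))  # Social Science Research: 1.96
```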

In addition to complications in making comparisons, there are problems with JIFs (besides the obvious limitation that citations are only one possible evaluation metric). They depend on what journals and articles are in the database being used. And they mostly measure short-term impact. Most important for my purposes here, however, is that they are often misused to judge the importance of articles rather than journals. That is, if you are a librarian deciding what journal to subscribe to, JIF is a useful way of knowing which journals your users might want to access. But if you are evaluating a scholar’s research, knowing that they published in a high-JIF journal does not mean that their article will turn out to be important. It is especially wrong to look at an article that’s old enough to have citations you could count (or not) and judge its quality by the journal it’s published in — but people do that all the time.

To illustrate this, I gathered citation data from the almost 2,500 articles published in 2016-2019 in 15 sociology journals from the Web of Science category list.* In JIF these rank from #2 (American Sociological Review, 6.37) to #46 (Social Forces, 1.95). I chose these to represent a range of impact factors, and because they are either generalist journals (e.g., ASR, Sociological Science, Social Forces) or sociology-focused enough that almost any article they publish could have been published in a generalist journal as well. Here is a figure showing the distribution of citations to those articles as of December 2020, by journal, ordered from higher to lower JIF.

After ASR, Sociology of Education, and American Journal of Sociology, it’s hard to see much of a slope here. Outliers might be playing a big role (for example that very popular article in Sociology of Religion, “Make America Christian Again: Christian Nationalism and Voting for Donald Trump in the 2016 Presidential Election,” by Whitehead, Perry, and Baker in 2018). But there’s a more subtle problem, which is the timing of the measures. My collection of articles is 2016-2019. The JIFs I’m using are from 2019, based on citations to 2017-2018 articles. These journals bounce around; for example, Sociology of Religion jumped from 1.6 to 2.6 in 2019. (I address that issue in the supplemental analysis below.) So what is a lazy promotion and tenure committee, which is probably working off a mental reputation map at least a dozen years old, to do?

You can already tell where I’m going with this: In these sociology journals, there is so much noise in citation rates within the journals, compared to any stable difference between them, that outside the very top the journal ranking won’t much help you predict how much a given paper will be cited. If you assume a paper published in AJS will be more important than one published in Social Forces, you might be right, but if the odds that you’re wrong are too high, you just shouldn’t assume anything. Let’s look closer.

Sociology failure rates

I recently read this cool paper (also paywalled, in the Journal of Informetrics) that estimates this "failure probability": the odds that your guess about which paper will be more impactful, based on the journal title, turns out to be wrong. When JIFs are similar, the odds of an error are very high, like a coin flip. "In two journals whose JIFs are ten-fold different, the failure probability is low," Brito and Rodríguez-Navarro conclude. "However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping."

Their formulas look pretty complicated to me, so for my sociology approach I just did it by brute force (or, if you need tenure, you could call it a Monte Carlo approach). For each possible pair of journals, I randomly drew 100,000 pairs of articles, one from each journal, and calculated the percentage of times the article with more citations came from the journal with the higher impact factor. For example, in 100,000 comparisons of random pairs sampled from ASR and Social Forces (the two journals with the biggest JIF spread), 73% of the time the ASR article had more citations.
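My actual code for this is Stata (see the footnote at the end of this post); here is a minimal Python sketch of the same brute-force idea, with invented citation counts standing in for the real article lists:

```python
import random

def match_rate(cites_high_jif, cites_low_jif, n_draws=100_000, seed=1):
    """Share of randomly drawn article pairs in which the article from the
    higher-JIF journal has more citations than the article from the
    lower-JIF journal. Inputs are lists of article-level citation counts."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_draws):
        if rng.choice(cites_high_jif) > rng.choice(cites_low_jif):
            wins += 1
    return wins / n_draws

# Invented citation counts, not the real data:
journal_a = [60, 40, 25, 18, 12, 8, 5, 3]  # stands in for a higher-JIF journal
journal_b = [15, 10, 9, 7, 4, 3, 2, 1]     # stands in for a lower-JIF journal
print(match_rate(journal_a, journal_b))
```

Doing this for every one of the 105 pairs of the 15 journals is what produces the 10.5 million comparisons reported below.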

Is 73% a lot? It's better than a coin toss, but I'd hate to have a promotion or hiring decision be influenced by an instrument that blunt. Here are the results of the 10.5 million comparisons I made (I love computers):

Outside of the ASR column, these are very bad; in the ASR column they're pretty bad. For example, a random article from AJS only has more citations than one from the 12 lower-JIF journals 59% of the time. So if you're reading CVs, and you see one candidate with a two-year-old AJS article and one with a two-year-old Work & Occupations article, what are you supposed to do? You could compare the actual citations the two articles have gotten, or you could assess their quality or impact some other way. You absolutely should not just skim the CV and assume the AJS article is or will be more influential based on the journal title alone; the failure probability of that assumption is too high.

On my table you can also see some anomalies, of the kind that plagues this system. See all that brown in the BJS and Sociology of Religion columns? That's because both of those journals had sudden increases in their JIF, so their more recent articles have more citations, and most of the comparisons in this table (like in your memory, probably) are based on data from a few years before that. People who published in these journals three years ago are today getting an undeserved JIF bounce from having these titles on their CVs. (See the supplemental analysis below for more on this.)

Conclusion

Using JIF to decide which papers in different sociology journals are likely to be more impactful is a bad idea. Of course, lots of people know JIF is imperfect, but they can’t help themselves when evaluating CVs for hiring or promotion. And when you show them evidence like this, they might say “but what is the alternative?” But as Brito & Rodríguez-Navarro write: “if something were wrong, misleading, and inequitable the lack of an alternative is not a cause for continuing using it.” These error rates are unacceptably high.

In sociology most people won’t own up to relying on impact factors, but most people (in my experience) do judge research by where it’s published all the time. If there is a very big difference in status — enough to be associated with an appreciably different acceptance rate, for example — that’s not always wrong. But it’s a bad default.

In 2015 the biologist Michael Eisen suggested that tenured faculty should remove the journal titles from their CVs and websites, and just give readers the title of the paper and a link to it. He’s done it for his lab’s website, and I urge you to look at it just to experience the weightlessness of an academic space where for a moment overt prestige and status markers aren’t telling you what to think. I don’t know how many people have taken him up on it. I did it for my website, with the explanation, “I’ve left the titles off the journals here, to prevent biasing your evaluation of the work before you read it.” Whatever status I’ve lost I’ve made up for in virtue-signaling self-satisfaction — try it! (You can still get the titles from my CV, because I feel like that’s part of the record somehow.)

Finally, I hope sociologists will become more sociological in their evaluation of research — and of the systems that disseminate, categorize, rank, and profit from it.

Supplemental analysis

The analysis thus far is, in my view, a damning indictment of real-world reliance on the Journal Impact Factor for judging articles, and thus the researchers who produce them. However, it conflates two problems with the JIF. First is the statistical problem of imputing status from an aggregate to an individual, when the aggregate measure fails to capture variation that is very wide relative to the difference between groups. Second, more specific to JIF, is the reliance on a very time-specific comparison: citations in year three to publications in years one and two. Someone could do (maybe already has) an analysis to determine the best lag structure for JIF to maximize its predictive power, but the conclusions from the first problem imply that’s a fool’s errand.

Anyway, in my sample the second problem is clearly relevant. My analysis relies strictly on the rank-ordering provided by the JIF to determine whether article comparisons succeed or fail. However, the sample I drew covers four years, 2016-2019, and counts citations to all of them through 2020. This difference in time window produces a rank ordering that differs substantially (the rank order correlation is .73), as you can see:

In particular, three journals (BJS, SOR, and SFO) moved more than five spots in the ranking. A glance at the results table above shows that these journals are dragging down the matching success rate. To pull these two problems apart, I repeated the analysis using the ranking produced within the sample itself.
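For reference, the .73 above is a Spearman rank-order correlation between the two orderings. Here is a minimal sketch of that calculation; the rank lists are hypothetical, chosen only to produce a correlation near .73 (the real journal-level numbers are in the OSF files):

```python
def spearman_rho(rank_x, rank_y):
    """Spearman rank correlation for two rankings with no ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(rank_x)
    d2 = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical rankings of the 15 journals: by 2019 JIF vs. by mean
# citations in the 2016-2019 sample (illustrative numbers only).
jif_rank    = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
sample_rank = [2, 1, 4, 3, 8, 13, 5, 6, 15, 7, 9, 14, 10, 12, 11]
print(round(spearman_rho(jif_rank, sample_rank), 2))  # 0.73
```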

The results are now much more straightforward. First, here is the same box plot but with the new ordering. Now you can see the ranking more clearly, though you still have to squint a little.

And in the match rate analysis, the result is now driven by differences in means and variances rather than by the mismatch between JIF and sample-mean rankings:

This makes a more logical pattern. The most differentiated journal, ASR, has the highest success rate, and the journals closest together in the ranking fail the most. However, please don’t take from this that such a ranking becomes a legitimate way to judge articles. The overall average on this table is still only 58%, up only 4 points from the original table. Even with a ranking that more closely conforms to the sample, this confirms Brito and Rodríguez-Navarro’s conclusion: “[when rankings] of the journals are not so different … the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.”

These match numbers are too low to responsibly use in such a way. These major sociology journals have citation rates that are too variable, and too similar at the mean, to be useful as a way to judge articles. ASR stands apart, but only because of the rest of the field. Even judging an ASR paper against its lower-ranked competitors produces a successful one-to-one ranking of papers just 72% of the time — and that only rises to 82% with the least-cited journal on the list.

The supplemental analysis is helpful for differentiating the multiple problems with JIF, but it does nothing to solve the problem of using journal citation rates to evaluate individual articles.


*The data and Stata code I used are up here: osf.io/zutws. This includes the lists of all articles in the 15 journals from 2016 to 2020 and their citation counts as of the other day (I excluded 2020 papers from the analysis, but they're in the lists). I forgot to save the version of the 100k-case random file that I used to do this, so I guess that can never be perfectly replicated; but you can probably do it better anyway.

COVID-19 mortality rates by race/ethnicity and age

Why are there such great disparities in COVID-19 deaths across race/ethnic groups in the U.S.? Here’s a recent review from New York City:

The racial/ethnic disparities in COVID-related mortality may be explained by increased risk of disease because of difficulty engaging in social distancing because of crowding and occupation, and increased disease severity because of reduced access to health care, delay in seeking care, or receipt of care in low-resourced settings. Another explanation may be the higher rates of hypertension, diabetes, obesity, and chronic kidney disease among Black and Hispanic populations, all of which worsen outcomes. The role of comorbidity in explaining racial/ethnic disparities in hospitalization and mortality has been investigated in only 1 study, which did not include Hispanic patients. Although poverty, low educational attainment, and residence in areas with high densities of Black and Hispanic populations are associated with higher hospitalizations and COVID-19–related deaths in NYC, the effect of neighborhood socioeconomic status on likelihood of hospitalization, severity of illness, and death is unknown. COVID-19–related outcomes in Asian patients have also been incompletely explored.

The analysis, interestingly, found that Black and Hispanic patients in New York City, once hospitalized, were less likely to die than White patients were. Lots of complicated issues here, but some combination of exposure through conditions of work, transportation, and residence; existing health conditions; and access to and quality of care. My question is more basic, though: What are the age-specific mortality rates by race/ethnicity?

Start tangent on why age-specific comparisons are important. In demography, breaking things down by age is a basic first-pass statistical control. Age isn't inherently the most important variable, but (1) so many things are so strongly affected by age, (2) so many groups differ greatly in their age compositions, and (3) age is so straightforward to measure, that it's often the most reasonable first cut when comparing groups. Very frequently we find that a simple comparison is reversed when age is controlled. Consider a classic example: mortality in a richer country (USA) versus a poorer country (Jordan). People in the USA live four years longer, on average, but Americans are more than twice as likely to die each year (9 per 1,000 versus 4 per 1,000). The difference is age: 23% of Americans are over age 60, compared with 6% of Jordanians. More old people means more total deaths, but compare within age groups and Americans are less likely to die. A simple separation by age facilitates more meaningful comparison for most purposes. So that's how I want to compare COVID-19 mortality across race/ethnic groups in the USA. End tangent.
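Here is a toy illustration of that reversal (invented rates, just two age groups): a population with lower death rates at every age can still have the higher crude rate simply because it is older.

```python
def crude_rate(age_shares, age_specific_rates):
    """Crude death rate as a population-weighted average of age-specific rates."""
    return sum(share * rate for share, rate in zip(age_shares, age_specific_rates))

# Invented age-specific rates per 1,000; the older country is lower in BOTH groups.
rates_older_country   = [2.0, 35.0]   # [under 60, 60+]
rates_younger_country = [3.0, 45.0]

# Age shares echo the USA/Jordan contrast above (23% vs. 6% over age 60).
shares_older_country   = [0.77, 0.23]
shares_younger_country = [0.94, 0.06]

print(round(crude_rate(shares_older_country, rates_older_country), 1))      # 9.6 per 1,000
print(round(crude_rate(shares_younger_country, rates_younger_country), 1))  # 5.5 per 1,000
```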

Age-specific mortality rates

It seems like this should be easier, but I can't find anyone who is publishing them on an ongoing basis. The Centers for Disease Control posts a weekly data file of COVID-19 deaths by age and race/ethnicity, but it does not include the population denominators that you need to calculate mortality rates. So, for example, it tells you that as of December 5 there have been 2,937 COVID-19 deaths among non-Hispanic Blacks in the age range 30-49, compared with 2,186 deaths among non-Hispanic Whites of the same age. So, a higher count of Black deaths. But it doesn't tell you there are 4.3-times as many Whites as Blacks in that category. So, a much higher mortality rate for Blacks.
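The rate calculation itself is simple once you have denominators. Here is a minimal sketch using the death counts above and placeholder populations chosen only to respect the roughly 4.3-to-1 White-to-Black ratio in that age range (they are not the actual ACS figures):

```python
def rate_per_100k(deaths, population):
    """Age-specific mortality rate per 100,000 population."""
    return deaths / population * 100_000

# CDC death counts cited above (ages 30-49, through December 5);
# population denominators here are placeholders, not ACS estimates.
black_deaths, black_pop = 2_937, 11_500_000
white_deaths, white_pop = 2_186, 49_500_000

print(round(rate_per_100k(black_deaths, black_pop), 1))  # about 25.5
print(round(rate_per_100k(white_deaths, white_pop), 1))  # about 4.4
```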

On a different page, they report the percentage of all deaths in each age range that have occurred in each race/ethnic group, but don't include each group's percentage in the population. So, for example, 36% of the people ages 30-39 who have died from COVID-19 were Hispanic, and 24% were non-Hispanic White, but that's not enough information to calculate mortality rates either. I have no reason to think this is nefarious, but it's clearly not adequate.

So I went to the 2019 American Community Survey (ACS) data distributed by IPUMS.org to get some denominators. These are a little messy for two main reasons. First, ACS is a survey that asks people what their race and ethnicity are, while death counts are based on death certificates, for which the person who has died is not available to ask. So some people will be identified with a different group when they die than they would if they were surveyed. Second, the ACS and other surveys allow people to specify multiple races (in addition to being Hispanic or not), whereas death certificate data generally does not. So if someone who identifies as Black-and-White on a survey dies, how will the death certificate read? (If you’re very interested, here’s a report on the accuracy of death certificates, and here are the “bridges” they use to try to mash up multiple-race and single-race categories.)

My solution to this is to make denominators more or less the way race/ethnicity was defined before multiple-race identification was allowed. I put all Hispanic people, regardless of race, into the Hispanic group. Then I put people who are White, non-Hispanic, and no other race into the White category. And then for the Black, Asian, and American Indian categories, I include people who were multiple race (and not Hispanic). So, for example, a Black-White non-Hispanic person is counted as Black. A Black-Asian non-Hispanic person is counted as both Black and Asian. Note I did also do the calculations for Native Hawaiian and Other Pacific Islanders, but those numbers are very small so I'm not showing them on the graph; they're on the spreadsheet. Note also I say "American Indian" to include all those who are "non-Hispanic American Indian or Alaska Native."
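Here is a sketch of that recode logic; the field names are illustrative, not the actual ACS/IPUMS variable names:

```python
def denominator_groups(person):
    """Return the group(s) a respondent counts toward, following the rules above.
    `person` has a 'hispanic' flag and a set of reported races (illustrative fields)."""
    if person["hispanic"]:
        return ["Hispanic"]                     # Hispanic of any race
    races = person["races"]
    if races == {"White"}:
        return ["White"]                        # White alone, non-Hispanic
    groups = []
    for race in ("Black", "Asian", "American Indian"):
        if race in races:
            groups.append(race)                 # alone or in combination, non-Hispanic
    return groups

# Examples from the text:
print(denominator_groups({"hispanic": False, "races": {"Black", "White"}}))  # ['Black']
print(denominator_groups({"hispanic": False, "races": {"Black", "Asian"}}))  # ['Black', 'Asian']
```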

This is admittedly crude, but I suggest that you trust me that it’s probably OK. (Probably OK, that is, especially for Whites, Blacks, and Hispanics. American Indians and Asians have higher rates of multiple-race identification among the living, so I expect there would be more slippage there.)

Anyway, here’s the absolutely egregious result:

This figure allows race/ethnicity comparisons within the five age groups (under 30 isn’t shown). It reveals that the greatest age-specific disparities are actually at the younger ages. In the range 30-49, Blacks are 5.6-times more likely to die, and Hispanics are 6.6-times more likely to die, than non-Hispanic Whites are. In the oldest age group, over 85, where death rates for everyone are highest, the disparities are only 1.5- and 1.4-to-1 respectively.

Whatever the cause of these disparities, this is just the bottom line, which matters. Please note how very high these rates are at old ages. These are deaths per 100,000, which means that over age 85, 1.8% of all African Americans have died of COVID-19 this year (and 1.7% for Hispanics and 1.2% for Whites). That is — I keep trying to find words to convey the power of these numbers — one out of every 56 African Americans over age 85.

Please stay home if you can.

A spreadsheet file with the data, calculations, and figure, is here: https://osf.io/ewrms/.

Santa’s magic, children’s wisdom, and inequality (a timeless holiday classic essay!)

This is a preprint version of an essay in Enduring Bonds: Inequality, Marriage, Parenting, and Everything Else That Makes Families Great and Terrible, by Philip N. Cohen. Oakland, California: University of California Press. It is revised from previous essays about Santa. Read this one instead.

Eric Kaplan, channeling Francis Pharcellus Church, writes in favor of Santa Claus in the New York Times. The Church argument, written in 1897, is that (a) you can’t prove there is no Santa, so agnosticism is the strongest possible objection, and (b) Santa enriches our lives and promotes non-rationalized gift-giving, “so we might as well believe in him” (1). It’s a very common argument, identical to one employed against atheists in favor of belief in God, but more charming and whimsical when directed at killjoy Santa-deniers.

All harmless fun and existential comfort-food. But we have two problems that the Santa situation may exacerbate. First is science denial. And second is inequality. So, consider this an attempted joyicide.

Science

From Pew Research comes this Christmas news:

“In total, 65% of U.S. adults believe that all of these aspects of the Christmas story – the virgin birth, the journey of the magi, the angel’s announcement to the shepherds and the manger story – reflect events that actually happened” (2).

On some specific items, the scores were even higher. The poll found 73% of Americans believe that Jesus was born to a virgin mother – a belief even shared by 60% of college graduates. (Among Catholics agreement was 86%, among Evangelical Protestants, 96%.)

So the Santa situation is not an isolated question. We’re talking about a population with a very strong tendency to express literal belief in fantastical accounts. This Christmas story may be the soft leading edge of a more hardcore Christian fundamentalism. For the past 20 years, the General Social Survey (GSS) has found that a third of American adults agrees with the statement, “The Bible is the actual word of God and is to be taken literally, word for word,” versus two other options: “The Bible is the inspired word of God but not everything in it should be taken literally, word for word”; and, “The Bible is an ancient book of fables, legends, history, and moral precepts recorded by men.” (The “actual word of God” people are less numerous than the virgin-birth believers, but they’re related.)

Using the GSS, I analyzed people’s social attitudes according to their view of the Bible for the years 2010-2014 (see Figure 9). Controlling for their sex, age, race, education, and the year of the survey, those with more literal interpretations of the Bible are much more likely than the rest of the population to:

  • Oppose marriage rights for homosexuals
  • Agree that “people worry too much about human progress harming the environment”
  • Agree that “It is much better for everyone involved if the man is the achiever outside the home and the woman takes care of the home and family”

In addition, among non-Hispanic Whites, the literal-Bible people are more likely to rank Blacks as more lazy than hardworking, and to believe that Blacks “just don’t have the motivation or willpower to pull themselves up out of poverty” (3).

This isn’t the direction I’d like to push our culture. Of course, teaching children to believe in Santa doesn’t necessarily create “actual word of God” fundamentalists – but there’s some relationship there.

Children’s ways of knowing

Margaret Mead in 1932 reported on the notion that young children not only know less, but know differently, than adults, in a way that parallels the evolution of society over time. Children were thought to be “more closely related to the thought of the savage than to the thought of the civilized man,” with animism in “primitive” societies being similar to the spontaneous thought of young children. This goes along with the idea that believing in Santa is indicative of a state of innocence (4). In pursuit of empirical confirmation of the universality of childhood, Mead investigated the Manus tribe in Melanesia, who were pagans, looking for magical thinking in children: “animistic premise, anthropomorphic interpretation and faulty logic.”

Instead, she found “no evidence of spontaneous animistic thought in the uncontrolled sayings or games” over five months of continuous observation of a few dozen children. And while adults in the community attributed mysterious or random events to spirits and ghosts, children never did:

“I found no instance of a child’s personalizing a dog or a fish or a bird, of his personalizing the sun, the moon, the wind or stars. I found no evidence of a child’s attributing chance events, such as the drifting away of a canoe, the loss of an object, an unexplained noise, a sudden gust of wind, a strange deep-sea turtle, a falling seed from a tree, etc., to supernaturalistic causes.”

On the other hand, adults blamed spirits for hurricanes hitting the houses of people who behave badly, believed statues can talk, thought lost objects had been stolen by spirits, and said people who are insane are possessed by spirits. The grown men all thought they had personal ghosts looking out for them – with whom they communicated – but the children dismissed the reality of the ghosts that were assigned to them. They didn’t play ghost games.

Does this mean magical thinking is not inherent to childhood? Mead wrote:

“The Manus child is less spontaneously animistic and less traditionally animistic than is the Manus adult [‘traditionally’ here referring to the adoption of ritual superstitious behavior]. This result is a direct contradiction of findings in our own society, in which the child has been found to be more animistic, in both traditional and spontaneous fashions, than are his elders. When such a reversal is found in two contrasting societies, the explanation must be sought in terms of the culture; a purely psychological explanation is inadequate.”

Maybe people have the natural capacity for both animistic and realistic thinking, and societies differ in which trait they nurture and develop through children’s education and socialization. Mead speculated that the pattern she found had to do with the self-sufficiency required of Manus children. A Manus child must…

“…make correct physical adjustments to his environment, so that his entire attention is focused upon cause and effect relationships, the neglect of which would result in immediate disaster. … Manus children are taught the properties of fire and water, taught to estimate distance, to allow for illusion when objects are seen under water, to allow for obstacles and judge possible clearage for canoes, etc., at the age of two or three.”

Plus, perhaps unlike in industrialized society, their simple technology is understandable to children without the invocation of magic. And she observed that parents didn’t tell the children imaginary stories, myths, and legends.

I should note here that I’m not saying we have to choose between religious fundamentalism and a society without art and literature. The question is about believing things that aren’t true, and can’t be true. I’d like to think we can cultivate imagination without launching people down the path of blind credulity.

Modern credulity

For evidence that culture produces credulity, consider the results of a study that showed most four-year-old children understood that Old Testament stories are not factual. Six-year-olds, however, tended to believe the stories were factual, if their impossible events were attributed to God rather than rewritten in secular terms (e.g., “Matthew and the Green Sea” instead of “Moses and the Red Sea”) (5). Why? Belief in supernatural or superstitious things, contrary to what you might assume, requires a higher level of cognitive sophistication than does disbelief, which is why five-year-olds are more likely to believe in fairies than three-year-olds (6). These studies suggest children have to be taught to believe in magic. (Adults use persuasion to do that, but teaching with rewards – like presents under a tree or money under a pillow – is of course more effective.)

Children can know things either from direct observation or experience, or from being taught. So they can know dinosaurs are real if they believe books and teachers and museums, even if they can't observe them living (true reality detection). And they can know that Santa Claus and imaginary friends are not real if they believe either authorities or their own senses (true baloney detection). Similarly, children also have two kinds of reality-assessment errors: false positive and false negative. Believing in Santa Claus is a false positive. Refusing to believe in dinosaurs is a false negative. In Figure 10, which I adapted from a paper by Jacqueline Woolley and Maliki Ghossainy, true judgment is in regular type and errors are in italics (7).

We know a lot about kids’ credulity (Santa Claus, tooth fairy, etc.). But, Woolley and Ghossainy write, their skepticism has been neglected:

“Development regarding beliefs about reality involves, in addition to decreased reliance on knowledge and experience, increased awareness of one’s own knowledge and its limitations for assessing reality status. This realization that one’s own knowledge is limited gradually inspires a waning reliance on it alone for making reality status decisions and a concomitant increase in the use of a wider range of strategies for assessing reality status, including, for example, seeking more information, assessing contextual cues, and evaluating the quality of the new information” (8).

The “realization that one’s own knowledge is limited” is a vital development, ultimately necessary for being able to tell fact from fiction. But, sadly, it need not lead to real understanding – under some conditions, such as, apparently, the USA today, it often leads instead to reliance on misguided or dishonest authorities who compete with science to fill the void beyond what we can directly observe or deduce. Believing in Santa because we can’t disprove his existence is a developmental dead end, a backward-looking reliance on authority for determining truth. But so is failure to believe in vaccines or evolution or climate change just because we can’t see them working.

We have to learn how to avoid the italics boxes without giving up our love for things imaginary, and that seems impossible without education in both science and art.

Rationalizing gifts

What is the essence of Santa, anyway? In Kaplan's New York Times essay it's all about non-rationalized giving, for the sake of giving. The latest craze in Santa culture, however, says otherwise: Elf on the Shelf, which exploded on the Christmas scene after 2008, selling in the millions. In case you've missed it, the idea is to put a cute little elf somewhere on a shelf in the house. You tell your kids it's watching them, and that every night it goes back to the North Pole to report to Santa on their nice/naughty ratio. While the kids are sleeping, you move it to another shelf in the house, and the kids delight in finding it again each morning.

In other words, it’s the latest in Michel Foucault’s panopticon development (9). Consider the Elf on a Shelf aftermarket accessories, like the handy warning labels, which threaten children with “no toys” if they aren’t on their “best behavior” from now on. So is this non-rationalized gift giving? Quite the opposite. In fact, rather than cultivating a whimsical love of magic, this is closer to a dystopian fantasy in which the conjured enforcers of arbitrary moral codes leap out of their fictional realm to impose harsh consequences in the real life of innocent children.

Inequality

My developmental question regarding inequality is this: What is the relationship between belief in Santa and social class awareness over the early life course? How long after kids realize there is class inequality do they go on believing in Santa? This is where rationalization meets fantasy. Beyond worrying about how Santa rewards or punishes them individually, if children are to believe that Christmas gifts are doled out according to moral merit, then what are they to make of the obvious fact that rich kids get more than poor kids? Rich or poor, the message seems the same: children deserve what they get.

I can’t demonstrate that believing in Santa causes children to believe that economic inequality is justified by character differences between social classes. Or that Santa belief undermines future openness to science and logic. But those are hypotheses. Between the anti-science epidemic and the pervasive assumption that poor people deserve what they get, this whole Santa enterprise seems risky. Would it be so bad, so destructive to the wonder that is childhood, if instead of attributing gifts to supernatural beings we instead told children that we just buy them gifts because we love them unconditionally and want them — and all other children — to be happy?


Notes:

1. Kaplan, Eric. 2014. “Should We Believe in Santa Claus?” New York Times Opinionator, December 20.

2. Pew Research Center. 2014. “Most Say Religious Holiday Displays on Public Property Are OK.” Religion & Public Life Project, December 15.

3. The GSS asked if “people in the group [African Americans] tend to be hard-working or if they tend to be lazy,” on a scale from 1 (hardworking) to 7 (lazy). I coded them as favoring lazy if they gave scores of 5 or above. The motivation question was a yes-or-no question: “On the average African-Americans have worse jobs, income, and housing than white people. Do you think these differences are because most African-Americans just don’t have the motivation or willpower to pull themselves up out of poverty?”

4. Mead, Margaret. 1932. “An Investigation of the Thought of Primitive Children, with Special Reference to Animism.” Journal of the Royal Anthropological Institute of Great Britain and Ireland 62: 173–90.

5. Vaden, Victoria Cox, and Jacqueline D. Woolley. 2011. “Does God Make It Real? Children’s Belief in Religious Stories from the Judeo-Christian Tradition.” Child Development 82 (4): 1120–35.

6. Woolley, Jacqueline D., Elizabeth A. Boerger, and Arthur B. Markman. 2004. “A Visit from the Candy Witch: Factors Influencing Young Children’s Belief in a Novel Fantastical Being.” Developmental Science 7 (4): 456–68.

7. Woolley, Jacqueline D., and Maliki Ghossainy. 2013. “Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics.” Child Development 84 (5): 1496–1510.

8. Woolley, Jacqueline D., and Maliki Ghossainy. 2013. “Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics.” Child Development 84 (5): 1496–1510.

9. Pinto, Laura. 2016. “Elf et Michelf.” YouTube. https://www.youtube.com/watch?v=s9Pn16dCWIg.