Host, parasite, and failure at the colony level: COVID-19 and the US information ecosystem

Trump campaign attempts to remove satirical cartoon from online retailer (The Guardian)

This cartoon is offensive. And yet.


A few months ago I did some reading about viruses and other parasites, inspired by the obvious (the pandemic), but also by those ants that get commandeered by cordyceps fungi, as seen in this awesome David Attenborough video:

Besides the incredible feat of programming ants to disseminate fungus spores, the video reveals two other astounding facts about this system. First, worker ants from afflicted colonies selflessly identify and remove infected ants and dump their bodies far away, reflecting intergenerational genetic training as well as the ability to gather and process the information necessary to make the diagnosis and act on it. And second, there are many, many cordyceps species, each evolved to prey upon only one species, reflecting a pattern of co-evolution between host and parasite.

This led me to read about colony defenses in general, including not just ants but things like wasps and termites that leave chemical protection for future generations, and bees getting together to make hive fevers to ward off parasitic infections. I didn’t find a video of exactly a hive fever, but this one is similar: it’s bees using their collective body temperature to cook a predatory hornet to death:

Incredible. That got me thinking about how vital information management and dissemination are to colony-level defenses against parasites. Colonies need to process and transmit information to work together in the arms race against parasites (especially viruses) that usually evolve much more rapidly than they do.

And you may know where this is going: how the US failed against SARS-CoV-2. In an information arms race, a life-and-death struggle against a parasitic virus that mutates far faster than we can react — who knows how many evolutionary trials it took to produce SARS-CoV-2? — this kind of efficient information system is what we need. And it worked in some ways, as humanity identified the virus and shared the data and code necessary to take action against it. But clearly we failed in other ways — communicating with our fellow citizens, dislodging the disinformation and misinformation that clouded their understanding and led so many to sacrifice themselves at the behest of a corrupt political organization and its demented leader.

Is this social evolution, I asked (despairingly), in which the Chinese system of government proves its superiority for survival at the colony level, while the US democratic system chokes on its own infected lungs? Worse, is the virus programming us to exacerbate our own weaknesses — yanking our social media chains and our slavery-era political institutions, like the rabies virus, which infects the brain and then explodes out through the salivary glands of a zombified attack animal? Colonies of ants rise or fall based on how they respond to parasites, which themselves are evolving to control ant behavior, as they evolve together. How exceptional are humans? Maybe we just do it faster, in social evolutionary time, rather than across many generations of breeding. Fascinating, but kind of dark. lol.

Anyway, naturally my concern is with information systems and scholarly communication. Human success against the virus has come from the rapid generation and dissemination of science and public health information (including preprints and data sharing), and failure has come from disinformation and information corruption. Dr. Birx in the role of the rabid raccoon, watching herself lose her grip on scientific reality as the authoritarian leader douses the public health information system with bleach and sets it on fire with an ultraviolet ray gun “inside the body.”

So I wrote a short paper titled, “Host, parasite, and failure at the colony level: COVID-19 and the US information ecosystem,” and posted it on SocArXiv: socarxiv.org/4hgam.* It includes this table:

[Table from the paper]


* I barely took high school biology. In college I took “Climate and Man,” and “Biology of Human Affairs.” That’s pretty much it for my life sciences training, so don’t take my word for it. Comments welcome.

Santa’s magic, children’s wisdom, and inequality (a timeless holiday classic essay!)

This is a preprint version of an essay in Enduring Bonds: Inequality, Marriage, Parenting, and Everything Else That Makes Families Great and Terrible, by Philip N. Cohen. Oakland, California: University of California Press. It is revised from previous essays about Santa. Read this one instead.

Eric Kaplan, channeling Francis Pharcellus Church, writes in favor of Santa Claus in the New York Times. The Church argument, written in 1897, is that (a) you can’t prove there is no Santa, so agnosticism is the strongest possible objection, and (b) Santa enriches our lives and promotes non-rationalized gift-giving, “so we might as well believe in him” (1). It’s a very common argument, identical to one employed against atheists in favor of belief in God, but more charming and whimsical when directed at killjoy Santa-deniers.

All harmless fun and existential comfort-food. But we have two problems that the Santa situation may exacerbate. First is science denial. And second is inequality. So, consider this an attempted joyicide.

Science

From Pew Research comes this Christmas news:

“In total, 65% of U.S. adults believe that all of these aspects of the Christmas story – the virgin birth, the journey of the magi, the angel’s announcement to the shepherds and the manger story – reflect events that actually happened” (2).

On some specific items, the scores were even higher. The poll found 73% of Americans believe that Jesus was born to a virgin mother – a belief even shared by 60% of college graduates. (Among Catholics agreement was 86%, among Evangelical Protestants, 96%.)

So the Santa situation is not an isolated question. We’re talking about a population with a very strong tendency to express literal belief in fantastical accounts. This Christmas story may be the soft leading edge of a more hardcore Christian fundamentalism. For the past 20 years, the General Social Survey (GSS) has found that a third of American adults agree with the statement, “The Bible is the actual word of God and is to be taken literally, word for word,” versus two other options: “The Bible is the inspired word of God but not everything in it should be taken literally, word for word”; and, “The Bible is an ancient book of fables, legends, history, and moral precepts recorded by men.” (The “actual word of God” people are less numerous than the virgin-birth believers, but they’re related.)

Using the GSS, I analyzed people’s social attitudes according to their view of the Bible for the years 2010-2014 (see Figure 9). Controlling for their sex, age, race, education, and the year of the survey, those with more literal interpretations of the Bible are much more likely than the rest of the population to:

  • Oppose marriage rights for homosexuals
  • Agree that “people worry too much about human progress harming the environment”
  • Agree that “It is much better for everyone involved if the man is the achiever outside the home and the woman takes care of the home and family”

In addition, among non-Hispanic Whites, the literal-Bible people are more likely to rank Blacks as more lazy than hardworking, and to believe that Blacks “just don’t have the motivation or willpower to pull themselves up out of poverty” (3).
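For readers who want to see the shape of that analysis, here is a minimal sketch of an adjusted comparison, assuming a hypothetical GSS extract (gss_2010_2014.csv) with illustrative variable names; it is not the original analysis code.

```python
# Hedged sketch: adjusted comparison of one attitude by Bible view.
# Assumes a hypothetical GSS extract with illustrative variable names.
import pandas as pd
import statsmodels.formula.api as smf

gss = pd.read_csv("gss_2010_2014.csv")  # hypothetical extract, 2010-2014 waves

# Outcome: 1 = opposes marriage rights for homosexuals (illustrative coding).
# Controls: sex, age, race, education, and survey year, as described above.
model = smf.logit(
    "oppose_marriage ~ C(bible_view) + C(sex) + age + C(race) + educ + C(year)",
    data=gss,
).fit()
print(model.summary())
```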

This isn’t the direction I’d like to push our culture. Of course, teaching children to believe in Santa doesn’t necessarily create “actual word of God” fundamentalists – but there’s some relationship there.

Children’s ways of knowing

Margaret Mead in 1932 reported on the notion that young children not only know less, but know differently, than adults, in a way that parallels the evolution of society over time. Children were thought to be “more closely related to the thought of the savage than to the thought of the civilized man,” with animism in “primitive” societies being similar to the spontaneous thought of young children. This goes along with the idea that believing in Santa is indicative of a state of innocence (4). In pursuit of empirical confirmation of the universality of childhood, Mead investigated the Manus tribe in Melanesia, who were pagans, looking for magical thinking in children: “animistic premise, anthropomorphic interpretation and faulty logic.”

Instead, she found “no evidence of spontaneous animistic thought in the uncontrolled sayings or games” over five months of continuous observation of a few dozen children. And while adults in the community attributed mysterious or random events to spirits and ghosts, children never did:

“I found no instance of a child’s personalizing a dog or a fish or a bird, of his personalizing the sun, the moon, the wind or stars. I found no evidence of a child’s attributing chance events, such as the drifting away of a canoe, the loss of an object, an unexplained noise, a sudden gust of wind, a strange deep-sea turtle, a falling seed from a tree, etc., to supernaturalistic causes.”

On the other hand, adults blamed spirits for hurricanes hitting the houses of people who behave badly, believed statues can talk, thought lost objects had been stolen by spirits, and said people who are insane are possessed by spirits. The grown men all thought they had personal ghosts looking out for them – with whom they communicated – but the children dismissed the reality of the ghosts that were assigned to them. They didn’t play ghost games.

Does this mean magical thinking is not inherent to childhood? Mead wrote:

“The Manus child is less spontaneously animistic and less traditionally animistic than is the Manus adult [‘traditionally’ here referring to the adoption of ritual superstitious behavior]. This result is a direct contradiction of findings in our own society, in which the child has been found to be more animistic, in both traditional and spontaneous fashions, than are his elders. When such a reversal is found in two contrasting societies, the explanation must be sought in terms of the culture; a purely psychological explanation is inadequate.”

Maybe people have the natural capacity for both animistic and realistic thinking, and societies differ in which trait they nurture and develop through children’s education and socialization. Mead speculated that the pattern she found had to do with the self-sufficiency required of Manus children. A Manus child must…

“…make correct physical adjustments to his environment, so that his entire attention is focused upon cause and effect relationships, the neglect of which would result in immediate disaster. … Manus children are taught the properties of fire and water, taught to estimate distance, to allow for illusion when objects are seen under water, to allow for obstacles and judge possible clearage for canoes, etc., at the age of two or three.”

Plus, perhaps unlike in industrialized society, their simple technology is understandable to children without the invocation of magic. And she observed that parents didn’t tell the children imaginary stories, myths, and legends.

I should note here that I’m not saying we have to choose between religious fundamentalism and a society without art and literature. The question is about believing things that aren’t true, and can’t be true. I’d like to think we can cultivate imagination without launching people down the path of blind credulity.

Modern credulity

For evidence that culture produces credulity, consider the results of a study that showed most four-year-old children understood that Old Testament stories are not factual. Six-year-olds, however, tended to believe the stories were factual, if their impossible events were attributed to God rather than rewritten in secular terms (e.g., “Matthew and the Green Sea” instead of “Moses and the Red Sea”) (5). Why? Belief in supernatural or superstitious things, contrary to what you might assume, requires a higher level of cognitive sophistication than does disbelief, which is why five-year-olds are more likely to believe in fairies than three-year-olds (6). These studies suggest children have to be taught to believe in magic. (Adults use persuasion to do that, but teaching with rewards – like presents under a tree or money under a pillow – is of course more effective.)

Children can know things either from direct observation or experience, or from being taught. So they can know dinosaurs are real if they believe books and teachers and museums, even if they can’t observe them living (true reality detection). And they can know that Santa Claus and imaginary friends are not real if they believe either authorities or their own senses (true baloney detection). Similarly, children also have two kinds of reality-assessment errors: false positive and false negative. Believing in Santa Claus is a false positive. Refusing to believe in dinosaurs is a false negative. In Figure 10, which I adapted from a paper by Jacqueline Woolley and Maliki Ghossainy, true judgment is in regular type and errors are in italics (7).

We know a lot about kids’ credulity (Santa Claus, tooth fairy, etc.). But, Woolley and Ghossainy write, their skepticism has been neglected:

“Development regarding beliefs about reality involves, in addition to decreased reliance on knowledge and experience, increased awareness of one’s own knowledge and its limitations for assessing reality status. This realization that one’s own knowledge is limited gradually inspires a waning reliance on it alone for making reality status decisions and a concomitant increase in the use of a wider range of strategies for assessing reality status, including, for example, seeking more information, assessing contextual cues, and evaluating the quality of the new information” (8).

The “realization that one’s own knowledge is limited” is a vital development, ultimately necessary for being able to tell fact from fiction. But, sadly, it need not lead to real understanding – under some conditions, such as, apparently, the USA today, it often leads instead to reliance on misguided or dishonest authorities who compete with science to fill the void beyond what we can directly observe or deduce. Believing in Santa because we can’t disprove his existence is a developmental dead end, a backward-looking reliance on authority for determining truth. But so is failure to believe in vaccines or evolution or climate change just because we can’t see them working.

We have to learn how to avoid the italics boxes without giving up our love for things imaginary, and that seems impossible without education in both science and art.

Rationalizing gifts

What is the essence of Santa, anyway? In Kaplan’s New York Times essay it’s all about non-rationalized giving, for the sake of giving. The latest craze in Santa culture, however, says otherwise: Elf on the Shelf, which exploded on the Christmas scene after 2008, selling in the millions. In case you’ve missed it, the idea is to put a cute little elf somewhere on a shelf in the house. You tell your kids it’s watching them, and that every night it goes back to the North Pole to report to Santa on their nice/naughty ratio. While the kids are sleeping, you move it to another shelf in the house, and the kids delight in finding it again each morning.

In other words, it’s the latest development of Michel Foucault’s panopticon (9). Consider the Elf on the Shelf aftermarket accessories, like the handy warning labels, which threaten children with “no toys” if they aren’t on their “best behavior” from now on. So is this non-rationalized gift giving? Quite the opposite. In fact, rather than cultivating a whimsical love of magic, this is closer to a dystopian fantasy in which the conjured enforcers of arbitrary moral codes leap out of their fictional realm to impose harsh consequences in the real life of innocent children.

Inequality

My developmental question regarding inequality is this: What is the relationship between belief in Santa and social class awareness over the early life course? How long after kids realize there is class inequality do they go on believing in Santa? This is where rationalization meets fantasy. Beyond worrying about how Santa rewards or punishes them individually, if children are to believe that Christmas gifts are doled out according to moral merit, then what are they to make of the obvious fact that rich kids get more than poor kids? Rich or poor, the message seems the same: children deserve what they get.

I can’t demonstrate that believing in Santa causes children to believe that economic inequality is justified by character differences between social classes. Or that Santa belief undermines future openness to science and logic. But those are hypotheses. Between the anti-science epidemic and the pervasive assumption that poor people deserve what they get, this whole Santa enterprise seems risky. Would it be so bad, so destructive to the wonder that is childhood, if instead of attributing gifts to supernatural beings we told children that we just buy them gifts because we love them unconditionally and want them — and all other children — to be happy?


Notes:

1. Kaplan, Eric. 2014. “Should We Believe in Santa Claus?” New York Times Opinionator, December 20.

2. Pew Research Center. 2014. “Most Say Religious Holiday Displays on Public Property Are OK.” Religion & Public Life Project, December 15.

3. The GSS asked if “people in the group [African Americans] tend to be hard-working or if they tend to be lazy,” on a scale from 1 (hardworking) to 7 (lazy). I coded them as favoring lazy if they gave scores of 5 or above. The motivation question was a yes-or-no question: “On the average African-Americans have worse jobs, income, and housing than white people. Do you think these differences are because most African-Americans just don’t have the motivation or willpower to pull themselves up out of poverty?”

4. Mead, Margaret. 1932. “An Investigation of the Thought of Primitive Children, with Special Reference to Animism.” Journal of the Royal Anthropological Institute of Great Britain and Ireland 62: 173–90.

5. Vaden, Victoria Cox, and Jacqueline D. Woolley. 2011. “Does God Make It Real? Children’s Belief in Religious Stories from the Judeo-Christian Tradition.” Child Development 82 (4): 1120–35.

6. Woolley, Jacqueline D., Elizabeth A. Boerger, and Arthur B. Markman. 2004. “A Visit from the Candy Witch: Factors Influencing Young Children’s Belief in a Novel Fantastical Being.” Developmental Science 7 (4): 456–68.

7. Woolley, Jacqueline D., and Maliki Ghossainy. 2013. “Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics.” Child Development 84 (5): 1496–1510.

8. Woolley, Jacqueline D., and Maliki Ghossainy. 2013. “Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics.” Child Development 84 (5): 1496–1510.

9. Pinto, Laura. 2016. “Elf et Michelf.” YouTube. https://www.youtube.com/watch?v=s9Pn16dCWIg.

Sociologist, scientist? Toward transparency, accountability, and a sharing culture

With the help of the designer Brigid Barrett, I have a new website at philipncohen.com, and a redesigned blog to match (which you’re looking at now). We decided on the tagline, “Sociologist / Demographer” for the homepage photo. It’s true I am those two things, but I also like how they modify each other, a type of sociologist and a type of demographer. First some reflections, then a little data.

I shared the website on Twitter, and wrote this in a thread:

Having “sociologist” attached to your name is not going to signal scientific rigor to the public in the way that other discipline labels might (like, I think, “demographer”). A lot of sociologists, as shown by their behavior, are fine with that. Your individual behavior as a researcher can shape the impression you make, but it will not change the way the discipline is seen. Until the discipline — especially our associations but also our departments — adopts (and communicates) scientific practices, that’s how it will be. As an association, ASA has shown little interest in this, and seems unlikely to soon.

A substantial portion of sociologists rejects the norms of science. Others are afraid that adopting them will make their work “less than” within the discipline’s hierarchy. For those of us concerned about this, the practices of science are crucial: openness, transparency, reproducibility. We need to find ways at the sub-discipline level to adopt and communicate these values and build trust in our work. Building that trust may require getting certain publics to see beyond the word “sociologist,” rather than just see value in it. They will see our open practices, our shared data and code, our ability to admit mistakes, embrace uncertainty, and entertain alternative explanations.

There are other sources of trust. For example, taking positions on social issues or politics is also a way of building trust with like-minded audiences. These are important for some sociologists, and truly valuable, but they’re different from science. Maybe unreasonably, I want both. I want some people to give my work a hearing because I take antiracist or feminist positions in my public work, for example. And also because I practice science in my research, with the vulnerability and accountability that implies. Some people would say my public political pronouncements undermine not just my science, but the reputation of the discipline as a whole. I can’t prove they’re wrong. But I think the roles of citizen and scholar are ultimately compatible. Having a home in a discipline that embraced science and better communicated its value would help. A scientific brand, seal of approval, badges, etc., would help prevent my outspokenness from undermining my scientific reputation.

One reply I got, confirming my perception, was, “this pretence of natural science needs to be resisted not indulged.” Another wrote: “As a sociologist and an ethnographer ‘reproducibility’ will always be a very weak and mostly inapplicable criterion for my research. I’m not here to perform ‘science’ so the public will accept my work, I’m here to seek truth.” Lots of interesting responses. Several people shared this old review essay arguing sociology should be more like biology than like physics, in terms of epistemology. The phrase “runaway solipsism” was used.

I intended my tweets to focus on the open science practices with which I have been centrally concerned, centered on scholarly communication: openness, transparency, replicability. That is, I am less interested in the epistemological questions of meaning, truth, and solipsism, and more concerned with basic questions like, “How do we know researchers are doing good research, or even telling the truth?” And, “How can we improve our work so that it’s more conducive to advancing research overall?”

Whether or not sociology is science, we should have transparency, accountability, and a sharing culture in our work. This makes our work better, and also maybe increases our legitimacy in public.

Where is ASA?

To that end, as an elected member of the American Sociological Association Committee on Publications, two years ago I proposed that the association adopt the Transparency and Openness Promotion Guidelines from the Center for Open Science, and start using their Open Science Badges, which recognize authors who provide open data, open materials, or use preregistration for their studies. It didn’t go over well. Some people are very concerned that rewarding openness with little badges in the table of contents, which presumably would go mostly to quantitative researchers, would be seen as penalizing qualitative researchers who can’t share their data, thus creating a hierarchy in the discipline.

So at the January 2019 meeting the committee killed that proposal so an “ad hoc committee could be established to evaluate the broader issues related to open data for ASA journals.” Eight months later, after an ad hoc committee report, the publications committee voted to “form an ad hoc committee [a different one this time] to create a statement regarding conditions for sharing data and research materials in a context of ethical and inclusive production of knowledge,” and to, “review the question about sharing data currently asked of all authors submitting manuscripts to incorporate some of the key points of the Committee on Publications discussion.” The following January (2020), the main committee was informed that the ad hoc committee had been formed, but hadn’t had time to do its work. Eight months later, the new ad hoc committee proposed a policy: ask authors who publish in ASA journals to declare whether their data and research materials are publicly available, and if not why not, with the answers to be appended in a footnote to each article. The minutes aren’t published yet, but I seem to remember us approving the proposal (minutes should appear in the spring, 2021). So, after two years, all articles are going to report whether or not materials are available. Someday. Not bad, for ASA!

To see how we’re doing in the meantime, and inspired by the Twitter exchange, I flipped through the last four issues of American Sociological Review, the flagship journal of the association, to assess the status of data and materials sharing. That is, 24 articles published in 2020. The papers and what I found are listed in the table below.

There were six qualitative papers and three mixed qualitative/quantitative papers. None of these provided access to research materials such as analysis code, interview guides, survey instruments, or transcripts — or provided an explanation for why these materials were not available. Among the 15 quantitative papers, four provided links to replication packages, with the code required to replicate the analyses in the papers. Some of these used publicly available data, or included the data in the package, while the others would require additional steps to gain access to the data. The other 11 provided neither data, code, nor other materials.

That’s just from flipping through the papers, searching for “data,” “code,” “available,” reading the acknowledgments and footnotes, and so on. So I may have missed something. (One issue, which maybe the new policy will improve, is that there is no standard place on the website or in the paper for such information to be conveyed.) Many of the papers include a link on the ASR website to “Supplemental Material,” but in all cases this was just a PDF with extra results or description of methods, and did not include computer code or data. The four papers that had replication packages all linked to external sites, such as GitHub or Dataverse; those are great, but they are not within the journal’s control, so the journal can’t ensure the materials are correct or maintained over time.

I’m not singling out papers (which, by the way, seem excellent and very interesting — good journal!), just pointing out the pattern. Let’s just say that any of these authors could have provided at least some research materials in support of the paper, if they had been personally, normatively, or formally compelled to do so.
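As a rough check on counts like these, a quick tally along the following lines would do it, assuming the table below were exported to a CSV with the same column headers (asr_2020_audit.csv is a hypothetical file name).

```python
# Rough tally of the audit table below; assumes a hypothetical CSV export
# with the same column headers as the table.
import pandas as pd

audit = pd.read_csv("asr_2020_audit.csv")

print(audit["Quant/Qual"].value_counts())                          # 15 Quant, 6 Qual, 3 Mixed
print(pd.crosstab(audit["Quant/Qual"], audit["Code available?"]))  # who shares code
```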

Why does that matter?

First, providing things like interview guides, coding schemes, or statistical code is helpful to the next researcher who comes along. It makes the article more useful in the cumulative research enterprise. Second, it helps readers identify possible errors or alternative ways of doing the analysis, which would be useful both to the original authors and to subsequent researchers who want to take up the baton or do similar work. Third, research materials can help people determine if maybe, just maybe, and very rarely, the author is actually just bullshitting. I mean literally, what do we have besides your word as a researcher that anything you’re saying is true? Fourth, the existence of such materials, and the authors’ willingness to provide them, signals to all readers a higher level of accountability, a willingness to be questioned — as well as a commitment to the collective effort of the research community as a whole. And, because it’s such an important journal, that signal might boost the reputation for reliability and trustworthiness of the field overall.

There are vast resources, and voluminous debates, about what should be shared in the research process, by whom, for whom, and when — and I’m not going to litigate it all here. But there is a growing recognition in (almost) all quarters that simply providing the “final” text of a “publication” is no longer the state of the art in scholarly communication, outside of some very literary genres of scholarship. Sociology is really very far behind other social science disciplines on this. And, partly because of our disciplinary proximity to the scholars who raise objections like those I mentioned above, even those of us who do the kind of work where openness is most normative (like the papers below that included replication packages) can’t move forward with disciplinary policies to improve the situation. ASR is paradigmatic: several communities share this flagship journal, the policies of which are serving some more than others.

What policies should ASA and its journals adopt to be less behind? Here are a few: Adopt TOP badges, like the American Psychological Association has; have its journals actually check the replication code to see that it produces the claimed results, like the American Economic Association does; publish registered reports (peer review before results known), like all experimental sciences are doing; post peer review reports, like Nature journals, PLOS, and many others do. Just a few ideas.

Change is hard. Even if we could agree on the direction of change. Brian Nosek, director of the Center for Open Science (COS), likes to share this pyramid, which illustrates their “strategy for culture and behavior change” toward transparency and reproducibility. The technology has improved so that the lowest two levels of the pyramid are pretty well taken care of. For example, you can easily put research materials on COS’s Open Science Framework (with versioning, linking to various cloud services, and collaboration tools), post your preprint on SocArXiv (which I direct), and share them with the world in a few moments, for free. Other services are similar. The next levels are harder, and that’s where we in sociology are currently stuck.

[Figure: Center for Open Science “strategy for culture and behavior change” pyramid]

For some how-to reading, consider Transparent and Reproducible Social Science Research: How to Do Open Science, by Garret Christensen, Jeremy Freese, and Edward Miguel (or this Annual Review piece on replication specifically). For an introduction to Scholarly Communication in Sociology, try my report with that title. Please feel free to post other suggestions in the comments.


Four 2020 issues of American Sociological Review

Reference | Quant/Qual | Data type | Data available? | Code available? | Note
Faber, Jacob W. 2020. “We Built This: Consequences of New Deal Era Intervention in America’s Racial Geography.” American Sociological Review 85 (5): 739–75. | Quant | Census+ | No | No
Brown, Hana E. 2020. “Who Is an Indian Child? Institutional Context, Tribal Sovereignty, and Race-Making in Fragmented States.” American Sociological Review 85 (5): 776–805. | Qual | Archival | No | No
Daminger, Allison. 2020. “De-Gendered Processes, Gendered Outcomes: How Egalitarian Couples Make Sense of Non-Egalitarian Household Practices.” American Sociological Review 85 (5): 806–29. | Qual | Interviews | No | No
Mazrekaj, Deni, Kristof De Witte, and Sofie Cabus. 2020. “School Outcomes of Children Raised by Same-Sex Parents: Evidence from Administrative Panel Data.” American Sociological Review 85 (5): 830–56. | Quant | Administrative | No | Upon request | Info on how to obtain data provided.
Becker, Sascha O., Yuan Hsiao, Steven Pfaff, and Jared Rubin. 2020. “Multiplex Network Ties and the Spatial Diffusion of Radical Innovations: Martin Luther’s Leadership in the Early Reformation.” American Sociological Review 85 (5): 857–94. | Quant | Network | No | No | Says data is in the ASR online supplement but it’s not.
Smith, Chris M. 2020. “Exogenous Shocks, the Criminal Elite, and Increasing Gender Inequality in Chicago Organized Crime.” American Sociological Review 85 (5): 895–923. | Quant | Network | No | No | Code described.
Storer, Adam, Daniel Schneider, and Kristen Harknett. 2020. “What Explains Racial/Ethnic Inequality in Job Quality in the Service Sector?” American Sociological Review 85 (4): 537–72. | Quant | Survey | No | No
Ranganathan, Aruna, and Alan Benson. 2020. “A Numbers Game: Quantification of Work, Auto-Gamification, and Worker Productivity.” American Sociological Review 85 (4): 573–609. | Mixed | Mixed | No | No
Fong, Kelley. 2020. “Getting Eyes in the Home: Child Protective Services Investigations and State Surveillance of Family Life.” American Sociological Review 85 (4): 610–38. | Qual | Mixed | No | No
Musick, Kelly, Megan Doherty Bea, and Pilar Gonalons-Pons. 2020. “His and Her Earnings Following Parenthood in the United States, Germany, and the United Kingdom.” American Sociological Review 85 (4): 639–74. | Quant | Survey | Yes | Yes | Offsite replication package.
Burdick-Will, Julia, Jeffrey A. Grigg, Kiara Millay Nerenberg, and Faith Connolly. 2020. “Socially-Structured Mobility Networks and School Segregation Dynamics: The Role of Emergent Consideration Sets.” American Sociological Review 85 (4): 675–708. | Quant | Administrative | No | No
Schaefer, David R., and Derek A. Kreager. 2020. “New on the Block: Analyzing Network Selection Trajectories in a Prison Treatment Program.” American Sociological Review 85 (4): 709–37. | Quant | Network | No | No
Choi, Seongsoo, Inkwan Chung, and Richard Breen. 2020. “How Marriage Matters for the Intergenerational Mobility of Family Income: Heterogeneity by Gender, Life Course, and Birth Cohort.” American Sociological Review 85 (3): 353–80. | Quant | Survey | No | No
Hook, Jennifer L., and Eunjeong Paek. 2020. “National Family Policies and Mothers’ Employment: How Earnings Inequality Shapes Policy Effects across and within Countries.” American Sociological Review 85 (3): 381–416. | Quant | Survey+ | Yes | Yes | Offsite replication package.
Doering, Laura B., and Kristen McNeill. 2020. “Elaborating on the Abstract: Group Meaning-Making in a Colombian Microsavings Program.” American Sociological Review 85 (3): 417–50. | Mixed | Survey+ | No | No
Decoteau, Claire Laurier, and Meghan Daniel. 2020. “Scientific Hegemony and the Field of Autism.” American Sociological Review 85 (3): 451–76. | Qual | Archival | No | No | “Information on the coding schema is available upon request.”
Kiley, Kevin, and Stephen Vaisey. 2020. “Measuring Stability and Change in Personal Culture Using Panel Data.” American Sociological Review 85 (3): 477–506. | Quant | Survey | Yes | Yes | Offsite replication package.
DellaPosta, Daniel. 2020. “Pluralistic Collapse: The ‘Oil Spill’ Model of Mass Opinion Polarization.” American Sociological Review 85 (3): 507–36. | Quant | Survey | Yes | Yes | Offsite replication package.
Simmons, Michaela Christy. 2020. “Becoming Wards of the State: Race, Crime, and Childhood in the Struggle for Foster Care Integration, 1920s to 1960s.” American Sociological Review 85 (2): 199–222. | Qual | Archival | No | No
Calarco, Jessica McCrory. 2020. “Avoiding Us versus Them: How Schools’ Dependence on Privileged ‘Helicopter’ Parents Influences Enforcement of Rules.” American Sociological Review 85 (2): 223–46. | Qual | Ethnography w/ survey | No | No
Brewer, Alexandra, Melissa Osborne, Anna S. Mueller, Daniel M. O’Connor, Arjun Dayal, and Vineet M. Arora. 2020. “Who Gets the Benefit of the Doubt? Performance Evaluations, Medical Errors, and the Production of Gender Inequality in Emergency Medical Education.” American Sociological Review 85 (2): 247–70. | Mixed | Administrative | No | No
Kristal, Tali, Yinon Cohen, and Edo Navot. 2020. “Workplace Compensation Practices and the Rise in Benefit Inequality.” American Sociological Review 85 (2): 271–97. | Quant | Administrative | No | No
Abascal, Maria. 2020. “Contraction as a Response to Group Threat: Demographic Decline and Whites’ Classification of People Who Are Ambiguously White.” American Sociological Review 85 (2): 298–322. | Quant | Survey experiment | No | No | Preanalysis plan registered. Data embargoed.
Friedman, Sam, and Aaron Reeves. 2020. “From Aristocratic to Ordinary: Shifting Modes of Elite Distinction.” American Sociological Review 85 (2): 323–50. | Quant | Archival | No | No

Where preprints fit in, COVID-19 edition

I recorded a 16-minute talk on the scientific process, science communication, and how preprints fit in to the information ecosystem around COVID-19.

It’s called, “How we know: COVID-19, preprints, and the information ecosystem.” The video is on YouTube here, also embedded below, and the slides, with references, are up here.

Happy to have your feedback, in the comments or any other way.

ASA’s letter against the public interest and our values


Update 1: I submitted a resolution to the ASA Committee on Publications, for consideration at our January meeting. You can read and comment on it here.

Update 2: The Committee on Publications on January 23 voted to approve the following statement: “The ASA Committee on Publications expresses our opposition to the decision by the ASA to sign the December 18, 2019 letter.”

The American Sociological Association has signed a letter that profoundly betrays the public interest and goes against the values that many of us in the scholarly community embrace.

The letter to President Trump, signed by dozens of academic societies, voices opposition to a rumored federal policy change that would require federally funded research be made freely available upon publication, rather than according to the currently mandated 12-month embargo — which ASA similarly, bitterly, opposed in 2012. ASA has not said who made the decision to sign this letter. All I know is that, as a member of the Committee on Publications, I wasn’t consulted or notified. I don’t know what the ASA rules are for issuing such statements in our name, but this one is disgraceful.

The argument is that ASA would not be able to make money selling research generated by federal funding if it were required to be distributed for free. And because ASA would suffer, science and the public interest would suffer. Like when Trump says getting Ukraine to help him win re-election is by definition in the American interest — what helps ASA is what’s good for science.

The letter says:

Currently, free distribution of research findings is subject to a 12-month embargo, enabling American publishers to recover the investment made in curating and assuring the quality of scientific research content. … The current 12-month embargo period provides science and engineering society publishers the financial stability that enables us to support peer review that ensures the quality and integrity of the research enterprise.

That is funny, because in 2012 ASA director Sally Hillsman (since retired) said the 12-month embargo policy “could threaten the ability of scholarly societies, including the ASA, to continue publishing journals” and was “likely to seriously erode and eventually jeopardize our financial ability to perform the critical, value added peer review and editorial functions of scientific publishing.”

The current letter, at least with regard to ASA, tells this whopper: “we support open access and have a strong history of advancing open access through a broad array of operational models.” They literally oppose open access, including in this letter, and including the current, weak, open access policy.

The ASA-signed letter is very similar to one sent about the same time by a different (but overlapping) large group of publishers, including Elsevier, and the U.S. Chamber of Commerce, claiming the rumored policy would hurt ‘merica. But there are subtle differences. The ASA letter refers to “the current proven and successful model for reporting, curating and archiving scientific results and advancing the U.S. research enterprise,” which should not be tampered with. The other letter warns of the danger of “stepp[ing] into the private marketplace” in which they sell research. Knowledge philosopher Peter Suber offered an excellent critique of the market claims in this Twitter thread:

ASA and the other money-making societies really want you to believe there is no way to do curation and peer review without them. If we jeopardize their business model, ASA says, the services they provide would not happen. In fact, the current subscription models and paywalls stand in the way of developing the cheaper, more efficient models we could build right now to replace them. All we need to do is take the money we currently devote to journal subscriptions and publisher profits, and redirect it to the tasks of curation and peer review without profits and paywalls — and free distribution (which is a lot cheaper to administer than paywalled distribution).

The sooner we start working on that the better. In this effort — and in the absence of leadership by scholarly societies — the university libraries are our strongest allies. This is explained by UNC Librarian Elaine Westbrooks in this Twitter thread:

Compare this forward-thinking librarian’s statement with Elsevier. In proudly sharing the publishers’ statement, Elsevier vice president Ann Gabriel said, “Imagine a world without scientific, medical societies and publishers who support scholarship, discovery and infrastructures for peer review, data archiving and networks.” Notice two things in this statement. First, she does not mention libraries, which are the academy-owned institutions that do literally all of this as well. And second, see how she bundles publishers and societies. This is the sad reality. If instead of “societies and publishers” we had “societies and libraries” maybe we’d be getting somewhere. Instead, our societies, including the American Sociological Association, are effectively captured by publishers, and represent their interests instead of the public interest and the values of our community.

I remain very pessimistic about ASA, which is run by a professional group with allegiance to the paywall industry, along with mostly transient, naive, and/or ineffectual academics (of which I am certainly one). But I’m torn, because I want to see a model of scholarly societies that works, which is why I agreed to serve on the ASA Committee on Publications — which mostly does busy work for the association while providing the cover of legitimacy for the professional staff.

Letter of opposition

So I posted a letter expressing opposition to the ASA letter. If you are a sociologist, I hope you will consider sharing and signing it. We got 100 signatures on the first day, but it will probably take more for ASA to care. To share the letter, you can use this link: https://forms.gle/ecvYk3hUmEh2jrETA.

It reads:

In light of a rumored new White House Open Access Policy, the American Sociological Association (ASA), and other scholarly societies, signed a letter to President Trump in support of continued embargoes for federally-funded research.

We are sociologists who join with libraries and other advocates in the research community in support of federal policy to make the results of taxpayer-funded research immediately available to the public for free. We endorse a policy that would eliminate the current 12-month waiting period for open access to the outputs of taxpayer-funded scientific research. Ensuring full open access to publicly-funded research contributes to the public good by improving scientific productivity and equalizing access — including international access — to valuable knowledge that the public has already paid for. The U.S. should join the many other countries that already have strong open access policies.

We oppose the decision by ASA to sign this letter, which goes against our values as members of the research community, and urge the association to rescind its endorsement and to join the growing consensus in favor of open access to scholarship, including our own.

Science finds tiny things nowadays (Malia edition)

We have to get used to living in a world where science — even social science — can detect really small things. Understanding how important really small things are, and how to interpret them, is harder nowadays than just finding them.

Remember when Hanna Rosin wrote this?

One of the great crime stories of the last twenty years is the dramatic decline of sexual assault. Rates are so low in parts of the country — for white women especially — that criminologists can’t plot the numbers on a chart.

Besides being wrong about rape (it has declined a lot, but it’s still high compared with most countries), this was a funny statement about science (I’ve heard we can even plot negative numbers now!). But the point is we have problems understanding, and communicating about, small things.

So, back to names.

In 2009, the peak year for the name Malia in the U.S., 1,681 girls were given that name, according to the Social Security Administration, or .041% of the 4.14 million children born that year (there are no male Malias in the SSA’s public database, meaning they have never recorded more than 4 in one year). That year, 7.5% of women ages 18-44 had a baby. If my arithmetic is right, say you know 100 women ages 18-44, and each of them knows 100 others (and there is no overlap in your network). That would mean there is a 30% chance one of your 10,000 friends of a friend had a baby girl and named her Malia in 2009. But probably there is a lot of overlap; if your friend-of-friend network is only 1,000 women 18-44 then that chance would fall to 3%.
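Here is that arithmetic spelled out (the ~30% figure is roughly the expected number of such births in the 10,000-person network; compounding the probabilities gives closer to 26%):

```python
# Back-of-the-envelope check of the friend-of-a-friend arithmetic above.
p_malia_per_birth = 1681 / 4_140_000  # share of 2009 births named Malia (~0.041%)
p_birth_per_woman = 0.075             # share of women ages 18-44 who had a baby that year
p_per_woman = p_birth_per_woman * p_malia_per_birth  # chance any one woman had a Malia in 2009

for network_size in (10_000, 1_000):
    expected = network_size * p_per_woman
    p_at_least_one = 1 - (1 - p_per_woman) ** network_size
    print(f"{network_size:>6} women: expected {expected:.2f}, P(at least one) = {p_at_least_one:.1%}")
# about 0.30 expected (a ~26% chance) in a network of 10,000, and about 3% in a network of 1,000
```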

Here is the trend in girls named Malia, relative to the total number of girls born, from 1960 to 2016:

[Chart: girls named Malia and total girls born, 1960–2016]

To make it easier to see the Malias, here is the same chart with the y-axis on a log scale.

[Chart: the same series with the y-axis on a log scale]

This shows that Malia has been on a long upward trend, from fewer than 50 per year in the 1960s to more than 1,000 per year now. And it also shows a pronounced spike in 2009, the year Malia peaked at .041%. In that year, the number of people naming daughters Malia jumped 75% before declining over the next three years to resume its previous trend. Here is the detail on the figure, just showing Malia in 2005-2016:

[Chart: detail of the Malia trend, 2005–2016]

What happened there? We can’t know for sure. Even if you asked everyone why they named their kid what they did, I don’t know what answers you would get. But from what we know about naming patterns, and their responsiveness to names in the news (positive or negative), it’s very likely that the bump in 2009 resulted from the high profile of Barack Obama and his daughter Malia, who was 11 when Obama was elected.

What does a causal statement like that really mean? In 2009, it looks to me like about 828 more people named their daughters Malia than would have otherwise, taking into account the upward trend before 2008. Here’s the actual trend, with a simulated trend showing no Obama effect:

[Chart: actual Malia trend versus a simulated trend with no Obama effect]

Of course, Obama’s election changed the world forever, which may explain why the upward trend for Malia accelerated again after 2013. But in this simple simulation, which brings the “no Obama” trend back into line with the actual trend in 2014, there were 1,275 more Malias born than there would have been without the Obama election. This implies that over the years 2008-2013, the Obama election increased the probability of someone naming their daughter Malia by .00011, or .011%.
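My simulated trend was just drawn to rejoin the observed series by 2014, but here is a minimal sketch of one simple way to approximate that kind of counterfactual: fit a linear trend to the pre-2008 counts and sum the gap between observed and projected counts. The file name and the fitting window are assumptions for illustration, not the exact method behind the figures above.

```python
# Hedged sketch of a "no Obama" counterfactual for the Malia counts.
# Assumes a hypothetical file malia_by_year.csv with columns year, count
# built from the SSA names data.
import numpy as np
import pandas as pd

malia = pd.read_csv("malia_by_year.csv").set_index("year")["count"]

pre = malia.loc[1990:2007]                        # illustrative pre-Obama fitting window
slope, intercept = np.polyfit(pre.index, pre.values, 1)

post_years = np.arange(2008, 2014)                # 2008 through 2013
projected = intercept + slope * post_years        # counterfactual counts
excess = (malia.loc[post_years].to_numpy() - projected).sum()
print(f"Excess Malias, 2008-2013: {excess:.0f}")  # the post estimates about 1,275
```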

That is a very small effect. I think it’s real, and very interesting. But what does it mean for anything else in the world? This is not a question of statistical significance, although those tools can help. (These names aren’t a probability sample; they’re a complete count of all names given.) So this is a question for interpreting research findings now that we have these incredibly powerful tools, and very big data to analyze with them. The number alone doesn’t tell the story.

On artificially intelligent gaydar

A paper by Yilun Wang and Michal Kosinski reports being able to identify gay and lesbian people from photographs using “deep neural networks,” which means computer software.

I’m not going to describe it in detail here, but the gist of it is they picked a large sample of people from a dating website who said they were looking for same-sex partners, and an equal number that were looking for different-sex partners, and trained their computers to learn the facial features that could distinguish the two groups (including facial structure measurements as well as grooming things like hairline and facial hair). For a deep dive on the context of this kind of research and its implications, and more on the researchers and the controversy, please read this post by Greggor Mattson first. These notes will be most useful after you’ve read that.

I also reviewed a gaydar paper five years ago, and some of the same critiques apply.

This figure from the paper gives you an idea:

[Figure from the paper]

These notes are how I would start my peer review, if I was peer reviewing this paper (which is already accepted and forthcoming in the Journal of Personality and Social Psychology — so much for peer review [just kidding it’s just a very flawed system]).

The gay samples here are “very” gay, in the sense of being out and looking for same-sex partners. This does not mean that they are “very” gay in any biological, or born-this-way sense. If you could quantitatively score people on the amount of their gayness (say on some kind of scale…), outness and same-sex attraction might be correlated, but they are different things. The correlation here is assumed, and assumed to be strong, but this is not demonstrated. (It’s funny that they think they address the problem of the sample by comparing the results with a sample from Facebook of people who like pages such as “I love being gay” and “Manhunt.”)

Another way of saying this is that the dependent variable is poorly defined, and then conclusions from studying it are generalized beyond the bounds of the research. So I don’t agree that the results:

provide strong support for the PHT [prenatal hormone theory], which argues that same-gender sexual orientation stems from the underexposure of male fetuses and overexposure of female fetuses to prenatal androgens responsible for the sexual differentiation of faces, preferences, and behavior.

If it were my study I might say the results are “consistent” with PHT theory, but it would be better to say, “not inconsistent” with the theory. (There is no data about hormones in the paper, obviously.)

The authors give too much weight to things their results can’t say anything about. For example, gay men in the sample are less likely to have beards. They write:

nature and nurture are likely to be as intertwined as in many other contexts. For example, it is unclear whether gay men were less likely to wear a beard because of nature (sparser facial hair) or nurture (fashion). If it is, in fact, fashion (nurture), to what extent is such a norm driven by the tendency of gay men to have sparser facial hair (nature)? Alternatively, could sparser facial hair (nature) stem from potential differences in diet, lifestyle, or environment (nurture)?

The statement is based on the faulty premise that “nature and nurture are likely to be as intertwined.” They have no evidence of this intertwining. They could just as well have said “it’s possible nature and nurture are intertwined,” or, with as much evidence, “in the unlikely event nature and nurture are intertwined.” So they loaded the discussion with the presumption of balance between nature and nurture, and then go on to speculate about sparse facial hair, for which they also have no evidence. (This happens to be the same way Charles Murray talks about race and IQ: there must be some intertwining between genetics and social forces, but we can’t say how much; now let’s talk about genetics because it’s definitely in there.)

Aside from the flaws in the study, the accuracy rate reported is easily misunderstood, or misrepresented. To choose one example, the Independent wrote:

According to its authors, who say they were “really disturbed” by their findings, the accuracy of an AI system can reach 91 per cent for homosexual men and 83 per cent for homosexual women.

The authors say this, which is important but of course overlooked in much of the news reporting:

The AUC = .91 does not imply that 91% of gay men in a given population can be identified, or that the classification results are correct 91% of the time. The performance of the classifier depends on the desired trade-off between precision (e.g., the fraction of gay people among those classified as gay) and recall (e.g., the fraction of gay people in the population correctly identified as gay). Aiming for high precision reduces recall, and vice versa.

They go on to give a technical, and I believe misleading, example. People should understand that the computer was always picking between two people, one of whom was identified as gay and the other not. It had a high percentage chance of getting that choice right. That’s not saying, “this person is gay”; it’s saying, “if I had to choose which one of these two people is gay, knowing that one is, I’d choose this one.” What they don’t answer is this: Given 100 random people, 7 of whom are gay, how many would the model correctly identify yes or no? That is the real-life question most people probably think the study is answering.
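To make the base-rate point concrete, here is a small worked example using hypothetical sensitivity and specificity values (the paper reports AUC, not these numbers): in a group of 100 people of whom 7 are gay, even a fairly accurate classifier flags more straight people than gay people.

```python
# Illustration of the base-rate problem, with hypothetical accuracy values.
def expected_confusion(n, base_rate, sensitivity, specificity):
    positives = n * base_rate              # people who are gay
    negatives = n - positives              # people who are not
    tp = sensitivity * positives           # correctly flagged
    fp = (1 - specificity) * negatives     # incorrectly flagged
    precision = tp / (tp + fp)             # share of flagged people who are gay
    return tp, fp, precision

tp, fp, precision = expected_confusion(n=100, base_rate=0.07, sensitivity=0.70, specificity=0.90)
print(f"correctly flagged ~{tp:.1f}, incorrectly flagged ~{fp:.1f}, precision ~{precision:.0%}")
# ~4.9 true positives vs. ~9.3 false positives: most people labeled gay would not be.
```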

As technology writer Hal Hodson pointed out on Twitter, if someone wanted to scan a crowd and identify a small number individuals who were likely to be gay (and ignoring many other people in the crowd who are also gay), this might work (with some false positives, of course).


Probably someone who wanted to do that would be up to no good, like an oppressive government or Amazon, and they would have better ways of finding gay people (like at pride parades, or looking on Facebook, or dating sites, or Amazon shopping history directly — which they already do of course). Such a bad actor could also train people to identify gay people based on many more social cues; the researchers here compare their computer algorithm to the accuracy of untrained people, and find their method better, but again that’s not a useful real-world comparison.

Aside: They make the weird but rarely-necessary-to-justify decision to limit the sample to White participants (and also offer no justification for using the pseudoscientific term “Caucasian,” which you should never ever use because it doesn’t mean anything). Why couldn’t respondents (or software) look at a Black person and a White person and ask, “Which one is gay?” Any artificial increase in the homogeneity of the sample will increase the likelihood of finding patterns associated with sexual orientation, and misleadingly increase the reported accuracy of the method used. And of course statements like this should not be permitted: “We believe, however, that our results will likely generalize beyond the population studied here.”

Some readers may be disappointed to learn I don’t think the following is an unethical research question: Given a sample of people on a dating site, some of whom are looking for same-sex partners and some of whom are looking for different-sex partners, can we use computers to predict which is which? To the extent they did that, I think it’s OK. That’s not what they said they were doing, though, and that’s a problem.

I don’t know the individuals involved, their motivations, or their business ties. But if I were a company or government in the business of doing unethical things with data and tools like this, I would probably like to hire these researchers, and this paper would be good advertising for their services. It would be nice if they pledged not to contribute personally to such work, especially any efforts to identify people’s sexual orientation without their consent.

The sky is falling because of feminist biology, Factual Feminist edition

The other day I explained why, despite her mocking tone,  the “Factual Feminist” (Christina Sommers) doesn’t have the factual basis to undermine commonly-used statistics on rape. Now she has a video out on “feminist science.” No, it’s not a joke from The Simpsons, she says:

A new feminist biology program at the University of Wisconsin is all too real… Is feminist biology likely to contribute to our knowledge and understanding of the world? The Factual Feminist is skeptical.

The program in question is really just a post-doctoral fellowship. It looks like a privately-endowed fund to hire one postdoc. This is not a major curriculum intervention. The first postdoc in the program is Caroline VanSickle, a biological anthropologist from the University of Michigan who does work on ancient female pelvic bones and their implications for birth stuff. She was quoted by the right-wing Campus Reform (a project of the Leadership Institute) this way:

“We aren’t doing science well if we ignore the ideas and research of people who aren’t male, white, straight, or rich,” VanSickle said in an email to Campus Reform. “Feminist science seeks to improve our understanding of the world by including people with different viewpoints. A more inclusive science means an opportunity to make new discoveries.”

I don’t know the evidence on whether the ideas of biologists who aren’t male, White, straight, or rich are ignored in science today, but this sentiment seems unobjectionable to me – we aren’t doing science well if we ignore anyone’s (good) ideas. Who could object to “including people with different viewpoints”? But Sommers, for some reason misquoting her only source for the story, says,

She explained to Campus Reform that, quote, in order to do science well, she said, we can’t ignore the ideas and research of people who just don’t happen to be male. But wait a minute. Women are hardly ignored in biology. In fact, they have far surpassed men in earning biology degrees. What is more, women are flourishing, and winning Nobel Prizes in that field.

On the screen flashes a table showing women getting 61% of BA degrees in biology, 59% of MAs, and 54% of PhDs. If we’re talking about whether women are ignored in biology, I think it’s the PhDs that matter, so 54% is not quite “far surpassed.” More to the point, although women first surpassed men in receiving biology BA degrees in 1988 — a quarter of a century ago — they are currently only 23% of full professors in biology. I’m not arguing about whether this reflects job discrimination against female biologists. The point is that if only a small minority of the most influential biologists are women, and if there are common differences in how men and women do biology, then the views of the latter are going to be less well represented.

To show how overblown this worry is, Sommers then flashes this image of all those women winning Nobel Prizes in “that field” (actually the prizes are for “Physiology or Medicine,” since there is no Nobel for biology):

[Image: the 10 women who have won the Nobel Prize in Physiology or Medicine]

Those women sure seem to be flourishing. And that’s every woman who ever won a Nobel in Physiology or Medicine — all 10 of them. Since the 1940s, when the first of these women flourished, men have been awarded 162 Nobels in that field — the other 94% of the prizes. The peak decade for women was the 2000s, when they won 15% of the prizes (the most recent in 2009).

At Wisconsin, the single “feminist biology” postdoc will also develop an undergraduate course in gender and biology. This seems like a fine idea. Maybe it will encourage even more women to overrun the biological sciences. Call me naive, but we’re still not exactly drowning in female biologists.

After going on to pick on a few individual feminists, Sommers concludes that:

…feminist theory [has] been built on a foundation of paranoia about the patriarchy, half-truths, untruths, oversimplifications, and it’s immune to correction.

Raising the question: If feminism is rubber, and the Factual Feminist is glue, does what she says bounce off feminism and stick to her?

Full disclosure: My mother is a biologist. And a feminist. So you know I’m right. And objective.

Fundamentally opposed to science?

Conservative religious fundamentalists really don’t trust the scientific establishment.

In the discussion of academia’s liberalism, we should also consider the public’s mistrust of science, especially the conservative and fundamentalist public. Why would people who don’t trust science become scientists?

Last year Gordon Gauchat reported in American Sociological Review that Americans’ trust in the scientific community was holding steady except for political conservatives and those who attend church regularly, and that the trend was not explained by the lower education levels of conservatives or religious people (in fact, educated conservatives expressed the lowest levels of trust in science). His conclusion was that the trend showed the politicization of science, which is not the way modernity is supposed to go.

In response, Darren Sherkat blogged that Gauchat underestimated the importance of religion in explaining conservatives’ opposition to science because he only used the General Social Survey’s measure of the frequency of religious attendance instead of a measure of beliefs. And he provided a chart from the GSS showing that religious fundamentalists had lower trust in science whether they were Republicans or not. Sherkat wrote:

Any social scientist who studies politics, religion, and science should know that the reason why Republicans are at war against science is to court the vote of fundamentalist Christian simpletons who are opposed to science and reason. … What drives Republican opposition to science is that more Republicans are fundamentalists who believe that the Bible is the literal word of god.

You got your fundamentalism in my conservatism

As I look at it, conservatism and fundamentalism are both at fault. My take on the trends shows that, in addition to the growing divide between politically conservative fundamentalists and politically liberal non-fundamentalists, liberal fundamentalists have grown more trusting of science, while conservative non-fundamentalists have grown less trusting.

I used the GSS from 1974 through the latest 2012 survey. To highlight the polarization I show only those who are “extremely liberal,” “liberal,” “conservative,” or “extremely conservative,” leaving out those who are “slightly” liberal or conservative, or moderate. So this is not the whole population (I’ll return to that below).

The question was:

I am going to name some institutions in this country. As far as the people running these institutions are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them? … Scientific community.

It’s as close as we get to a question about science itself. For fundamentalism, GSS asked whether the respondent’s religion was fundamentalist, moderate, or liberal. I dichotomized it to fundamentalists versus everyone else (including people with no religion).*
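For anyone who wants to poke at this themselves, here is a rough sketch of the recodes in Python. It assumes a GSS extract with the standard variable names (YEAR, POLVIEWS, FUND, CONSCI, AGE, SEX); the file name is made up, and missing-data codes are ignored:

```python
# Sketch of the recodes described above, using standard GSS variable names.
# "gss_extract.csv" is a hypothetical file; a real extract needs missing-data handling.
import pandas as pd

gss = pd.read_csv("gss_extract.csv")

d = gss[gss["YEAR"].between(1974, 2012)].copy()

# Keep only the clearly liberal or conservative, dropping the "slightly"s and moderates
# (POLVIEWS: 1 = extremely liberal ... 7 = extremely conservative)
d = d[d["POLVIEWS"].isin([1, 2, 6, 7])]
d["conservative"] = d["POLVIEWS"].isin([6, 7])

# Fundamentalists versus everyone else, including no religion (FUND: 1 = fundamentalist)
d["fundamentalist"] = d["FUND"] == 1

# A great deal of confidence in the scientific community (CONSCI: 1 = a great deal)
d["great_deal"] = d["CONSCI"] == 1

# Percent with a great deal of confidence, by decade and group
d["decade"] = (d["YEAR"] // 10) * 10
trend = (d.groupby(["decade", "conservative", "fundamentalist"])["great_deal"]
           .mean().mul(100).round(1))
print(trend)
```

That decade-by-group table is roughly the calculation behind the chart below.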

These are the people expressing a great deal of confidence in the scientific community:

[Chart: percent with a great deal of confidence in the scientific community, by decade, for the four politics-by-fundamentalism groups]

These trends are heavily smoothed (down to four decades), because the numbers bounce around a lot from year to year, as the samples are only between 60 and 220 in each cell in the individual years. To do a simple test of the trends, I ran a regression using time and interactions between time and politics-fundamentalism dummy variables, with controls for age and sex (old people and men hate science more than regular people, net of religion and politics).

The regression confirms what the graph shows: significant declines in trust among conservatives whether fundamentalist or not, and an increase in trust among liberal fundamentalists. The trend for liberal non-fundamentalists was flat. (Details on request.)
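Here is a sketch of one way such a test could be set up, as a linear probability model with group-by-time interactions. It uses the same hypothetical GSS extract and invented file name as above; this is an illustration, not my actual specification:

```python
# Sketch of the trend test: time interacted with the four politics-by-fundamentalism
# groups, controlling for age and sex. Same hypothetical extract as above.
import pandas as pd
import statsmodels.formula.api as smf

d = pd.read_csv("gss_extract.csv")
d = d[d["YEAR"].between(1974, 2012) & d["POLVIEWS"].isin([1, 2, 6, 7])].copy()
d["great_deal"] = (d["CONSCI"] == 1).astype(int)
d["year_c"] = d["YEAR"] - 1974                     # years since the first survey used

# Four groups from the politics and fundamentalism dummies
d["conservative"] = d["POLVIEWS"].isin([6, 7])
d["fundamentalist"] = d["FUND"] == 1
d["group"] = (d["conservative"].map({True: "con", False: "lib"}) + "_" +
              d["fundamentalist"].map({True: "fund", False: "nonfund"}))

# Linear probability model: do the time trends differ across groups?
# (GSS SEX: 1 = male, 2 = female)
model = smf.ols("great_deal ~ year_c * C(group) + AGE + C(SEX)", data=d).fit()
print(model.params.filter(like="year_c"))          # group-specific time trends
```

A logit would be the more conventional choice for a yes/no outcome; the linear version is just easier to read off.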

I left out of that analysis the people who were slightly conservative, moderate, or slightly liberal. That’s a shrinking majority of the population, which breaks down like this from the 1970s to the last decade:

[Chart: shares of the population in each political-views/fundamentalism group, by decade]

So the bad news for science is that the increasingly anti-science groups are growing in the population: conservative fundamentalists and conservative non-fundamentalists. The big majority in the middle (the green area in the chart) is not growing more or less anti-science (even when you break it down by fundamentalism), but it is shrinking. The liberal fundamentalists are getting more into science, but also vanishing.

Just wait till they find out (some) sociology is part of the “scientific community.”

Note: This is a blog-post, not peer-reviewed research. I might be wrong.

* Sherkat uses a question about how to interpret the Bible (literal word of God, inspired word of God, book of fables) rather than the fundamentalism question. 95% of the people who described their religion as “fundamentalist” describe the Bible as either the literal or the inspired word of God.
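That check is a one-liner; here is a sketch, again with the standard GSS names and the same made-up file, assuming the usual coding of BIBLE (1 = literal word of God, 2 = inspired word, 3 = book of fables):

```python
# Share of self-described fundamentalists who say the Bible is the literal or
# inspired word of God (FUND == 1; BIBLE in {1, 2}). Hypothetical file name.
import pandas as pd

gss = pd.read_csv("gss_extract.csv")
fundamentalists = gss[gss["FUND"] == 1].dropna(subset=["BIBLE"])
share = fundamentalists["BIBLE"].isin([1, 2]).mean()
print(f"{share:.0%} say literal or inspired word of God")
```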