What the editors of 6000 journals tell us about gender, international diversity, open access, and research transparency

Micah Altman and I have written a paper using the new Open Editors dataset from Andreas Pacher, Tamara Heck, and Kerstin Schoch. They scraped data on almost half a million editors (editors in chief, editors, and editorial board members) at more than 6000 journals from 17 publishers (most of the big ones; they’ve since added more). Micah and I genderized the editors (fuzzily), geolocated them by country, and coded the journals as open access or not (using the Directory of Open Access Journals) and as practicing research transparency or not (using the Transparency and Openness Promotion signatories). Beyond basic curiosity about diversity, we wondered whether journals that practice open access and research transparency have better gender and international diversity.
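To give a sense of the kind of coding involved — this is only a rough sketch, not the actual pipeline, and the file and column names are made up — the basic steps look something like this:

```python
# A sketch only: the real pipeline is Micah's R code. File and column names
# here are hypothetical stand-ins for the Open Editors data and lookup tables.
import pandas as pd

editors = pd.read_csv("open_editors.csv")       # name, role, journal, issn, country
names = pd.read_csv("name_gender.csv")          # first_name, gender, probability
doaj = set(pd.read_csv("doaj_journals.csv")["issn"])       # DOAJ journal list
top = set(pd.read_csv("top_signatories.csv")["issn"])      # TOP signatories

# Fuzzy gender coding: match editors to a name-frequency table on first name
editors["first_name"] = editors["name"].str.split().str[0].str.lower()
editors = editors.merge(names, on="first_name", how="left")

# Journal-level coding: open access (DOAJ) and research transparency (TOP)
editors["open_access"] = editors["issn"].isin(doaj)
editors["top_signatory"] = editors["issn"].isin(top)

# Diversity summaries by journal type ("country" here stands in for the
# geolocated affiliation in the real data)
summary = (editors.groupby(["open_access", "top_signatory"])
           .agg(n_editors=("name", "size"),
                share_women=("gender", lambda g: (g == "female").mean()),
                n_countries=("country", "nunique")))
print(summary)
```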

The results show overwhelming US and European dominance, not surprisingly. And male dominance, which is more extreme among editors in chief, across all disciplines. Open access journals are a little less gender diverse, and transparency-practicing journals a little more internationally diverse, but those relationships aren’t strong. There are other differences by discipline. A network analysis shows not much overlap between journals, outside of a few giant clusters (which might indicate questionable practices), although it’s hard to say for sure — journals should really use ORCIDs for their editors. Kudos to Micah for doing the heavy lifting on the coding, which involved multiple levels of cleaning and recoding (and for making the R markdown file for the whole thing available).
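For the network part, the basic idea is to link journals that share editors, which, without ORCIDs, means matching on cleaned-up names. A rough sketch (again with made-up file and column names):

```python
# Rough sketch of the overlap count: two journals are linked when the same
# (cleaned) editor name appears on both boards. Without ORCIDs, name matching
# is the weak link, which is why the clusters are hard to interpret.
from collections import defaultdict
from itertools import combinations
import pandas as pd

editors = pd.read_csv("open_editors.csv")       # hypothetical: name, journal, ...

journals_by_editor = defaultdict(set)
for name, journal in zip(editors["name"], editors["journal"]):
    journals_by_editor[" ".join(str(name).lower().split())].add(journal)

shared = defaultdict(int)                       # (journal_a, journal_b) -> count
for js in journals_by_editor.values():
    for a, b in combinations(sorted(js), 2):
        shared[(a, b)] += 1

# Journal pairs with the most shared editors; dense clusters of such pairs are
# what might indicate questionable practices.
top_pairs = sorted(shared.items(), key=lambda kv: kv[1], reverse=True)[:20]
print(top_pairs)
```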

Lots of details in the draft, here. Feedback welcome!

Here are the editors, by country:

Basic self-promotion

Five years ago today I wrote a post called “Basic self promotion” on here. There has been a lot of work and advice on this subject in the intervening years (including books, some of which I reviewed here). So this is not as necessary as it was then. But it holds up pretty well, with some refreshing. So here is a lightly revised version. As always, happy to have your feedback and suggestions in the comments — including other things to read.


Present yourself. PN Cohen photo: https://flic.kr/p/2hyYzqs.

If you won’t make the effort to promote your research, how can you expect others to?

These are some basic thoughts for academics promoting their research. You don’t have to be a full-time self-promoter to improve your reach and impact, but the options are daunting and I often hear people say they don’t have time to do things like run a Twitter account or write for blogs and other publications. Even a relatively small effort, if well directed, can help a lot. Don’t let the perfect be the enemy of the good. It’s fine to do some things pretty well even if you can’t do everything to your ideal standard.

It’s all about making your research better — better quality, better impact. You want more people to read and appreciate your work, not just because you want fame and fortune, but because that’s what the work is for. I welcome your comments and suggestions below.

Present yourself

Make a decent personal website and keep it up to date with information about your research, including links to freely available copies of your publications (see below). It doesn’t have to be fancy. I’m often surprised at how many people are sitting behind years-old websites. (I recently engaged Brigid Barrett, who specializes in academics’ websites, to redesign mine.)

Very often people who come across your research somewhere else will want to know more about you before they share, report on, or even cite it. Your website gives your work more credibility. Has this person published other work in this area? Taught related courses? Gotten grants? These are things people look for. It’s not vain or obnoxious to present this information; it’s your job. I recommend a good-quality photo, updated at least every five years.

Make your work available

Let people read the actual research. For work not yet “published” in journals, post drafts when they are ready for readers (a good time is when you are ready to send a paper to a conference or journal – or earlier, if you are comfortable sharing it). This helps you establish precedence (planting your flag) and allows the work to generate feedback and attract readers. It’s best to use a disciplinary archive such as SocArXiv (which, as the director, I highly recommend), your university repository, or both. This improves how your papers show up in web searches (including Google Scholar), gets them indexed for things like citation and grant analysis, and ensures they are archived. You can also get a digital object identifier (DOI), which allows them to enter the great stream of research metadata. (See the SocArXiv FAQ for more answers.)

When you do publish in journals, prefer open-access journals because it’s the right thing to do and more people can read your work there. If a paper is paywalled, share a preprint or postprint version. On your website or social media feeds, please don’t just link to the paywalled versions of your papers; that’s the click of death for someone just browsing around, plus it’s elitist and antisocial. You can almost always put up a preprint without violating your agreements (ideally you wouldn’t publish anywhere that won’t let you do this). To see the policies of different journals regarding self-archiving, check out the simple database at SHERPA/RoMEO, or, of course, the agreement you signed with the journal.

I oppose private sites like Academia.edu, ResearchGate, or SSRN. These are just private companies making a profit from doing what your university and its library, and nonprofits like SocArXiv, are already doing for the public good. Your paper will not be discovered more if it is on one of these sites.

I’m not an open access purist, believe it or not. (If you got public money to develop a cure for cancer, that’s different; then I am a purist.) Not everything we write has to be open access (books, for example), but the more of our work that is open, the better, especially original research. This is partly an equity issue for readers, and partly a way to establish trust and accountability in our work. Readers should be able to see our work product – our instruments, our code, our data – to evaluate its veracity (and to benefit their own work). And for the vast majority of readers who don’t want to get into those materials, the fact that they are there increases our collective accountability and trustworthiness. I recommend using the Open Science Framework, a free, nonprofit platform for research sharing and collaboration.

Actively share your work

In the old days we used to order paper reprints of papers we published and literally mail them to the famous and important people we hoped would read and cite them. Nowadays you can email them a PDF. Sending a short note that says, “I thought you might be interested in this paper I wrote” is normal, reasonable, and may be considered flattering. (As long as you don’t follow up with repeated emails asking if they’ve read it yet.)

Social media

If you’re reading this, you probably use at least basic social media. If not, I recommend it. This does not require a massive time commitment and doesn’t mean you have to spend all day doomscrolling — you can always ignore them. Setting up a public profile on Twitter or a page on Facebook gives people who do use them all the time a way to link to you and share your profile. If someone wants to show their friends one of my papers on Twitter, this doesn’t require any effort on my part. They tweet, “Look at this awesome new paper @familyunequal wrote!” (I have some vague memory of this happening with my papers.) When people click on the link they go to my profile, which tells them who I am and links to my website.

Of course, a more active social media presence does help draw people into your work, which leads to exchanging information and perspectives, getting and giving feedback, supporting and learning from others, and so on. Ideally. But even low-level attention will help: posting or tweeting links to new papers, conference presentations, other writing, etc. No need to get into snarky chitchat and following hundreds of people if you don’t want to. To see how sociologists are using Twitter, you can visit the list I maintain, which has more than 1600 sociologists. This is useful for comparing profile and feed styles.

Other writing

People who write popular books go on book tours to promote them. People who write minor articles in sociology journals might send out some tweets, or share them with their friends on Facebook. In between are lots of other places you can write something to help people find and learn about your work. I still recommend a blog format, easily associated with your website, but this can be done different ways. As with publications themselves, there are public and private options, open and paywalled. Open is better, but some opportunities are too good to pass up – and it’s OK to support publications that charge subscription or access fees, if they deserve it.

There are also good organizations now that help people get their work out. In my area, for example, the Council on Contemporary Families is great (I’m a former board member), producing research briefs related to new publications, and helping to bring them to the attention of journalists and editors. Others work with the Scholars Strategy Network, which helps people place Op-Eds, or the university-affiliated site The Society Pages, or others. In addition, there are blogs run by sections of the academic associations, and various group blogs. And there is Contexts (which I used to co-edit), the general interest magazine of ASA, where they would love to hear proposals for how you can bring your research out into the open (for the magazine or their blog).


For more on the system we use to get our work evaluated, published, transmitted, and archived, I’ve written this report: Scholarly Communication in Sociology: An introduction to scholarly communication for sociology, intended to help sociologists in their careers, while advancing an inclusive, open, equitable, and sustainable scholarly knowledge ecosystem.

Policy implications are discussed (often to poor effect, in sociology journals)

Commentary, data, suggestions.

Watch where you’re going. (PNC photo: https://flic.kr/p/2gRHfd5.)

The ritualistic invocation of “policy implications” in sociology writing is puzzling. I don’t know its origin, but it appears to have come (like so much else that we cherish because we despise ourselves) from economists. The Quarterly Journal of Economics was the first (in the JSTOR database) to use the term in an abstract, including it 11 times over the 1950s and 1960s before the first sociology journal (Journal of Health and Social Behavior) finally followed suit in 1971.

That 1971 article projected a tone that persists to this day. In a paragraph tacked onto the end of the paper, Kohn and Mercer speculated that inflated claims about the dangers of marijuana “may actually contribute to dangerous forms of drug abuse among less well-educated youth” (although the paper was a survey of college students). “If this is the case,” they continued, “then the best corrective may be to revise law, social policy, and official information in line with the best current scientific knowledge about drugs and their effect.” The analysis in the paper had nothing to do with anti-drug policy, instead pursuing an interesting empirical examination of the relationship of ideology (rebellious versus authoritarian) and drug use. The “implications” are vague and unconnected to any actually-existing policy debate (and none is cited). Because they are in this case both banal and hopelessly idealistic — intellectual bedfellows that find themselves miserably at home in the sociological space many in the public deride as “academic” — it’s hard to imagine the paper having any policy effect. Not that there’s anything wrong with that.

Fifty years later, “policy implications” has become an institution in academic sociology — by no means universal, but a fixed feature of the landscape, demanded by some editors, reviewers, advisors, and funders. The prevalence of this trope coincides with the imperative for “engagement” (which I’ve written about previously) driven both by our internal sense of mission and our capitulation to external pressure to justify the existence of our work. These are admirable impulses, but they’re poorly served by many of our current practices. I hope this discussion of “policy implications,” and the suggestions that follow, help push us toward more productive responses.

How it’s done

Most sociologists don’t do a lot of policy work. It’s not our language or social or professional milieu, and often not part of our formal training. So what do we mean, in theory and practice, when we offer “policy implications” for our research? There is a very wide range of applications, from evaluations of specific local policies to critiques of state power itself. I collected a lot of examples, which I’ll describe, but first a very prominent one, from “Social Conditions as Fundamental Causes of Health Inequalities: Theory, Evidence, and Policy Implications,” by Phelan, Link, and Tehranifar (2010). Their promise of policy implications is right in the title. From the policy implications section, here is a list of policies intended to reduce inequality in social conditions:

“Policies relevant to fundamental causes of disease form a major part of the national agenda, whether this involves the minimum wage, housing for homeless and low-income people, capital-gains and estate taxes, parenting leave, social security, head-start programs and college-admission policies, regulation of lending practices, or other initiatives of this type.”

Then, in the conclusion, they explain that in addition to leveling inequalities in social condition, we need policies that “minimiz[e] the extent to which socioeconomic resources buy a health advantage” — in the U.S. context, this is interpretable as universal healthcare.

These are almost broad enough — considered together — to constitute a worldview (or perhaps a party platform) rather than a specific policy prescription. If this were actual policy analysis, we would have to be concerned with, for example, the extent to which policies to raise the minimum wage, raise taxes, house the homeless, and expand educational opportunity actually produce reductions in inequality, and which of these is most effective, or important, or feasible, and so on. But this is not policy analysis, and none is cited. These are one step down from documenting wage disparities and offering socialism as “policy implications.” This is a review paper, mostly theory and a summary of existing evidence — which makes these implications more suitable here than when they are attached to narrow empirical papers (see below). It has been very influential, reaching thousands of students and researchers, and maybe people in policy settings as well (one could try to assess that), by helping to establish the connection between health inequality and inequality on other dimensions. Important work. But the way I read the term, this is too broad to be reduced to “policy implications” — it’s more like social implications, or theoretical implications.

127 more examples

To generalize about the practice of “policy implications,” I collected some data. I used a “topic” search in Web of Science, which searches title, abstract, and keywords, for the phrase “policy implications,” in articles from 2010 to 2020. This tree map from WoS shows the disciplinary breakdown of the journals with the search term, which remains dominated by economics.

I chose the sociology category, then weeded out journals that were very interdisciplinary (like Journal of Marriage and Family), and some articles that turned out to be false positives, and ended up with 127 articles in these 52 journals.*

First I read all the abstracts and came up with a three-category code for abstracts that (1) had specific policy implications, (2) made general policy pronouncements, or (3) just promised policy implications. Here are some details.

Of the 127 abstracts, only two had what I read as specific policy implications. Martin (2018) wrote, “for dietary recommendations to transform eating practices it is necessary to take into account how, while cooking, actors draw on these various forms of valuation.” And Andersen and van de Werfhorst (2010) wrote, “strengthening the skill transparency of the education system by increasing secondary and tertiary-level differentiation may strengthen the relationship between education and occupation.” These aren’t as specific as particular pieces of legislation or policies, but close enough.

I put 29 papers in the general pronouncements category. For example, I put Phelan, Link, and Tehranifar (2010) in this category. In another, Wiborg and Hansen (2018) wrote that their findings implied that “increasing equal educational opportunities do not necessarily lead to greater opportunities in the labor market and accumulation of wealth” (reading inside the paper confirmed this is the extent of the discussion). This by Stoilova, Ilieva-Trichkova, and Bieri (2020) is archetypal: “The policy implications are to more closely consider education in the transformation of gender-sensitive norms during earlier stages of child socialization and to design more holistic policy measures which address the multitude of barriers individuals from poor families and ethnic/migrant background face” (reading inside the paper, there are several other statements at the same level). I read three other papers in this category and found similar general implications, e.g., “if the policy goal is to enhance the bargaining position of labour and increase its share of income, spending policy should prioritise the expenditures on the public sector employment” (Pensiero 2017).

“Policy implications are discussed”

The largest category, 97 papers (76%), offered no policy implications in the abstract, but rather some version of “policy implications are discussed.” It is an odd custom, to mention the existence of a section in the paper without divulging its contents. Anyway, to get a better sense of what “policy implications are discussed” means, I randomly sampled 10 of the papers in this category and read the relevant section. (I have no beef with these papers or their authors; they were selected randomly, and I’m only commenting on what may be the least important aspect of their contributions.)

The first category among these, with 5 of the 10 papers, comprises those without substantive policy contributions. Some have banal statements at the end, which the author and most readers probably already believed, such as, “If these results are replicated, programs should be implemented that will solicit the help of grandparents in addition to parents” (Liu 2016). I also include here Visser et al. (2013), who conclude that their “findings show general support for basic ecological perspectives of fear of crime and feelings of unsafety,” e.g., that reducing crime in the absence of better social protection will not improve levels of fear and feelings of unsafety. I code this one as without substantive policy contribution because that’s a big claim about the entire state policy structure, which would require much more evidence to adjudicate, much less implement, and the paper offers only a small empirical nudge in one direction (which, again, is fine!).

Several in this category offered essentially no policy implications. This includes Wang (2010), who states at the outset that “the question of motives for private transfers is one with important policy implications” for public transfer programs like food stamps and social security, but never comes back to discuss policies relevant to the results. And Barrett and Pollack (2011), who recommend that health practitioners develop better understanding of the issues raised and that “contemporary sexual civil rights efforts” pay more attention to sexual discrimination. Finally, Lepianka (2015) reports on media depictions of poverty and related policy, but doesn’t offer any implications of the study itself for policy. So, half of these had abstracts that overpromised in terms of policy.

The other 5 papers do include substantive policy implications, explored to varying degrees. One is hard-hitting but brief: Shwed et al. (2018), whose analysis has direct implications which they do not thoroughly discuss. Their “unequivocal” result is that “multicultural schools, with their celebration of difference, entail a cost in terms of social integration compared to assimilationist schools—they enhance ethnic segregation in friendship networks. … While empowering minorities, it enhances social closure between groups.” The empirical analysis they did could no doubt be used as part of a policy analysis on the question of cultural orientation of schools in Israel.

Three offer sustained policy discussions, including the very specific: an endorsement of prison-based dog training programs (Antonio et al. 2017); a critique of sow-housing policy in the European Union (de Krom 2015); and recommendations for environmental lending practices at the World Bank (Sommer et al. 2017). The last one qualifies, albeit at a very macro level: Gauchat et al.’s (2011) analysis of economic dependence on military spending in metropolitan areas, the implications of which surpass everyday policy debates but are of course relevant.

To summarize my reading, with percentages based on extrapolating my subsample (so, wide confidence intervals): 23% of papers promising policy implications had none, and 38% had either vague statements or general statements that did not rely on empirical findings in the paper. The remaining 40% had substantive policy discussion and/or specific recommendations.

This is a quick coding and not validated. Others might treat differently papers that report an effect and then recommend changing the prevalence of the independent variable — e.g., poverty causes poor health; policies should reduce poverty — which I coded as not substantive. For example, I coded this from Thoits (2010) as not substantive or specific: “policies and programs should target children who are at long-term health risk due to early exposure to poverty, inadequate schools, and stressful family circumstances.” You could say, “policies should attempt to make life better,” but it’s not clear you need research for that. Anyway, my own implications (below) don’t depend on a precise accounting.

Implications

I am really, really not saying these are bad papers, or wrong to do what they did. I am not criticizing them, but rather the institutional convention that classifies the attempt to make our research relevant as “policy implications,” even when we have nothing specific to say about real policies, and then rewards sociologists for shoehorning their conclusions into such a frame.

Let me give an example of an interesting and valuable paper that is burdened by its policy implications. “The impact of parental migration on depression of children: new evidence from rural China,” by Yue et al. (2020) used a survey of families in China to assess the relationship between parental migration, remittances, household labor burdens, parent-child communication, and children’s symptoms of depression. After regression models with direct and indirect effects on children’s depression, including both children who were “left behind” by migrating parents and those who weren’t, they conclude: “non-material resources (parent-child communication, parental responsiveness, and self-esteem) play a much more important role in child’s depression than material resources (remittances).” Interesting result. Seems well done, with good data. The policy suggestions that follow are to encourage parent-child communication (e.g., through local government programs) and teach children in school that they are not abandoned by parents who migrate.

What is wrong with this? First, Yue et al. (2020) is an example of a common model that amounts to, “based on our regressions, more of this variable would be good.” It seems logical, but a serious approach to the question would have to be based on evidence that such programs actually have their intended effect, and that they would be better than directing the money or other resources toward something else. That would be an unreasonable burden for the authors, and it would slow the production of useful empirical results. So we’re left with something superficial that distracts more than it adds. Further (and here I hope to win some converts to my view), these policy implication sections are a major source of peer review friction — reviewers demanding them, reviewers hating them, authors contorting themselves, and so on. Much better, in my view, would be to just add the knowledge produced by papers like this to the great hopper of knowledge, and let it contribute to a real policy analysis down the road.

Empirical peer-reviewed sociology articles should be shorter, removing non-essential parts of the paper that are major sources of peer review bog-down. Having different kinds of work reviewed and approved together in a single paper — a lengthy literature review, a theoretical claim, an empirical analysis, and a set of policy implications — creates inefficiencies in the peer review process. Why should a whole 60-page paper be rejected because one part of it (the policy implications, say) is rejected by one out of three reviewers? This is very wasteful. It puts reviewers in a position to review aspects of the work they aren’t qualified to judge. And it skews incentives by rewarding the less important parts of our work. Of course it’s reasonable to spend a few paragraphs stating the relevance of the question in the paper, but not a whole treatise (in the front and back) of every paper.

Advice for sociologists

1. Don’t try to pin big conclusions on a single piece of peer-reviewed empirical research. That’s a sad legacy of a time when publishing was hard, sociologists had few opportunities to do it, and peer-reviewed journals were the source of validation we were expected to rely on. So you devoted years of your life to a small number of “publications,” and those were the sum total of your intellectual production. We have a lot of other ways to express our social and political views now, and we should use them. Your PhD, your job, and your record of peer-reviewed research are all sources of legitimacy you can draw on to get people to pay attention to your writing.

2. Write for the right audience. If you are serious about influencing policy, write for staffers doing research for advocacy organizations, activists, or campaigns. If you want to influence the public, write in lay terms in venues that draw regular people as readers. If you want to set the agenda for funding agencies, write review pieces that synthesize research and make the case for moving in the right direction. These are all different kinds of writing, published in different venues. Crucially, none of them rely only on the empirical results of a single analysis, nor should they. The last three paragraphs of your narrow empirical research paper — excellent, important, and cutting-edge as it is — will not reach these different audiences.

3. Stop asking researchers to tack superficial policy implications sections onto the end of their papers. If you are a reviewer or an editor, stop demanding longer literature reviews and conclusions. Start rewarding the most important part of the work, the part you are qualified to evaluate.

4. If you are in an academic department, on a hiring committee, or on a promotion and tenure committee, look at the whole body of work, including the writing outside peer-reviewed journals. No one expects to get tenure from writing an op-ed, but people who work to reach different audiences may be building a successful career in which peer-reviewed research is a foundational building block. Look for the connections, and reward the people who make them.


*The full sample (metadata and abstracts) is available on Zotero. Some are open access, some I got through my library, but all are available from Sci-Hub (which steals them so you don’t have to).

References mentioned in the text:

Andersen, Robert, and Herman G. van de Werfhorst. 2010. “Education and Occupational Status in 14 Countries: The Role of Educational Institutions and Labour Market Coordination.” British Journal of Sociology 61(2):336–55. doi: 10.1111/j.1468-4446.2010.01315.x.

Antonio, Michael E., Rosalyn G. Davis, and Susan R. Shutt. 2017. “Dog Training Programs in Pennsylvania’s Department of Corrections Perceived Effectiveness for Inmates and Staff.” Society & Animals 25(5):475–89. doi: 10.1163/15685306-12341457.

de Krom, Michiel P. M. M. 2015. “Governing Animal-Human Relations in Farming Practices: A Study of Group Housing of Sows in the EU.” Sociologia Ruralis 55(4):417–37. doi: 10.1111/soru.12070.

Gauchat, Gordon, Michael Wallace, Casey Borch, and Travis Scott Lowe. 2011. “The Military Metropolis: Defense Dependence in U.S. Metropolitan Areas.” City & Community 10(1):25–48. doi: 10.1111/j.1540-6040.2010.01359.x.

Kohn, Paul M., and G. W. Mercer. 1971. “Drug Use, Drug-Use Attitudes, and the Authoritarianism-Rebellion Dimension.” Journal of Health and Social Behavior 12(2):125–31. doi: 10.2307/2948519.

Lepianka, Dorota. 2015. “Images of Poverty in a Selection of the Polish Daily Press.” Current Sociology 63(7):999–1016. doi: 10.1177/0011392115587021.

Liu, Ruth X. 2018. “Physical Discipline and Verbal Punishment: An Assessment of Domain and Gender-Specific Effects on Delinquency Among Chinese Adolescents.” Youth & Society 50(7):871–90. doi: 10.1177/0044118X15618836.

Martin, Rebeca Ibanez. 2018. “Thinking with La Cocina: Fats in Spanish Kitchens and Dietary Recommendations.” Food Culture & Society 21(3):314–30. doi: 10.1080/15528014.2018.1451039.

Pensiero, Nicola. 2017. “In-House or Outsourced Public Services? A Social and Economic Analysis of the Impact of Spending Policy on the Private Wage Share in OECD Countries.” International Journal of Comparative Sociology 58(4):333–51. doi: 10.1177/0020715217726837.

Phelan, Jo C., Bruce G. Link, and Parisa Tehranifar. 2010. “Social Conditions as Fundamental Causes of Health Inequalities: Theory, Evidence, and Policy Implications.” Journal of Health and Social Behavior 51:S28–40. doi: 10.1177/0022146510383498.

Shwed, Uri, Yuval Kalish, and Yossi Shavit. 2018. “Multicultural or Assimilationist Education: Contact Theory and Social Identity Theory in Israeli Arab-Jewish Integrated Schools.” European Sociological Review 34(6):645–58. doi: 10.1093/esr/jcy034.

Sommer, Jamie M., John M. Shandra, and Michael Restivo. 2017. “The World Bank, Contradictory Lending, and Forests: A Cross-National Analysis of Organized Hypocrisy.” International Sociology 32(6):707–30. doi: 10.1177/0268580917722893.

Stoilova, Rumiana, Petya Ilieva-Trichkova, and Franziska Bieri. 2020. “Work-Life Balance in Europe: Institutional Contexts and Individual Factors.” International Journal of Sociology and Social Policy 40(3–4):366–81. doi: 10.1108/IJSSP-08-2019-0152.

Thoits, Peggy A. 2010. “Stress and Health: Major Findings and Policy Implications.” Journal of Health and Social Behavior 51:S41–53. doi: 10.1177/0022146510383499.

Visser, Mark, Marijn Scholte, and Peer Scheepers. 2013. “Fear of Crime and Feelings of Unsafety in European Countries: Macro and Micro Explanations in Cross-National Perspective.” Sociological Quarterly 54(2):278–301. doi: 10.1111/tsq.12020.

Wang, Jingshu. 2010. “Motives for Intergenerational Transfers: New Test for Exchange.” American Journal of Economics and Sociology 69(2):802–22. doi: 10.1111/j.1536-7150.2010.00725.x.

Wiborg, Oyvind N., and Marianne N. Hansen. 2018. “The Scandinavian Model during Increasing Inequality: Recent Trends in Educational Attainment, Earnings and Wealth among Norwegian Siblings.” Research in Social Stratification and Mobility 56:53–63. doi: 10.1016/j.rssm.2018.06.006.

Yue, Zhongshan, Zai Liang, Qian Wang, and Xinyin Chen. 2020. “The Impact of Parental Migration on Depression of Children: New Evidence from Rural China.” Chinese Sociological Review 52(4):364–88. doi: 10.1080/21620555.2020.1776601.

Don’t both-sides the war on truth

Glad to see political obituaries for Trump appearing. But don’t let them both-sides it. Case in point is George Packer’s “The Legacy of Donald Trump” in the Atlantic (online version titled “A Political Obituary for Donald Trump”).

Packer is partly right in his comparison of Trump’s lies to those of previous presidents:

Trump’s lies were different. They belonged to the postmodern era. They were assaults against not this or that fact, but reality itself. They spread beyond public policy to invade private life, clouding the mental faculties of everyone who had to breathe his air, dissolving the very distinction between truth and falsehood. Their purpose was never the conventional desire to conceal something shameful from the public.

He’s right that the target is truth itself, but wrong to attribute this to postmodernism. Trump is well-grounded in modernist authoritarianism, albeit with contemporary cultural flourishes. This ground was well covered by Michiko Kakutani, Jason Stanley, and Adam Gopnik, who wrote the week before Trump’s inauguration:

there is nothing in the least “postmodern” about Trump. The machinery of demagogic authoritarianism may shift from decade to decade and century to century, taking us from the scroll to the newsreel to the tweet, but its content is always the same. Nero gave dictates; Idi Amin was mercurial. Instruments of communication may change; demagogic instincts don’t.

This distinction matters, between Trump the modern authoritarian and Trump the victim of a world gone mad. You can see why later in Packer’s piece, when he both-sides it:

Monopoly of public policy by experts—trade negotiators, government bureaucrats, think tankers, professors, journalists—helped create the populist backlash that empowered Trump. His reign of lies drove educated Americans to place their faith, and even their identity, all the more certainly in experts, who didn’t always deserve it (the Centers for Disease Control and Prevention, election pollsters). The war between populists and experts relieved both sides of the democratic imperative to persuade. The standoff turned them into caricatures.

Disagree. Public health scientists and political pollsters are sometimes wrong, and even corrupt, including during the Trump era, but their failures are not an assault on truth itself (I don’t know what about the CDC he’s referring to, but except for some behavior by Trump appointees the same applies). We in the rational knowledge business have not been relieved of our democratic imperatives by the machinations of authoritarians. No matter how we are seen by Trump’s followers, we are not caricatures. The rise of authoritarianism and its populist armies can’t be laid at the feet of the reign of experts. In one sense, of course, anti-vaxxers only exist because there are vaccines. But that’s not a both-sides story. Everyone alive today is alive because of the reign of experts, more or less.

This reminds me of Jonah Goldberg’s ridiculous (but very common, among conservatives) attempt to blame anti-racists for racism: “The grave danger, already materializing, is that whites and Christians respond to this bigotry [i.e., being called racist, homophobic, and Islamophobic] and create their own tribal identity politics.” If Packer objects to the comparison, that’s on him.

That said, the know-nothing movement that Trump now leads obviously creates direct challenges that the forces of truth must rise to meet. The imperative for “engagement” among social scientists — the need to communicate our research and its implications, which I’ve discussed before — is partly driven by this reality. In the social sciences we have an additional burden because our scholarship is directly relevant to politics, so compared with the other sciences we are subject to heightened scrutiny and suspicion — our accomplishments are less the invisible infrastructure of daily survival and more the contested terrain of social and cultural conflict.

And, judging by our falling social science enrollments (except economics), we’re not winning.

So we have a lot of work to do, but we’re not responsible for the war on truth.

Data analysis shows Journal Impact Factors in sociology are pretty worthless

The impact of Impact Factors

Some of this first section is lifted from my blockbuster report, Scholarly Communication in Sociology, where you can also find the references.

When a piece of scholarship is first published it’s not possible to gauge its importance immediately unless you are already familiar with its specific research field. One of the functions of journals is to alert potential readers to good new research, and the placement of articles in prestigious journals is a key indicator.

Since at least 1927, librarians have been using the number of citations to the articles in a journal as a way to decide whether to subscribe to that journal. More recently, bibliographers introduced a standard method for comparing journals, known as the journal impact factor (JIF). This requires data for three years, and is calculated as the number of citations in the third year to articles published over the two prior years, divided by the total number of articles published in those two years.

For example, in American Sociological Review there were 86 articles published in the years 2017-18, and those articles were cited 548 times in 2019 by journals indexed in Web of Science, so the JIF of ASR is 548/86 = 6.37. This allows for a comparison of impact across journals. Thus, the comparable calculation for Social Science Research is 531/271 = 1.96, and it’s clear that ASR is a more widely cited journal. However, comparisons of journals in different fields using JIFs are less helpful. For example, the JIF for the top medical journal, New England Journal of Medicine, is currently 75, because medical journals are more numerous and publish and cite more articles, at higher rates and more quickly, than sociology journals do. (Or maybe NEJM is just that much more important.)
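In code form (just a toy helper function, using the numbers above):

```python
# The JIF arithmetic from the examples above: citations in year three to
# articles from the two prior years, divided by the number of those articles.
def journal_impact_factor(citations_year3, articles_years1_2):
    return citations_year3 / articles_years1_2

print(round(journal_impact_factor(548, 86), 2))    # American Sociological Review: 6.37
print(round(journal_impact_factor(531, 271), 2))   # Social Science Research: 1.96
```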

In addition to complications in making comparisons, there are problems with JIFs (besides the obvious limitation that citations are only one possible evaluation metric). They depend on what journals and articles are in the database being used. And they mostly measure short-term impact. Most important for my purposes here, however, is that they are often misused to judge the importance of articles rather than journals. That is, if you are a librarian deciding what journal to subscribe to, JIF is a useful way of knowing which journals your users might want to access. But if you are evaluating a scholar’s research, knowing that they published in a high-JIF journal does not mean that their article will turn out to be important. It is especially wrong to look at an article that’s old enough to have citations you could count (or not) and judge its quality by the journal it’s published in — but people do that all the time.

To illustrate this, I gathered citation data for the almost 2,500 articles published in 2016-2019 in 15 sociology journals from the Web of Science category list.* By JIF these journals rank from #2 (American Sociological Review, 6.37) to #46 (Social Forces, 1.95). I chose them to represent a range of impact factors, and because they are either generalist journals (e.g., ASR, Sociological Science, Social Forces) or sociology-focused enough that almost any article they publish could have appeared in a generalist journal as well. Here is a figure showing the distribution of citations to those articles as of December 2020, by journal, ordered from higher to lower JIF.
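The real data and code are in the footnote; a rough Python sketch of how such a box plot could be drawn, with made-up file and column names, looks like this:

```python
# Sketch of the citation-distribution figure (a box plot of citations by
# journal, ordered by 2019 JIF). The real data and Stata code are at the
# OSF link in the footnote; this file and its columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

articles = pd.read_csv("sociology_articles_2016_2019.csv")  # journal, jif_2019, citations

# Order journals from higher to lower 2019 JIF
order = (articles.groupby("journal")["jif_2019"].first()
         .sort_values(ascending=False).index)
data = [articles.loc[articles["journal"] == j, "citations"] for j in order]

plt.boxplot(data)
plt.xticks(range(1, len(order) + 1), order, rotation=90)
plt.ylabel("Citations through December 2020")
plt.tight_layout()
plt.show()
```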

After ASR, Sociology of Education, and American Journal of Sociology, it’s hard to see much of a slope here. Outliers might be playing a big role (for example, that very popular article in Sociology of Religion, “Make America Christian Again: Christian Nationalism and Voting for Donald Trump in the 2016 Presidential Election,” by Whitehead, Perry, and Baker in 2018). But there’s a more subtle problem, which is the timing of the measures. My collection of articles covers 2016-2019. The JIFs I’m using are from 2019, based on citations to 2017-2018 articles. These journals bounce around; for example, Sociology of Religion jumped from 1.6 to 2.6 in 2019. (I address that issue in the supplemental analysis below.) So what is a lazy promotion and tenure committee, which is probably working off a mental reputation map at least a dozen years old, to do?

You can already tell where I’m going with this: In these sociology journals, there is so much noise in citation rates within the journals, compared to any stable difference between them, that outside the very top the journal ranking won’t much help you predict how much a given paper will be cited. If you assume a paper published in AJS will be more important than one published in Social Forces, you might be right, but if the odds that you’re wrong are too high, you just shouldn’t assume anything. Let’s look closer.

Sociology failure rates

I recently read this cool paper (also paywalled, in the Journal of Informetrics) that estimates this “failure probability”: the odds that your guess about which paper will be more impactful, based on the journal title, turns out to be wrong. When JIFs are similar, the odds of an error are very high, like a coin flip. “In two journals whose JIFs are ten-fold different, the failure probability is low,” Brito and Rodríguez-Navarro conclude. “However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.”

Their formulas look pretty complicated to me, so for my sociology approach I just did it by brute force (or, if you need tenure, you could call it a Monte Carlo approach). I randomly sampled 100,000 article pairs from each possible pair of journals, then calculated the percentage of pairs in which the article with more citations came from the journal with the higher impact factor. For example, in 100,000 comparisons of random pairs sampled from ASR and Social Forces (the two journals with the biggest JIF spread), 73% of the time the ASR article had more citations.
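The real version is the Stata code linked in the footnote; a minimal Python sketch of the same idea, with made-up file and column names, looks like this:

```python
# Minimal sketch of the brute-force comparison: for one pair of journals, draw
# one random article from each (with replacement), 100,000 times, and count how
# often the article from the higher-JIF journal has more citations. Ties count
# as failures here; the file and column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
articles = pd.read_csv("sociology_articles_2016_2019.csv")  # journal, citations

def match_rate(df, higher_jif_journal, lower_jif_journal, n_draws=100_000):
    hi = df.loc[df["journal"] == higher_jif_journal, "citations"].to_numpy()
    lo = df.loc[df["journal"] == lower_jif_journal, "citations"].to_numpy()
    a = rng.choice(hi, size=n_draws)
    b = rng.choice(lo, size=n_draws)
    return (a > b).mean()

# In the text, this comparison comes out to about 73%
print(match_rate(articles, "American Sociological Review", "Social Forces"))
```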

Is 73% a lot? It’s better than a coin toss, but I’d hate to have a promotion or hiring decision be influenced by an instrument that blunt. Here are the results of the 10.5 million comparisons I made (I love computers):

Outside of the ASR column, these are very bad; in the ASR column they’re pretty bad. For example, a random article from AJS has more citations than one from the 12 lower-JIF journals only 59% of the time. So if you’re reading CVs, and you see one candidate with a two-year-old AJS article and one with a two-year-old Work & Occupations article, what are you supposed to do? You could compare the actual citations the two articles have gotten, or you could assess their quality or impact some other way. You absolutely should not just skim the CV and assume the AJS article is or will be more influential based on the journal title alone; the failure probability of that assumption is too high.

On my table you can also see some anomalies of the kind that plague this system. See all that brown in the BJS and Sociology of Religion columns? That’s because both of those journals had sudden increases in their JIF, so their more recent articles have more citations, while most of the comparisons in this table (like the ones in your memory, probably) are based on data from a few years before that. People who published in these journals three years ago are today getting an undeserved JIF bounce from having these titles on their CVs. (See the supplemental analysis below for more on this.)

Conclusion

Using JIF to decide which papers in different sociology journals are likely to be more impactful is a bad idea. Of course, lots of people know JIF is imperfect, but they can’t help themselves when evaluating CVs for hiring or promotion. And when you show them evidence like this, they might say “but what is the alternative?” But as Brito & Rodríguez-Navarro write: “if something were wrong, misleading, and inequitable the lack of an alternative is not a cause for continuing using it.” These error rates are unacceptably high.

In sociology most people won’t own up to relying on impact factors, but most people (in my experience) do judge research by where it’s published all the time. If there is a very big difference in status — enough to be associated with an appreciably different acceptance rate, for example — that’s not always wrong. But it’s a bad default.

In 2015 the biologist Michael Eisen suggested that tenured faculty should remove the journal titles from their CVs and websites, and just give readers the title of the paper and a link to it. He’s done it for his lab’s website, and I urge you to look at it just to experience the weightlessness of an academic space where for a moment overt prestige and status markers aren’t telling you what to think. I don’t know how many people have taken him up on it. I did it for my website, with the explanation, “I’ve left the titles off the journals here, to prevent biasing your evaluation of the work before you read it.” Whatever status I’ve lost I’ve made up for in virtue-signaling self-satisfaction — try it! (You can still get the titles from my CV, because I feel like that’s part of the record somehow.)

Finally, I hope sociologists will become more sociological in their evaluation of research — and of the systems that disseminate, categorize, rank, and profit from it.

Supplemental analysis

The analysis thus far is, in my view, a damning indictment of real-world reliance on the Journal Impact Factor for judging articles, and thus the researchers who produce them. However, it conflates two problems with the JIF. First is the statistical problem of imputing status from an aggregate to an individual, when the aggregate measure fails to capture variation that is very wide relative to the difference between groups. Second, more specific to JIF, is the reliance on a very time-specific comparison: citations in year three to publications in years one and two. Someone could do (and maybe already has done) an analysis to determine the best lag structure for JIF to maximize its predictive power, but the conclusions from the first problem imply that’s a fool’s errand.

Anyway, in my sample the second problem is clearly relevant. My analysis relies strictly on the rank-ordering provided by the JIF to determine whether article comparisons succeed or fail. However, the sample I drew covers four years, 2016-2019, and counts citations to all of them through 2020. This difference in time window produces a rank ordering that differs substantially (the rank order correlation is .73), as you can see:
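For reference, here is a sketch of that comparison, assuming a Spearman correlation over journal-level means (file and column names again made up):

```python
# Sketch of the rank-order comparison: rank journals by 2019 JIF and by mean
# citations in the 2016-2019 sample, then correlate the two rankings. Assumes
# a Spearman correlation and a hypothetical article-level file.
import pandas as pd
from scipy.stats import spearmanr

articles = pd.read_csv("sociology_articles_2016_2019.csv")  # journal, jif_2019, citations
by_journal = articles.groupby("journal").agg(jif=("jif_2019", "first"),
                                             mean_cites=("citations", "mean"))
rho, _ = spearmanr(by_journal["jif"], by_journal["mean_cites"])
print(round(rho, 2))   # the text reports a rank-order correlation of about .73
```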

In particular, three journals (BJS, SOR, and SFO) moved more than five spots in the ranking. A glance at the results table above shows that these journals are dragging down the matching success rate. To pull these two problems apart, I repeated the analysis using the ranking produced within the sample itself.

The results are now much more straightforward. First, here is the same box plot but with the new ordering. Now you can see the ranking more clearly, though you still have to squint a little.

And in the match-rate analysis, the result is now driven by differences in means and variances rather than by the mismatch between JIF and sample-mean rankings:

This makes a more logical pattern. The most differentiated journal, ASR, has the highest success rate, and the journals closest together in the ranking fail the most. However, please don’t take from this that such a ranking becomes a legitimate way to judge articles. The overall average on this table is still only 58%, up only 4 points from the original table. Even with a ranking that more closely conforms to the sample, this confirms Brito and Rodríguez-Navarro’s conclusion: “[when rankings] of the journals are not so different … the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.”

These match numbers are too low to responsibly use in such a way. These major sociology journals have citation rates that are too variable, and too similar at the mean, to be useful as a way to judge articles. ASR stands apart, but only because of the rest of the field. Even judging an ASR paper against its lower-ranked competitors produces a successful one-to-one ranking of papers just 72% of the time — and that only rises to 82% with the least-cited journal on the list.

The supplemental analysis is helpful for differentiating the multiple problems with JIF, but it does nothing to solve the problem of using journal citation rates to evaluate individual articles.


*The data and Stata code I used are up here: osf.io/zutws. This includes the lists of all articles in the 15 journals from 2016 to 2020 and their citation counts as of the other day (I excluded 2020 papers from the analysis, but they’re in the lists). I forgot to save the version of the 100k-case random file that I used to do this, so I guess that can never be perfectly replicated; but you can probably do it better anyway.

Rural COVID-19 paper peer reviewed. OK?

Twelve days ago I posted my paper on the COVID-19 epidemic in rural US counties. I put it on the blog, and on the SocArXiv paper server. At this writing, the blog post has been shared on Facebook 69 times, and the paper has been downloaded 149 times and tweeted about by a handful of people. No one has told me it’s wrong yet, but no one has formally endorsed it yet, either.

Until now, that is. The paper, which I then submitted to the European Journal of Environment and Public Health, has now been peer reviewed and accepted. I’ve updated the SocArXiv version to the journal page proofs. Satisfied?

It’s a good question. We’ll come back to it.

Preprints

The other day (I think, not good at counting days anymore) a group of scholars published — or should I say posted — a paper titled, “Preprinting a pandemic: the role of preprints in the COVID-19 pandemic,” which reported that there have already been 16,000 scientific articles published about COVID-19, of which 6,000 were posted on preprint servers. That is, they weren’t peer-reviewed before being shared with the research community and the public. Some of these preprints are great and important, some are wrong and terrible, some are pretty rough, and some just aren’t important. This figure from the paper shows the preprint explosion:


All this rapid scientific response to a worldwide crisis is extremely heartening. You can see the little sliver that SocArXiv (which I direct) represents in all that — about 100 papers so far (this link takes you to a search for the covid-19 tag), on subjects ranging from political attitudes to mortality rates to traffic patterns, from many countries around the world. I’m thrilled to be contributing to that, and really enjoy my shifts on the moderation desk these days.

On the other hand some bad papers have gotten out there. Most notoriously, an erroneous paper comparing COVID-19 to HIV stoked conspiracy theories that the virus was deliberately created by evil scientists. It was quickly “withdrawn,” meaning no longer endorsed by the authors, but it remains available to read. More subtly, a study (by more famous researchers) done in Santa Clara County, California, claimed to find a very high rate of infection in the general population, implying COVID-19 has a very low death rate (good news!), but it was riddled with design and execution errors (oh well), and accusations of bias and corruption. And some others.

Less remarked upon has been the widespread reporting by major news organizations on preprints that aren’t as controversial but have become part of the knowledge base of the crisis. For example, the New York Times ran a report on this preprint on page 1, under the headline, “Lockdown Delays Cost at Least 36,000 Lives, Data Show” (which looks reasonable in my opinion, although the interpretation is debatable), and the Washington Post led with, “U.S. Deaths Soared in Early Weeks of Pandemic, Far Exceeding Number Attributed to Covid-19,” based on this preprint. These media organizations offer a kind of endorsement, too. How could you not find this credible?


Peer review

To help sort out the veracity or truthiness of rapid publications, the administrators of the bioRxiv and medRxiv preprint servers (who are working together) have added this disclaimer in red to the top of their pages:

Caution: Preprints are preliminary reports of work that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

That’s reasonable. You don’t want people jumping the gun on clinical decisions, or news reports. Unless they should, of course. And, on the other hand, lots of peer reviewed research is wrong, too. I’m not compiling examples of this, but you can always consult the Retraction Watch database, which, for example, lists 130 papers published in Elsevier journals in 2019 that have been retracted for reasons ranging from plagiarism to “fake peer review” to forged authorship to simple errors. The database lists a few peer-reviewed COVID-19 papers that have already been retracted as well.

This comparison suggests that the standard of truthiness cannot be reduced to the simple dichotomy of peer-reviewed or not. We need signals, but they don’t have to be that crude. In real life, we use a variety of signals for credibility that help determine how much to trust a piece of research. These include:

  • The reputation of the authors (their degrees, awards, twitter following, media presence)
  • The institutions that employ them (everyone loves to refer to these when they are fancy universities reporting results they favor, e.g., “the Columbia study showed…”)
  • Who published it (a journal, an association, a book publisher), which implies a whole secondary layer of endorsements (e.g., the editor of the journal, the assumed expertise of the reviewers, the prestige or impact factor of the journal as a whole, etc.)
  • Perceived conflicts of interest among the authors or publishers
  • The transparency of the research (e.g., are the data and materials available for inspection and replication)
  • Informal endorsements, from, e.g., people we respect on social media, or people using the Plaudit button (which is great and you should definitely use if you’re a researcher)
  • And finally, of course, our own assessment of the quality of the work, if it’s something we believe ourselves qualified to assess

As with the debate over the SAT/GRE for admissions, the quiet indicators sometimes do a lot of the work. Call something a “Harvard study” or a “New York Times report,” and people don’t often pry into the details of the peer review process.

Analogy: People who want to eat only kosher food need something to go on in daily life, and so they have erected a set of institutional devices that deliver such a seal (in fact, there are competing seal brands, but they all offer the same service: a yes/no endorsement by an organization one decides to trust). The seals cost money, which is added to the cost of the food; if people like it, they’re willing to pay. But, as God would presumably tell you, the seal should not always substitute for your own good judgment, because even rabbis or honest food producers can make mistakes. And when there is no good kosher inspection to rely on at all, you still have to eat — you just have to reason things through to the best of your ability. (In a pinch, maybe follow the guy with the big hat and see what he eats.) Finally, crucially for the analogy, anyone who tells you to ignore the evidence before you and always trust the authority that’s selling the dichotomous indicator is probably serving their own interests at least as much as they’re serving yours.

In the case of peer review, giant corporations, major institutions, and millions of careers depend on people believing that peer review is what you need to decide what to trust. And they also happen to be selling peer review services.

My COVID-19 paper

So should you trust my paper? Looking back at our list, you can see that I have degrees and some minor awards, some previous publications, some twitter followers, and some journalists who trust me. I work at a public research university that has its own reputation to protect. I have no apparent way of profiting from you believing one thing or another about COVID-19 in rural areas (I declared no conflicts of interest on the SocArXiv submission form). I made my data and code available (even if no one checks it, the fact that it’s there should increase your confidence). And of course you can read it.

And then I submitted it to the European Journal of Environment and Public Health, which, after peer review, endorsed its quality and agreed to publish it. The journal is published by Veritas Publications in the UK with the support of Tsinghua University in China. It's an open access journal that has been publishing for only three years. It's not indexed by Web of Science or listed in the Directory of Open Access Journals. It is, in short, a low-status journal. On the plus side, it has an editorial board of real researchers, albeit mostly at lower-status institutions. It publishes real papers, and (at least for now) it doesn't charge authors a publication fee; it does a little peer review; and it is fast. My paper was accepted in four days with essentially no revisions, after one reviewer read it (based on the summary, I believe they did read it). It's open access, and I kept my copyright. I chose it partly because one of the papers I found on Google Scholar during my literature search was published there and it seemed OK.

So, now it’s peer reviewed.

Here’s a lesson: when you set a dichotomous standard like peer-reviewed yes/no and tell the public to trust it, you create the incentive for people to do the least they can to just barely get over that bar. This is why we have a giant industry of tens of thousands of academic journals producing products all branded as peer reviewed. Half a century ago, some academics declared themselves the gatekeepers of quality, and called their system peer review. To protect the authority of their expertise (and probably because they believed they knew best), they insisted it was the standard that mattered. But they couldn’t prevent other people from doing it, too. And so we have a constant struggle over what gets to be counted, and an effort to disqualify some journals with labels like “predatory,” even though it’s the billion-dollar corporations at the top of this system that are preying on us the most (along with lots of smaller scam purveyors).

In the case of my paper, I wouldn’t tell you to trust it much more because it’s in EJEPH, although I don’t think the journal is a scam. It’s just one indicator. But I can say it’s peer reviewed now and you can’t stop me.

Aside on service and reciprocity: Immediately after I submitted my paper, the EJEPH editors sent me a paper to review, which I respect. I declined because I wasn't qualified, and then they sent me another. This assignment I accepted. The paper was definitely outside my areas of expertise, but it was a small study quite transparently done, in Nigeria. I was able to verify important details — like the relevance of the question asked (from cited literature), the nature of the study site (from Google Maps and directories), the standards of measurement used (from other studies), the type of instruments used (widely available), and the statistical analysis. I suggested some improvements to the contextualization of the write-up and recommended publication. I see no reason why this paper shouldn't be published with the peer review seal of approval. If it turns out to be important, great. If not, fine. Like my paper, honestly. I have to say, it was a refreshing peer review experience on both ends.

Tone policing: Am I allowed to put Regnerus, Wilcox, and Hitler in the same headline?

3147786573_64841041cc_b
Sir, are you aware you were using a caustic tone back there? (photo: Thomas Hawk)

Nicholas Wolfinger reviewed my book Enduring Bonds for Social Forces (paywalled [why paywall book reviews?]; bootlegged). It would be unseemly of me to argue with a two-page book review instead of letting my life’s work stand on its own, so here goes — but just on one point: tone policing.

This is the opening of the review:

Philip Cohen has a lot of beefs. Hanna Rosen is an “antifeminist” (p. 134) prone to “errors and distortions” (p. 146), and a “record of misstating facts in the service of inaccurate conclusions” (p. 185); W. Bradford Wilcox offers an “interpretation not just wrong but the opposite of right” (p. 76) and elsewhere gives a “racist” interview (p. 175); Ron Haskins, a “curmudgeon” (p. 175), presents a meme that’s “stupid and evil” (p. 47); David Blankenhorn is the author of a “deeply ridiculous” article (p. 80); Christina Hoff Sommers speaks in “[a] voice [that] drips with contempt” (p. 200) and is deemed to be an “antifeminist” (p. 155), even though she’s later identified as a feminist (p. 197).*

He adds:

Also making the list: Paula England, for her “disappointingly mild” review of Cohen’s Public Enemy Number One, the “obtuse, semi-coherent” (p. 106) and “simply unethical” (p. 91) Mark Regnerus. Indeed, 29 of the 209 pages of Cohen’s book are spent excoriating Regnerus for two different studies.

This makes up his argument that, “Cohen writes so tendentiously that the useful bits get carried away in a torrent of ad hominem asperity,” and his conclusion, “you catch more flies with honey than with vinegar.”

Over my many years as a caustic person, I have heard this a lot, mostly from academics, bless their hearts. Which is cool, that’s my career choice and it would be unseemly to complain about it now, so here goes.

Listing the bad words I used doesn't mean anything. And telling me I spent 29 pages on Regnerus (Wolfinger doesn't mention that Brad Wilcox, his frequent co-author, features heavily in those 29 pages) is not a meaningful critique unless you explain why these people don't deserve it. I've heard, for example, that people have written very good whole books about specific individuals and the bad things they've done — including, off the top of my head, Hitler. The meaningful question is, am I wrong in those assessments, and if I am, why? In other words, you catch more flies by telling the reader why it is acceptable to write an entire harsh book about Hitler but not acceptable to spend 29 harsh pages on Regnerus and Wilcox. Or why it's wrong to criticize Rosin, Haskins, Blankenhorn, Sommers, and (lol) England in harsh terms.

If you want to enjoy a world where entire reviews are written about the use of harsh words, reviews that don’t even give a hint — not even a mention — as to the content of the issues and disputes that prompted those harsh words, then I can only suggest a career in academia.

Ironic aside

I tweeted a link to Wolfinger’s review, even though it is completely negative, because I’m scrupulous and fair-minded.

nwtweet

This led him to go on a multitweet journey, complaining that “he took words like ‘formidable’ out of context to suggest a much more positive review,” and exploring my motivations — responding to someone who said, “That was clearly a joke” with, “You see a joke, I see mendacity,” and concluding, “‘Just a joke’ is a weak, all-purpose way to cover up a fuck up like getting caught twisting the evidence.”

I hate to bring up Hitler again (not really), but the last time someone spent so much time pretending not to understand that I was joking, it was actual nazis, quoting a tweet where I joked that Jews were devoted to “eradicating whiteness and undermining its civilizations” (not linking, but you can google it). This led to a lot of online grief and some death threats, including someone posting my address on Reddit. So it irritated me.

The online nazi mob technique is to pretend things Jews say aren’t jokes, then pretend they themselves are joking when they talk about genocide. I’m sure many Jewish readers will recognize that failure to understand sarcastic humor is actually a common trait among rank-and-file anti-Semites — the people who have a hard time differentiating “New York” from “Jewish” — something that leading anti-Semites are very adept at manipulating. So that resonated with me.

(The above is labeled “aside” to make it boringly over-clear that I’m not saying Wolfinger is anti-Semitic.)


* Correction: Sommers is not “identified as a feminist” on p. 197; I just reported the name of her video series, which is, absurdly, The Factual Feminist.

 

Wilcox plagiarism denial and ethics review

Recently I made the serious accusation that Brad Wilcox and his colleagues plagiarized me in a New York Times op-ed. After the blog post, I sent a letter to the Times and got no response. And until now Wilcox had not responded. But now, thanks to an errant group email, I had the chance to poke him, and he responded, in relevant part:

You missed the point of the NYT op-ed, which was to stress the intriguing J-Curve in women’s marital happiness when you look at religion and gender ideology. We also thought it interesting to note there is a rather similar J-Curve in women’s marital happiness in the GSS when it comes to political ideology, although the political ideology story was somewhat closer to a U-Curve in the GSS. Our NYT argument was not inspired by you, and our extension of the argument to a widely used dataset is not plagiarism.

Most of that comment is irrelevant to the question of whether the figure they published was ripped off from my blog; the only argument he makes is to underline the word “not.” To help readers judge for themselves, here is the sequence again, maybe presented more clearly than I did it last time.

Wilcox and Nicholas Wolfinger published this, claiming Republicans have happier marriages:

marital-quality-fig-1

I responded by showing that when you break out the categories more, you get a U-shape instead:

marital-happiness-partyid.xlsx

Subsequently, I repeated the analysis, with newer data, using political views instead of party identification (the U-shape on the right):

hapmar16c

This is the scheme, and almost exactly the results, that Wilcox and colleagues then published in the NYT, now including one more year of data:

bwnyt

The data used, the control variables, and the results are almost identical to the analysis I did in response to their work. His response is, “Our NYT argument was not inspired by you.” So that's that.
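For readers who have not worked with the GSS, here is a minimal sketch of the kind of tabulation described above, assuming a hypothetical extract file with lowercased columns. HAPMAR, POLVIEWS, and WTSSALL are real GSS variable names, but this is an illustration of the scheme only, not anyone's actual code, and it leaves out the control variables.

```python
# Sketch: weighted share of married respondents "very happy" in their
# marriage at each point of the 7-point political views scale.
# "gss_extract.csv" is a hypothetical file name.
import pandas as pd

gss = pd.read_csv("gss_extract.csv")
gss = gss.dropna(subset=["hapmar", "polviews", "wtssall"])

# In the GSS codebook, HAPMAR == 1 means "very happy"; POLVIEWS runs from
# 1 (extremely liberal) to 7 (extremely conservative).
gss["very_happy"] = (gss["hapmar"] == 1).astype(int)

share = (gss.assign(wvh=gss["very_happy"] * gss["wtssall"])
            .groupby("polviews")
            .apply(lambda g: g["wvh"].sum() / g["wtssall"].sum()))
print(share.round(3))
```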

Ethics aside

Of course, only he knows what’s in his heart. But the premise of his plagiarism denial is an appeal to trust. So, do you trust him?

Lies

There is a long history here, and it's hard to know where to start if you're just joining. Wilcox has been a liberal villain since he took over the National Marriage Project and then organized what became (unfortunately) known as the Regnerus study (see below), and a conservative darling since the top administration at the University of Virginia overturned the recommendation of his department and dean and granted him tenure.

So here are some highlights, setting aside questions of research quality and sticking to ethical issues.

Wilcox led the coalition that raised $785,000 from several foundations to generate the paper published under Mark Regnerus's name, intended to sway the courts against marriage equality. He helped design the study, led the development of the media plan, arranged for the paper to be submitted to Social Science Research, and then arranged for himself to be one of the anonymous peer reviewers. To do this, he lied to the editor, by omission, about his contribution to the study — saying only that he “served on the advisory board.”

And then when the scandal blew up, he lied about his role at the Witherspoon Institute, which provided most of the funding, saying he “never served as an officer or a staffer at the Witherspoon Institute, and I never had the authority to make funding or programmatic decisions at the Institute,” and that he was “not acting in an official Witherspoon capacity.” He was in fact the director of the institute's Program on Family, Marriage, and Democracy, which funded the study, and the email record showed him approving budget requests and plans. To protect his reputation and cover up the lie, that position (which he described as “honorific”) has been scrubbed from his CV and the Witherspoon website. (In the emails uncovered later, the president of Witherspoon, Luis Tellez, wrote, “we will include some money for you [Regnerus] and Brad on account of the time and effort you will be devoting to this,” but the amount he may have received has not been revealed — the grants aren't on his CV.)

This is covered under the Regnerus and Wilcox tags on the blog, and told in gripping fashion in a chapter of my book, Enduring Bonds.

You might hold it against him that he organized a conspiracy to fight marriage equality, but even if you think that’s just partisan nitpickery, the fact that the research was the result of a “coalition” (their word) that included a network of right-wing activists, and that their roles were not disclosed in the publication, is facially an ethical violation. And the fact that it involved a series of public and private lies, which he has never acknowledged, goes to the issue of trust in every subsequent case.

Money

Here I can't say what ethical rule Wilcox may have broken. Academia is a game that runs on trust, and in his financial dealings Wilcox has not been forthcoming. There is money flowing through his work, but the source and purpose of that money are not disclosed when the work is published. For example, in the NYT piece Wilcox is identified only as a professor at the University of Virginia, even though the research reported there was published by the Institute for Family Studies. His faculty position, and tenure, are signals of his trustworthiness, which he uses to bolster the reputation of his partisan efforts.

The Institute for Family Studies is a non-profit organization that Wilcox created in 2009, originally called the Ridge Foundation. For the first four years the tax filings list him as the president, then director. Since 2013, when it changed its name to IFS, he has been listed as a senior fellow. Through 2017, the organization paid him more than $330,000, and he was the highest paid person. The funders are right-wing foundations.

Most academics want people to know about their grants and the support for their research. On his CV at the University of Virginia, however, Wilcox does not list the Institute for Family Studies in the “Employment” section, or include it among the grants he has received, even though it is an organization he created and built up, which has so far grossed almost $3 million in total revenue. It is only mentioned in a section titled “Education Honors and Awards,” where he lists himself as a “Senior Fellow, Institute for Family Studies.” An education honor and award he gave himself, apparently.

He also doesn't list his position on the Marco Rubio campaign's Marriage & Family Advisory Board, where he was among those who “understand” that “Windsor and Obergefell are only the most recent example of our failure as a society to understand what marriage is and why it matters.”

Wilcox uses his academic position to support and legitimize his partisan efforts, and his partisan work to produce research under his academic title (of course IFS says it's nonpartisan, but that's meaningless). If he kept them really separate that would be one thing — we don't need to know what church academics belong to or what campaigns they support, except as required by law — but if he's going to blend them together, I think he incurs an ethical disclosure obligation.

Wilcox isn't the only person to scrub Witherspoon from his academic record — which is funny, because the Witherspoon Institute is housed at Princeton University (where Wilcox got his PhD). And the fact of removing Witherspoon from a CV was used to discredit a different anti-marriage-equality academic expert, Joseph Price at Brigham Young, in the Michigan trial that led to the Obergefell decision, because it made it seem he was trying to hide his political motivations in testifying against marriage equality. Here is the exchange:

price-lie

Court proceedings are useful for bringing out certain principles. In this case I think they help illustrate my point: If Brad Wilcox wants people to trust his motivations, he should disclose the sources of support for his work.

Naomi Wolf and sharing our lanes

Bruce Stokes / https://flic.kr/p/dMG983

The other day, in response to the Naomi Wolf situation, I tweeted at Heather Souvaine Horn, an editor at the New Republic:

After which she invited me to submit an essay to the site. It's now been published as: Learn the Right Lessons from Naomi Wolf's Book Blunder: Expertise matters. But lane-policing is counterproductive.

I spent my semester as an MIT / CREOS Visiting Scholar and it was excellent

PNC in Cambridge in the fall.
Cambridge in the fall.

As a faculty sociologist who works in the area of family demography and inequality, my interest in open scholarship falls into the category of “service” among my academic obligations, essentially unrecognized and unremunerated by my employer, and competing with research and teaching responsibilities for my time. In that capacity I founded SocArXiv in 2016 (supported by several small grants) and serve as its director, organized two conferences at the University of Maryland under the title O3S: Open Scholarship for the Social Sciences, and was elected to the Committee on Publications of the American Sociological Association. While continuing that work during a sabbatical leave, I was extremely fortunate to land a half-time position as a visiting scholar at the MIT Libraries in fall 2018, which helped me integrate that service agenda with an emerging research agenda around scholarly communication.

The position was sponsored by a group of libraries organized by the Association of Research Libraries — MIT, UCLA, the University of Arizona, Ohio State University, and the University of Pittsburgh — and hosted by the new Center for Research on Equitable and Open Scholarship (CREOS) at MIT. My principal collaborator has been Micah Altman, the director of research at CREOS.

The semester was framed by the MIT Grand Challenges Summit in the spring, which I attended, and the report that emerged from that meeting: A Grand Challenges-Based Research Agenda for Scholarly Communication and Information Science, on which I was a collaborator. The report, published in December, describes a vision for a more inclusive, open, equitable, and sustainable future for scholarship; it also characterizes the barriers to this future, and identifies the research needed to bring it to fruition.

Sociology and SocArXiv

Furthering my commitments to sociology and SocArXiv, I continued to work on the service. SocArXiv is growing, with increased participation in sociology and other social sciences. In the fall the Center for Open Science, our host, opened discussions with its paper-serving communities about weaning the system off COS's core foundation financial support and using contributions from each service to make it sustainable (thus far we have not paid COS for its development and hosting). This was an expected challenge, which will require some creative and difficult work in the coming months.

Finally, at the start of the semester I noted that most sociologists — even those interested in open access issues — were not familiar with current patterns, trends, and debates in the scholarly communications ecosystem. This has hampered our efforts to build SocArXiv, as well as our ability to press our associations and institutions for policy changes in the direction of openness, equity, and sustainability. In response to this need, especially among graduate students and junior scholars, I drafted a scholarly communication primer for sociology, which reviews major scholarly communication media, policies, economic actors, and recent innovations. I posted a long draft (~13,000 words) for comment in January, and received a very positive response. It appears that a number of programs will incorporate the revised primer into their training, and many individuals are already reading and sharing it with their networks.

Peer review

One of the chief barriers identified in the Grand Challenges report is the lack of systematic theory and empirical evidence to design and guide legal, economic, policy, and organizational interventions in scholarly publishing and in the knowledge ecosystem generally. As social scientists, Micah and I drew on this insight and used the case of peer review in sociology as an entry point. We presented our formative analysis of this case in the CREOS Research Talk, “Can We Fix Peer Review?” Here is the summary of this talk:

Contemporary journal peer review is beset by a range of problems. These include (a) long delay times to publication, during which time research is inaccessible; (b) weak incentives to conduct reviews, resulting in high refusal rates as the pace of journal publication increases; (c) quality control problems that produce both errors of commission (accepting erroneous work) and omission (passing over important work, especially null findings); (d) unknown levels of bias, affecting both who is asked to perform peer review and how reviewers treat authors; and (e) opacity in the process that impedes error correction and more systematic learning, and enables conflicts of interest to pass undetected. Proposed alternative practices attempt to address these concerns — especially open peer review, and post-publication peer review. However, systemic solutions will require revisiting the functions of peer review in its institutional context.

The full slides, with embedded video of the talk (minus the first few minutes), are below:

Research design and intervention

Mapping out the various interventions and proposed alternatives in the peer review space raised a number of questions about how to design and evaluate interventions in a complex system with interdependent parts and actors embedded in different institutional logics — for example, university researchers (some working under state policy), research libraries, for-profit publishers, and academic societies. Working with Jessica Polka, Director of ASAPbio, we are expanding this analysis to consider a range of innovations in open science. This analysis highlights the need for systematic research design that can guide initiatives aimed at altering the scholarly knowledge ecosystem.

Applying the ecosystem approach in the Grand Challenges report, we consider large-scale interventions in public health and safety, and their unintended consequences, to build a model for designing projects with the intention of identifying and assessing such consequences across the system. Addressing problems at scale may have unintended effects such as vulnerable populations adapting to new technology in harmful ways (mosquito nets used for fishing); new opportunities for harmful competitors (the pesticide treadmill); the displacement of private actors by public goods (dentists driven away by public water fluoridation); and risk compensation by those who receive public protection (anti-lock brakes and riskier driving, vaccinations). Our forthcoming white paper will address such risks in light of recent open science interventions: PLOS One, bioRxiv and preprints generally, and open peer review, among others. We combine research design methods for field experiments in social science, outcomes identified in the Grand Challenges report, and an ecosystem theory based on an open science lifecycle model.

ARL/SSRC meeting and next steps

Coming out of discussions at the first O3S meeting, in December the Association of Research Libraries and the Social Science Research Council convened a meeting on open scholarship in the social sciences, which included leaders from scholarly societies, university libraries, researchers advocating for open science, funders, and staff from ARL, SSRC, and the Coalition for Networked Information. I was fortunate to participate on the planning committee for the meeting, and in that capacity I conducted a series of short video interviews with individual stakeholders from the participating organizations to help expose us all to the range of values, objectives, and concerns we bring to the questions we collectively face in the movement toward open scholarship.

For our own work on peer review, which we presented at the meeting, I was especially drawn to the interviewees’ comments on transparency, incentives, and open infrastructure. In particular, MIT Libraries Director Chris Bourg challenged social scientists to recognize what their own research implies for the peer review system:

Brian Nosek, director of the Center for Open Science, stressed the need to consider incentives for openness in our interventions:

And Kathleen Fitzpatrick, project director for Humanities Commons, described the necessity of open infrastructure that is flexibly interoperable, allowing parallel use by actors on diverse platforms:

These insights about intervention principles for an open scholarly ecosystem helped Micah and me develop a proposal for discussion at the meeting. Our proposed program, IOTA (I Owe The Academy), aims to solve the supply-and-demand problem for quality peer review in open science interventions (the name is likely to change). We understand that most academics are willing to do peer review when it contributes to a better system of scholarship. At the same time, new peer review projects need (good) reviewers in order to launch successfully. And the community needs (good) empirical research on the peer review process itself. The solution is to match reviewers with initiatives that promote better scholarship using a virtual token system, whereby reviewers pledge review effort units, which are distributed to open peer review projects — while collecting data for use in evaluation and assessment. After receiving positive feedback at the meeting, we will develop this proposal further.
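As a rough illustration of the mechanics described above, here is a minimal sketch of a pledge-and-allocate ledger. IOTA is only a proposal, so the class, method, and project names here are hypothetical, not part of any agreed design.

```python
# Hypothetical sketch of the IOTA idea: reviewers pledge review-effort
# units, units are allocated to open peer review projects, and every
# allocation is logged so the process itself can be studied.
from collections import defaultdict

class ReviewLedger:
    def __init__(self):
        self.pledged = defaultdict(int)   # reviewer -> unallocated units
        self.allocations = []             # (reviewer, project, units) log

    def pledge(self, reviewer: str, units: int) -> None:
        """A reviewer commits review-effort units to the pool."""
        self.pledged[reviewer] += units

    def allocate(self, reviewer: str, project: str, units: int) -> bool:
        """Assign pledged units to an open peer review project."""
        if self.pledged[reviewer] < units:
            return False
        self.pledged[reviewer] -= units
        self.allocations.append((reviewer, project, units))
        return True

ledger = ReviewLedger()
ledger.pledge("reviewer_a", 3)
ledger.allocate("reviewer_a", "overlay_review_pilot", 2)
print(ledger.pledged["reviewer_a"], ledger.allocations)
# 1 [('reviewer_a', 'overlay_review_pilot', 2)]
```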

Our presentation is embedded in full below:

A report on the ARL/SSRC meeting describes the shared interests, challenges to openness, and conditions for successful action discussed by participants. And it includes five specific projects they agreed to pursue — one of which is peer review on the SocArXiv and PsyArXiv paper platforms.

What’s next…

In the coming several months we expect to produce a white paper on research design, a proposal for IOTA, and a presentation for the Coalition for Networked Information meeting in April, to spark a discussion about the ways libraries can jointly support additional targeted work to promote, inspire, and support evidence-based research. And a revised version of the scholarly communication primer for sociology is on the way.