Sociologists: Don’t embargo your dissertation

Your work is important. Don’t hide it. (PNC video)

This post is about the practice of putting your dissertation under an embargo, which means your university library, and probably its agent, ProQuest, don’t let people read it for a certain amount of time, sometimes only a few months, sometimes many years. At my school, the University of Maryland, the graduate school is implementing a new policy that allows two-year embargoes without special permission (down from six years), and longer embargoes only with permission of the advisor and the dean.

Are you in this to advance knowledge? If so, don’t embargo your dissertation. By definition, a dissertation is a contribution to knowledge. By definition, keeping people from reading it stops that from occurring.

Many PhD graduates embargo their dissertations because it feels like the safer thing to do: they’re vaguely worried about sharing their work, either because it’s so good someone will steal it, or because it’s so bad it will embarrass them — or, weirdly, both. Many people don’t think about it seriously, don’t read up on the question, and don’t discuss it with knowledgeable mentors (which your PhD advisor very likely is not, at least on this question). Lots of good people make this mistake, and that’s a shame. I’m writing this post so that, if you see it before you face this choice, there’s a chance my nagging voice will get stuck in your head.

Some graduate students think they’re being exploited and someone is going to make money off their work. Probably not. (You may have been exploited as a graduate student, and you might have good reasons for disliking your university, but this isn’t about making your university happy.) Maybe your dissertation will lead to an important book that lots of people will read — that is wonderful, and I hope it does. Of course, that happens to only a very small minority of dissertations, even among really good ones that make important contributions to knowledge. But unless you already have a contract and a publisher telling you that without an embargo the deal is off — a situation that is vanishingly rare if it occurs at all, at least in sociology — making your dissertation publicly available will not hurt (and will probably help) your chances of accomplishing that goal. And if you’re going to publish articles based on your dissertation, no reputable journal will turn them away because they have overlapping content with your dissertation.

Some graduate students are afraid they will get “scooped” or their ideas will be “stolen.” This is profoundly misguided. You are doing the work so that people will read it. People are going to do what they do. You might be taking a small risk to your personal interest by making your work public, but consider it against the benefit of people reading it (which is, after all, the reason you should have written it). This is your finished work. It’s done. By definition it can’t be scooped. It can be plagiarized, like anything else. Would it be awkward or disappointing if someone published something similar that made similar contributions? Maybe. Will that substantially harm your career or personal interests? Very unlikely.* If you had a good idea, it will probably lead to more. Your ideas and your efforts in the dissertation are on the record now. Be proud of them, take credit for them, encourage people to engage with them, and hope that they will be inspired to do work that follows your lead. If your dissertation is good, it’s worth the risk — because you want people to read it. If your dissertation is bad, there is no risk anyway.

Will making your dissertation public hurt your chances of publishing a book? Almost certainly not. As an editor at Harvard University Press wrote:

“Generally speaking, when we at HUP take on a young scholar’s first book, whether in history or other disciplines, we expect that the final product will be so broadened, deepened, reconsidered, and restructured that the availability of the dissertation is irrelevant.”

And they quoted an assistant editor who went further, suggesting that making your dissertation available improves your chances of getting a book contract:

“I’m always looking out for exciting new scholarship that might make for a good book, whether in formally published journal articles and conference programs, or in the conversation on Twitter and in the history blogosphere, or in conversations with scholars I meet. And so, to whatever extent open access to a dissertation increases the odds of its ideas being read and discussed more widely, I tend to think it increases the odds of my hearing about them.”

Or, as Eric Schwartz, the editorial director at Columbia University Press, wrote in a tweet about sharing dissertations: “No problem. Book and dissertation are for different audiences.”

Of course there may be exceptions. If you have an editor on the hook who insists on an embargo, consider the pros and cons. If you have only a vague hope of publishing it down the road, don’t bother.

Do you want to win awards so everyone is talking about your dissertation? Don’t embargo it. Thanks to a 2015 change in policy at the American Sociological Association:

“To be eligible for the ASA Dissertation Award, nominees’ dissertations must be publicly available in Dissertation Abstracts International or a comparable outlet. Dissertations that are not available in this fashion will not be considered for the award.”

There are real, important principles at stake. Hate on your universities all you want, but some of their lofty rhetoric is true and good — and we should be holding them to it, not scoffing at it. Many universities, like the University of California system, have policies based on such high-minded statements as this:

“The University of California is committed to disseminating research and scholarship conducted at the University as widely as possible…. The University affirms the long-standing tradition that theses and dissertations, which represent significant contributions to the advancement of knowledge and the scholarly record, should be shared with scholars in all disciplines and the general public.”

Embargoing the work for years violates the spirit of such a principled policy, even if an embargo is technically allowed. Making your work accessible only years later deprives the public of “significant contributions to the advancement of knowledge and the scholarly record” during the most important period in the life of the work — the years right after it’s done.

Here’s the statement from the University of Chicago:

“The public sharing of original dissertation research is a principle to which the University is deeply committed, and dissertations should be made available to the scholarly community at the University of Chicago and elsewhere in a timely manner. If dissertation authors are concerned that making their research publicly available might endanger research subjects or themselves, jeopardize a pending patent, complicate publication of a revised dissertation, or otherwise be unadvisable, they may, in consultation with faculty in their field (and as appropriate, research collaborators), restrict access to their dissertation for a limited period of time.”

Some people might skim through this policy and say, “Oh, cool, they allow an embargo,” and just check the box requesting it. But that’s making a powerful statement against the important principle articulated in this policy. If you don’t have a really good reason to embargo your dissertation — and you almost certainly don’t — the public interest demands that you make it public. Take the value of your work seriously. Not its commercial value, its actual value — which is its value to the people who want to read it.

There is also an important accountability principle at stake. Should PhDs be awarded in secret, with no accountability beyond the committee room walls, until years later? For those of us on the faculty, how are we to evaluate programs and their candidates if we can’t scrutinize their most important works? How can we claim to be reputable programs if we shroud our work behind embargoes? Without at least this bottom-line transparency, there can be little accountability.

I write this post out of a certain sense of shame. I’m the director of graduate studies in our department, and I haven’t made it a priority to talk to students about this, because I didn’t know it was happening. When I looked at the dissertations from our department, which are archived in the Digital Repository at the University of Maryland (or, if they are embargoed, merely listed), I saw that among the last 19 dissertations, 12 were currently embargoed. The seven that were made public have been downloaded 1,200 times.

If you want to embargo your dissertation, or if someone is telling you that you should, the burden is on you (or them) to prove that the real benefits of the embargo — not just for you, but for the contribution to knowledge that your work represents — are greater than the harm of denying readers access to your research. The default must be to share our dissertations, with rare exceptions only when real (not imagined or rumored) circumstances demand that the public interest in access to knowledge be sacrificed.


* My dissertation, completed in 1999, although excellent, was not especially original. My major contributions were updating research on a longstanding theory to (a) use more recent data, (b) include women, and (c) use hierarchical linear models. My dissertation was titled, “Black Population Size and the Structure of United States Labor Market Inequality.” In 1997, as I was hard at work, with a chapter under review at Social Forces (which I had already presented at two conferences), an article appeared (in Social Forces!) titled, “Black Population Concentration and Black-White Inequality: Expanding the Consideration of Place and Space Effects.” The authors (a) used the new data I was using, (b) included women, and (c) used models fancier than mine. I was crushed. And then, with my advisor’s help, I got over it. My article (with a citation to theirs added) got published the next year anyway, titled, “Black Concentration Effects on Black-White and Gender Inequality: Multilevel Analysis for U.S. Metropolitan Areas.” People read both articles. And then I went on to do a bunch more work in that area, with great collaborators, building up a body of research that drew from my dissertation but went much further in terms of theory, methods, and data. My article got cited plenty, partly because it was part of a group of articles that traveled together. I was “scooped,” but they didn’t get their ideas from sneaking a look at my brilliant work in progress; theirs were logical next steps in a 40-year trajectory of research on an established set of questions. Their publication strengthened the field in which I was working. (In fact, if they had stolen my ideas their paper would have been worse for them, and less damaging to me.)

The American Sociological Association is collapsing and its organization is a perpetual stagnation machine

I am at the end of a three-year term as an elected member of the American Sociological Association (ASA) Committee on Publications, during which time, despite some efforts, I achieved none of the goals in my platform. I would like to say I learned a lot or whatever, but I didn’t. I enjoyed the time spent with colleagues at the meetings (and we did some rubber stamping and selecting editors, which someone had to do), but beyond that it was a waste of time. Here are some reflections.

First, some observations about sociology as a discipline, then about the ASA generally, and then about the situation with the Committee on Publications.

Sociology

Sociology has occupied a rapidly declining presence in US higher education for two decades. The percentage of bachelor’s, master’s, and PhD degrees awarded in sociology peaked at the end of the last century:

[Figure: sociology’s share of degrees awarded, by level]

Looking just at the number of PhDs awarded, you can see that since the 2009 recession (scaled to 0 in this figure), sociology is one of the social science disciplines that has slipped, while psychology, economics, business, and political science have grown (along with STEM disciplines).

[Figure: PhDs awarded by discipline, relative to 2009]

The American Sociological Association

So, sociology as an academic discipline is in decline. But how is ASA doing — is it fair for me to say “collapsing” in the title of this post? The shrinking of the discipline puts a structural squeeze on the association: to maintain its current size, ASA would need a growing share of a shrinking field. The prospects for that seem dim.

On the plus side, the association publishes several prominent journals, which were the ostensible subject of our work on the publications committee. Metrics differ, but two ASA journals are in the top ten sociology journals by five-year citation impact factor in Web of Science (American Sociological Review and Sociology of Education [the list is here]). In the Google Scholar ranking of journals by h5-index, which uses different subject criteria, only one (ASR) is in the top 20 sociology journals, ranking 20th out of the combined top 100 from economics, social science general, sociology, anthropology, and political science (the list is here). In terms of high-impact research, among the top 100 most-cited Web of Science sociology papers published in 2017 (an arbitrarily chosen recent year), seven were published in ASA journals (five in American Sociological Review [the list is here]). The 2020 Altmetric Top 100 papers, those gaining the most attention in the year (from sources including news media and social media), include 35 from the humanities and social sciences, none of which were published by ASA (although several are by sociologists). So ASA is prominent but not close to dominant within sociology, which is similarly situated within the social sciences. In terms of publications, you can’t say ASA is “collapsing.” (Plus, in 2019 ASA reported $3 million in revenue from publications, 43 percent of its total non-investment income.)

But in terms of membership, the association is leading the way in the discipline’s decline. The number of members in the association fell 24 percent from 2007 to 2019, before nosediving a further 16 percent last year. Relative to the number of PhDs completing their degrees, as one scale indicator, membership has fallen 42 percent since 2007 — from 26 paying members per PhD awarded to 15. Here are the trends:

[Figure: ASA membership trends]

Clearly, the organization is in a serious long-term decline with regard to membership. How will an organization of sociologists, including organizational sociologists, react to such an organizational problem? Task force! A task force on membership was indeed seated in 2017, and two years later it issued its report and recommendations. To begin with, the task force reported that ASA’s membership decline is steeper than that of 11 other (unnamed) disciplinary societies:

[Figure: membership trends across disciplinary societies]

They further reported that only 36 percent of members surveyed consider the value of belonging to ASA equal to or greater than the cost, while 48 percent said it was overpriced. Further, 69 percent of members who didn’t renew listed cost of membership as an important reason, by far — very far — the most important factor, according to their analysis. Remarkably, given this finding, the report literally doesn’t say what the member dues or meeting registration fees are. Annual dues, incidentally, increased dramatically in 2013, and now range from $51 for unemployed members to $368 for those with incomes over $150,000 per year, apparently averaging $176 per member (based on the number of members and membership revenue declared in the audit reports).

Not surprisingly, then, although they did recommend “a comprehensive review of our membership dues and meeting registration fee structures,” they had no specific recommendations about member costs. Instead they recommended: creating new ways for sociologists to form subgroups within the association; “rethink[ing] the Annual Meeting and develop[ing] a variety of initiatives, both large and small”; removing institutional affiliations from name badges and making the first names bigger; giving a free section membership to new members (~$10 value); anniversary-based instead of calendar-based annual pricing; holding the meeting in a wider “variety of cities”; more professional development (mechanism unspecified); more public engagement; moving the paper deadline a couple of weeks and considering other changes to paper submission; and providing more opportunities for member feedback. Every recommendation was unanimously approved by the association’s elected council. The following year membership fell another 16 percent, with some unknown portion of the drop attributable to the pandemic and the canceled annual meeting.

With regard to the membership crisis, my assessment is that ASA is a model of organizational stagnation and failure to respond in a manner adequate to the situation. The sociologist members, through their elected council, seem to have no substantial response, which will leave it to the professional staff to implement emergency measures as revenue drops in the coming years. One virtually inevitable outcome is the association further committing to its reliance on paywalled journal publishing and the profit-maximizing contract with Sage, and opposing efforts to open access to research for the public.

Committee on Publications

But it is on the publications committee, and in its interactions with the ASA Council, that I have gotten the best view of the association as a perpetual stagnation machine.

I can’t say that the things I tried to do on the publications committee would have had a positive effect on ASA membership, journal rankings, majors, or any other metric of impact for the association. However, I do believe what I proposed would have helped the association take a few small steps in the direction of keeping up with the social science community on issues of research transparency and openness. In November I reported how, more than two years ago now, I proposed that the association adopt the Transparency and Openness Promotion Guidelines from the Center for Open Science, and to start using their Open Science Badges, which recognize authors who provide open data, open materials, or use preregistration for their studies. (In the November post I discussed the challenge of cultural and institutional change on this issue, and why it’s important, so I won’t repeat that here.)

The majority of the committee was not impressed at the beginning. At the January 2019 meeting the committee decided that an “ad hoc committee could be established to evaluate the broader issues related to open data for ASA journals.” Eight months later, after an ad hoc committee report, the publications committee voted to “form an ad hoc committee [a different one this time] to create a statement regarding conditions for sharing data and research materials in a context of ethical and inclusive production of knowledge,” and to, “review the question about sharing data currently asked of all authors submitting manuscripts to incorporate some of the key points of the Committee on Publications discussion.” The following January (2020), the main committee was informed that the ad hoc committee had been formed, but hadn’t had time to do its work. Eight months later, the new ad hoc committee proposed a policy: ask authors who publish in ASA journals to declare whether their data and research materials are publicly available, and if not why not, with the answers to be appended in a footnote to each article. And then the committee approved the proposal.

Foolishly, last fall I wrote, “So, after two years, all articles are going to report whether or not materials are available. Someday. Not bad, for ASA!” Yesterday the committee was notified by ASA staff that, “Council is pleased that Publications Committee has started this important discussion and has asked that the conversation be continued in light of feedback from the Council conversation.” In other words, they rejected the proposal and they’ll tell us why in another four months. There is no way the proposal can take effect for at least another year — or about four years after the less watered-down version was initially proposed, and after my term ends. It’s a perpetual stagnation machine.

Meanwhile, I reviewed 24 consecutive papers in ASR, and found that only four provided access to the code used and at least instructions on how to find the data. Many sociologists think this is normal, but in the world of academic social science it is not normal; it’s far behind normal.

I don’t know if the Council is paying attention to the Task Force on Membership, but if they were it might have occurred to them that recruiting people to run for office, having the members elect them based on a platform and some expertise, having them spend years on extremely modest, eminently sensible proposals, and then shooting those down with a dismissive “pleased [you have] started this important discussion” — is not how you improve morale among the membership.

Remember that petition?

While I’m at it, I should update you on the petition many of you signed in December 2019, in opposition to the ASA leadership sending a letter to President Trump against a potential federal policy that would make the results of taxpayer-funded research immediately available to the public for free — presumably at some cost to ASA’s paywall revenues. At the January 2020 meeting the publications committee passed two motions:

  1. For the Committee on Publications to express opposition to the decision by the ASA to sign the December 18, 2019 letter.
  2. To encourage Council to discuss implications of the existing embargo and possible changes to the policy and to urge decisionmakers to consult with the scientific community before making executive orders.

We never heard back from the ASA Council, and the staff who opposed the petition were obviously in no rush to follow up on our entreaty to them, so it disappeared. But I just went back to March 2020 Council minutes, and found this profoundly uninformative tidbit:

Council discussed a recent decision of ASA’s authorized leadership to sign a letter expressing concern about an executive order related to scientific publishing rumored to be coming out with almost no notice or consultation with the scientific community. A motion was made by Kelly to affirm ASA’s policy for making time sensitive decisions about public statements and to confirm that the process was properly followed in this instance. Seconded by Misra. Motion carried with 16 for and 2 abstentions.

This doesn’t mention the substance of the dispute (the publications committee’s objection to the leadership’s statement) or the fact that more than 200 people signed a letter that read, in part: “We oppose the decision by ASA to sign this letter, which goes against our values as members of the research community, and urge the association to rescind its endorsement, to join the growing consensus in favor of open access to scholarship, including our own.” To my knowledge no member of the ASA leadership, whether elected sociologists or administrative staff, has responded publicly to this letter. Presumably, the terrible statement sent by the ASA leadership still represents the position of the association — the association that speaks for a rapidly dwindling number of us.

Side note: An amazing and revealing thing happened in the publications committee meeting where we discussed this statement in January 2020. The chair of the committee read a prepared statement, presumably written by the ASA staff, to introduce the voting on my proposal:

The committee has a precedent that many of you are already aware of, of asking people to leave for votes on the proposals they submitted…. This practice is designed to ensure that the committee members can have a full and open discussion. So, Philip, I’d like to ask you to recuse yourself now for the final two items, which you can simply do by hanging up the phone…

Needless to say, I refused to leave the meeting for the discussion on my proposal, as there is no such policy for the committee. (If you know of committee meetings where the person making a proposal — an elected representative — has to leave for the discussion and vote, please let me know.) It was just an attempt to railroad the decision, and other members stepped in to object, so they dropped it. The motion passed, and council ignored it, so seriously who cares, but still. (The minutes for this meeting don’t reflect this whole incident, but I have verbatim notes.) 

You will forgive me if, after this multi-year exercise in futility, I am not inclined to be optimistic regarding the Taskforce on Membership’s Recommendation #10: “Enhance and increase communications from ASA to members and provide opportunities for ASA members to provide ongoing feedback to ASA.” I have one more meeting in my term on the publications committee, but it doesn’t seem likely I’ll be there.

Policy implications are discussed (often to poor effect, in sociology journals)

Commentary, data, suggestions.

Watch where you’re going. (PNC photo: https://flic.kr/p/2gRHfd5.)

The ritualistic invocation of “policy implications” in sociology writing is puzzling. I don’t know its origin, but it appears to have come (like so much else that we cherish because we despise ourselves) from economists. The Quarterly Journal of Economics was the first (in the JSTOR database) to use the term in an abstract, including it 11 times over the 1950s and 1960s before the first sociology journal (Journal of Health and Social Behavior) finally followed suit in 1971.

That 1971 article projected a tone that persists to this day. In a paragraph tacked onto the end of the paper, Kohn and Mercer speculated that inflated claims about the dangers of marijuana “may actually contribute to dangerous forms of drug abuse among less well-educated youth” (although the paper was a survey of college students). “If this is the case,” they continued, “then the best corrective may be to revise law, social policy, and official information in line with the best current scientific knowledge about drugs and their effect.” The analysis in the paper had nothing to do with anti-drug policy, instead pursuing an interesting empirical examination of the relationship of ideology (rebellious versus authoritarian) and drug use. The “implications” are vague and unconnected to any actually existing policy debate (and none is cited). Because they are in this case both banal and hopelessly idealistic — intellectual bedfellows that find themselves miserably at home in the sociological space many in the public deride as “academic” — it’s hard to imagine the paper having any policy effect. Not that there’s anything wrong with that.

Fifty years later, “policy implications” has become an institution in academic sociology — by no means universal, but a fixed feature of the landscape, demanded by some editors, reviewers, advisors, and funders. The prevalence of this trope coincides with the imperative for “engagement” (which I’ve written about previously) driven both by our internal sense of mission and our capitulation to external pressure to justify the existence of our work. These are admirable impulses, but they’re poorly served by many of our current practices. I hope this discussion of “policy implications,” and the suggestions that follow, help push us toward more productive responses.

How it’s done

Most sociologists don’t do a lot of policy work. It’s not our language or social or professional milieu, and often not part of our formal training. So what do we mean, in theory and practice, when we offer “policy implications” for our research? There is a very wide range of applications, from evaluations of specific local policies to critiques of state power itself. I collected a lot of examples, which I’ll describe below, but first a very prominent one: “Social Conditions as Fundamental Causes of Health Inequalities: Theory, Evidence, and Policy Implications,” by Phelan, Link, and Tehranifar (2010). Their promise of policy implications is right in the title. From the policy implications section, here is a list of policies intended to reduce inequality in social conditions:

“Policies relevant to fundamental causes of disease form a major part of the national agenda, whether this involves the minimum wage, housing for homeless and low-income people, capital-gains and estate taxes, parenting leave, social security, head-start programs and college-admission policies, regulation of lending practices, or other initiatives of this type.”

Then, in the conclusion, they explain that in addition to leveling inequalities in social condition, we need policies that “minimiz[e] the extent to which socioeconomic resources buy a health advantage” — in the U.S. context, this is interpretable as universal healthcare.

These are almost broad enough — considered together — to constitute a worldview (or perhaps a party platform) rather than a specific policy prescription. If this were actual policy analysis, we would have to be concerned with, for example, the extent to which policies to raise the minimum wage, raise taxes, house the homeless, and expand educational opportunity actually produce reductions in inequality, and which of these is most effective, or important, or feasible, and so on. But this is not policy analysis, and none is cited. These are one step down from documenting wage disparities and offering socialism as “policy implications.” This is a review paper, mostly theory and a summary of existing evidence — which makes such broad implications more suitable here than in many narrow empirical papers (see below). It has been very influential, reaching thousands of students and researchers, and maybe people in policy settings as well (one could try to assess that), by helping to establish the connection between health inequality and inequality on other dimensions. Important work. But the way I read the term, this is too broad to be reduced to “policy implications” — it’s more like social implications, or theoretical implications.

127 more examples

To generalize about the practice of “policy implications,” I collected some data. I used a “topic” search in Web of Science, which searches title, abstract, and keywords, for the phrase “policy implications,” in articles from 2010 to 2020. This tree map from WoS shows the disciplinary breakdown of the journals with the search term, which remains dominated by economics.

I chose the sociology category, then weeded out journals that were very interdisciplinary (like Journal of Marriage and Family), and some articles that turned out to be false positives, and ended up with 127 articles in these 52 journals.*

First I read all the abstracts and came up with a three-category code for abstracts that (1) had specific policy implications, (2) made general policy pronouncements, or (3) just promised policy implications. Here are some details.

Of the 127 abstracts, only two had what I read as specific policy implications. Martin (2018) wrote, “for dietary recommendations to transform eating practices it is necessary to take into account how, while cooking, actors draw on these various forms of valuation.” And Andersen and van de Werfhorst (2010) wrote, “strengthening the skill transparency of the education system by increasing secondary and tertiary-level differentiation may strengthen the relationship between education and occupation.” These aren’t as specific as particular pieces of legislation or policies, but close enough.

I put 29 papers in the general pronouncements category. For example, I put Phelan, Link, and Tehranifar (2010) in this category. In another, Wiborg and Hansen (2018) wrote that their findings implied that “increasing equal educational opportunities do not necessarily lead to greater opportunities in the labor market and accumulation of wealth” (reading inside the paper confirmed this is the extent of the discussion). An example from Stoilova, Ilieva-Trichkova, and Bieri (2020) is archetypal: “The policy implications are to more closely consider education in the transformation of gender-sensitive norms during earlier stages of child socialization and to design more holistic policy measures which address the multitude of barriers individuals from poor families and ethnic/migrant background face” (reading inside the paper, there are several other statements at the same level). I read three other papers in this category and found similar general implications, e.g., “if the policy goal is to enhance the bargaining position of labour and increase its share of income, spending policy should prioritise the expenditures on the public sector employment” (Pensiero 2017).

“Policy implications are discussed”

The largest category, 97 papers (76%), offered no policy implications in the abstract, but rather some version of “policy implications are discussed.” It is an odd custom, to mention the existence of a section in the paper without divulging its contents. Anyway, to get a better sense of what “policy implications are discussed” means, I randomly sampled 10 of the papers in this category to read the relevant section. (I have no beef with these papers or their authors; they were selected randomly, and I’m only commenting on what may be the least important aspect of their contributions.)

The first category among these, with 5 of the 10 papers, comprises papers without substantive policy contributions. Some have banal statements at the end, which the author and most readers probably already believed, such as, “If these results are replicated, programs should be implemented that will solicit the help of grandparents in addition to parents” (Liu 2016). I also include here Visser et al. (2013), who conclude that their “findings show general support for basic ecological perspectives of fear of crime and feelings of unsafety,” e.g., that reducing crime in the absence of better social protection will not improve levels of fear and feelings of unsafety. I code this one as without substantive policy contribution because that’s a big claim about the entire state policy structure, which would require much more evidence to adjudicate, much less implement, and the paper offers only a small empirical nudge in one direction (which, again, is fine!).

Several in this category offered essentially no policy implications. This includes Wang (2010), who states at the outset that “the question of motives for private transfers is one with important policy implications” for public transfer programs like food stamps and social security, but never comes back to discuss policies relevant to the results. And Barrett and Pollack (2011), who recommend that health practitioners develop better understanding of the issues raised and that “contemporary sexual civil rights efforts” pay more attention to sexual discrimination. Finally, Lepianka (2015) reports on media depictions of poverty and related policy, but doesn’t offer any implications of the study itself for policy. So, half of these had abstracts that were overpromising in terms of policy.

The other 5 papers do include substantive policy implications, explored to varying degrees. One is hard-hitting but brief: Shwed et al. (2018), whose analysis has direct implications which they do not thoroughly discuss. Their “unequivocal” result is that “multicultural schools, with their celebration of difference, entail a cost in terms of social integration compared to assimilationist schools—they enhance ethnic segregation in friendship networks. … While empowering minorities, it enhances social closure between groups.” The empirical analysis they did could no doubt be used as part of a policy analysis on the question of cultural orientation of schools in Israel.

Three offer sustained policy discussions, including the very specific: an endorsement of prison-based dog training programs (Antonio et al. 2017); a critique of sow-housing policy in the European Union (de Krom 2015); and recommendations for environmental lending practices at the World Bank (Sommer et al. 2017). The fifth qualifies, albeit at a very macro level: Gauchat et al.’s (2011) analysis of economic dependence on military spending in metropolitan areas, the implications of which surpass everyday policy debates but are of course relevant.

To summarize my reading, with percentages based on extrapolating my subsample (so, wide confidence intervals): 23% of papers promising policy implications had none, and 38% had either vague statements or general statements that did not rely on empirical findings in the paper. The remaining 40% had substantive policy discussion and/or specific recommendations.
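For readers who want to check that extrapolation, here is a minimal sketch of the arithmetic in Python, using only the counts reported above (the variable names are mine, introduced for illustration):

```python
# Counts reported above: 127 abstracts, of which 2 had specific
# implications, 29 made general pronouncements, and 97 said only that
# "policy implications are discussed." In the random subsample of 10
# "discussed" papers: 3 had essentially none, 2 had only banal or vague
# statements, and 5 had substantive discussion or recommendations.
total, specific, general, discussed = 127, 2, 29, 97

none_est = discussed * 3 / 10                    # ~29 papers with none
vague_est = general + discussed * 2 / 10         # ~48 vague or general
substantive_est = specific + discussed * 5 / 10  # ~51 substantive/specific

for label, n in [("none", none_est),
                 ("vague/general", vague_est),
                 ("substantive", substantive_est)]:
    print(f"{label}: {n / total:.0%}")
# Prints: none: 23%, vague/general: 38%, substantive: 40%
```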

This is a quick coding and not validated. Others might treat differently papers that report an effect and then recommend changing the prevalence of the independent variable — e.g., poverty causes poor health; policies should reduce poverty — which I coded as not substantive. For example, I coded this from Thoits (2010) as not substantive or specific: “policies and programs should target children who are at long-term health risk due to early exposure to poverty, inadequate schools, and stressful family circumstances.” You could say, “policies should attempt to make life better,” but it’s not clear you need research for that. Anyway, my own implications (below) don’t depend on a precise accounting.

Implications

I am really, really not saying these are bad papers, or wrong to do what they did. I am not criticizing them, but rather the institutional convention that classifies the attempt to make our research relevant as “policy implications,” even when we have nothing specific to say about real policies, and then rewards sociologists for shoehorning their conclusions into such a frame.

Let me give an example of an interesting and valuable paper that is burdened by its policy implications. “The impact of parental migration on depression of children: new evidence from rural China,” by Yue et al. (2020) used a survey of families in China to assess the relationship between parental migration, remittances, household labor burdens, parent-child communication, and children’s symptoms of depression. After regression models with direct and indirect effects on children’s depression, including both children who were “left behind” by migrating parents and those who weren’t, they conclude: “non-material resources (parent-child communication, parental responsiveness, and self-esteem) play a much more important role in child’s depression than material resources (remittances).” Interesting result. Seems well done, with good data. The policy suggestions that follow are to encourage parent-child communication (e.g., through local government programs) and teach children in school that they are not abandoned by parents who migrate.

What is wrong with this? First, Yue et al. (2020) is an example of a common model that amounts to, “based on our regressions, more of this variable would be good.” It seems logical, but a serious approach to the question would have to be based on evidence that such programs actually have their intended effect, and that they would be better than directing the money or other resources toward something else. That would be an unreasonable burden for the authors, and slow the production of useful empirical results. So we’re left with something superficial that distracts more than it adds. Further (and here I hope to win some converts to my view), these policy implication sections are a major source of peer review friction — reviewers demanding them, reviewers hating them, authors contorting themselves, and so on. Much better, in my view, would be to just add the knowledge produced by papers like this to the great hopper of knowledge, and let it contribute to a real policy analysis down the road.

Empirical peer-reviewed sociology articles should be shorter, removing non-essential parts of the paper that are major sources of peer review bog-down. Having different kinds of work reviewed and approved together in a single paper — a lengthy literature review, a theoretical claim, an empirical analysis, and a set of policy implications — creates inefficiencies in the peer review process. Why should a whole 60-page paper be rejected because one part of it (the policy implications, say) is rejected by one out of three reviewers? This is very wasteful. It puts reviewers in a position to review aspects of the work they aren’t qualified to judge. And it skews incentives by rewarding the less important parts of our work. Of course it’s reasonable to spend a few paragraphs stating the relevance of the question in the paper, but not a whole treatise (in the front and back) of every paper.

Advice for sociologists

1. Don’t try to pin big conclusions on a single piece of peer-reviewed empirical research. That’s a sad legacy of a time when publishing was hard, sociologists had few opportunities to do so, and peer-reviewed journals were the source of validation we were expected to rely on. So you devoted years of your life to a small number of “publications,” and those were the sum total of your intellectual production. We have a lot of other ways to express our social and political views now, and we should use them. Your PhD, your job, and your published peer-reviewed research are all sources of legitimacy you can draw on to get people to pay attention to your writing.

2. Write for the right audience. If you are serious about influencing policy, write for staffers doing research for advocacy organizations, activists, or campaigns. If you want to influence the public, write in lay terms in venues that draw regular people as readers. If you want to set the agenda for funding agencies, write review pieces that synthesize research and make the case for moving in the right direction. These are all different kinds of writing, published in different venues. Crucially, none of them rely only on the empirical results of a single analysis, nor should they. The last three paragraphs of your narrow empirical research paper — excellent, important, and cutting-edge as it is — will not reach these different audiences.

3. Stop asking researchers to tack superficial policy implications sections onto the end of their papers. If you are a reviewer or an editor, stop demanding longer literature reviews and conclusions. Start rewarding the most important part of the work, the part you are qualified to evaluate.

4. If you are in an academic department, on a hiring committee, or on a promotion and tenure committee, look at the whole body of work, including the writing outside peer-reviewed journals. No one expects to get tenure from writing an op-ed, but people who work to reach different audiences may be building a successful career in which peer-reviewed research is a foundational building block. Look for the connections, and reward the people who make them.


*The full sample (metadata and abstracts) is available on Zotero. Some are open access, some I got through my library, but all are available from Sci-Hub (which steals them so you don’t have to).

References mentioned in the text:

Andersen, Robert, and Herman G. van de Werfhorst. 2010. “Education and Occupational Status in 14 Countries: The Role of Educational Institutions and Labour Market Coordination.” British Journal of Sociology 61(2):336–55. doi: 10.1111/j.1468-4446.2010.01315.x.

Antonio, Michael E., Rosalyn G. Davis, and Susan R. Shutt. 2017. “Dog Training Programs in Pennsylvania’s Department of Corrections Perceived Effectiveness for Inmates and Staff.” Society & Animals 25(5):475–89. doi: 10.1163/15685306-12341457.

de Krom, Michiel P. M. M. 2015. “Governing Animal-Human Relations in Farming Practices: A Study of Group Housing of Sows in the EU.” Sociologia Ruralis 55(4):417–37. doi: 10.1111/soru.12070.

Gauchat, Gordon, Michael Wallace, Casey Borch, and Travis Scott Lowe. 2011. “The Military Metropolis: Defense Dependence in U.S. Metropolitan Areas.” City & Community 10(1):25–48. doi: 10.1111/j.1540-6040.2010.01359.x.

Kohn, Paul M., and G. W. Mercer. 1971. “Drug Use, Drug-Use Attitudes, and the Authoritarianism-Rebellion Dimension.” Journal of Health and Social Behavior 12(2):125–31. doi: 10.2307/2948519.

Lepianka, Dorota. 2015. “Images of Poverty in a Selection of the Polish Daily Press.” Current Sociology 63(7):999–1016. doi: 10.1177/0011392115587021.

Liu, Ruth X. 2018. “Physical Discipline and Verbal Punishment: An Assessment of Domain and Gender-Specific Effects on Delinquency Among Chinese Adolescents.” Youth & Society 50(7):871–90. doi: 10.1177/0044118X15618836.

Martin, Rebeca Ibanez. 2018. “Thinking with La Cocina: Fats in Spanish Kitchens and Dietary Recommendations.” Food Culture & Society 21(3):314–30. doi: 10.1080/15528014.2018.1451039.

Pensiero, Nicola. 2017. “In-House or Outsourced Public Services? A Social and Economic Analysis of the Impact of Spending Policy on the Private Wage Share in OECD Countries.” International Journal of Comparative Sociology 58(4):333–51. doi: 10.1177/0020715217726837.

Phelan, Jo C., Bruce G. Link, and Parisa Tehranifar. 2010. “Social Conditions as Fundamental Causes of Health Inequalities: Theory, Evidence, and Policy Implications.” Journal of Health and Social Behavior 51:S28–40. doi: 10.1177/0022146510383498.

Shwed, Uri, Yuval Kalish, and Yossi Shavit. 2018. “Multicultural or Assimilationist Education: Contact Theory and Social Identity Theory in Israeli Arab-Jewish Integrated Schools.” European Sociological Review 34(6):645–58. doi: 10.1093/esr/jcy034.

Sommer, Jamie M., John M. Shandra, and Michael Restivo. 2017. “The World Bank, Contradictory Lending, and Forests: A Cross-National Analysis of Organized Hypocrisy.” International Sociology 32(6):707–30. doi: 10.1177/0268580917722893.

Stoilova, Rumiana, Petya Ilieva-Trichkova, and Franziska Bieri. 2020. “Work-Life Balance in Europe: Institutional Contexts and Individual Factors.” International Journal of Sociology and Social Policy 40(3–4):366–81. doi: 10.1108/IJSSP-08-2019-0152.

Thoits, Peggy A. 2010. “Stress and Health: Major Findings and Policy Implications.” Journal of Health and Social Behavior 51:S41–53. doi: 10.1177/0022146510383499.

Visser, Mark, Marijn Scholte, and Peer Scheepers. 2013. “Fear of Crime and Feelings of Unsafety in European Countries: Macro and Micro Explanations in Cross-National Perspective.” Sociological Quarterly 54(2):278–301. doi: 10.1111/tsq.12020.

Wang, Jingshu. 2010. “Motives for Intergenerational Transfers: New Test for Exchange.” American Journal of Economics and Sociology 69(2):802–22. doi: 10.1111/j.1536-7150.2010.00725.x.

Wiborg, Oyvind N., and Marianne N. Hansen. 2018. “The Scandinavian Model during Increasing Inequality: Recent Trends in Educational Attainment, Earnings and Wealth among Norwegian Siblings.” Research in Social Stratification and Mobility 56:53–63. doi: 10.1016/j.rssm.2018.06.006.

Yue, Zhongshan, Zai Liang, Qian Wang, and Xinyin Chen. 2020. “The Impact of Parental Migration on Depression of Children: New Evidence from Rural China.” Chinese Sociological Review 52(4):364–88. doi: 10.1080/21620555.2020.1776601.

Don’t both-sides the war on truth

Glad to see political obituaries for Trump appearing. But don’t let them both-sides it. Case in point is George Packer’s “The Legacy of Donald Trump” in the Atlantic (online version titled “A Political Obituary for Donald Trump”).

Packer is partly right in his comparison of Trump’s lies to those of previous presidents:

Trump’s lies were different. They belonged to the postmodern era. They were assaults against not this or that fact, but reality itself. They spread beyond public policy to invade private life, clouding the mental faculties of everyone who had to breathe his air, dissolving the very distinction between truth and falsehood. Their purpose was never the conventional desire to conceal something shameful from the public.

He’s right that the target is truth itself, but wrong to attribute this to postmodernism. Trump is well-grounded in modernist authoritarianism, albeit with contemporary cultural flourishes. This ground was well covered by Michiko Kakutani, Jason Stanley, and Adam Gopnik, who wrote the week before Trump’s inauguration:

there is nothing in the least “postmodern” about Trump. The machinery of demagogic authoritarianism may shift from decade to decade and century to century, taking us from the scroll to the newsreel to the tweet, but its content is always the same. Nero gave dictates; Idi Amin was mercurial. Instruments of communication may change; demagogic instincts don’t.

This distinction matters, between Trump the modern authoritarian and Trump the victim of a world gone mad. You can see why later in Packer’s piece, when he both-sides it:

Monopoly of public policy by experts—trade negotiators, government bureaucrats, think tankers, professors, journalists—helped create the populist backlash that empowered Trump. His reign of lies drove educated Americans to place their faith, and even their identity, all the more certainly in experts, who didn’t always deserve it (the Centers for Disease Control and Prevention, election pollsters). The war between populists and experts relieved both sides of the democratic imperative to persuade. The standoff turned them into caricatures.

Disagree. Public health scientists and political pollsters are sometimes wrong, and even corrupt, including during the Trump era, but their failures are not an assault on truth itself (I don’t know what about the CDC he’s referring to, but except for some behavior by Trump appointees the same applies). We in the rational knowledge business have not been relieved of our democratic imperatives by the machinations of authoritarians. No matter how we are seen by Trump’s followers, we are not caricatures. The rise of authoritarianism and its populist armies can’t be laid at the feet of the reign of experts. In one sense, of course, anti-vaxxers only exist because there are vaccines. But that’s not a both-sides story. Everyone alive today is alive because of the reign of experts, more or less.

This reminds me of Jonah Goldberg’s ridiculous (but very common, among conservatives) attempt to blame anti-racists for racism: “The grave danger, already materializing, is that whites and Christians respond to this bigotry [i.e., being called racist, homophobic, and Islamophobic] and create their own tribal identity politics.” If Packer objects to the comparison, that’s on him.

That said, the know-nothing movement that Trump now leads obviously creates direct challenges that the forces of truth must rise to meet. The imperative for “engagement” among social scientists — the need to communicate our research and its implications, which I’ve discussed before — is partly driven by this reality. In the social sciences we have an additional burden because our scholarship is directly relevant to politics, so compared with the other sciences we are subject to heightened scrutiny and suspicion — our accomplishments are less the invisible infrastructure of daily survival and more the contested terrain of social and cultural conflict.

And, judging by our falling social science enrollments (except economics), we’re not winning.

So we have a lot of work to do, but we’re not responsible for the war on truth.

Data analysis shows Journal Impact Factors in sociology are pretty worthless

The impact of Impact Factors

Some of this first section is lifted from my blockbuster report, Scholarly Communication in Sociology, where you can also find the references.

When a piece of scholarship is first published it’s not possible to gauge its importance immediately unless you are already familiar with its specific research field. One of the functions of journals is to alert potential readers to good new research, and the placement of articles in prestigious journals is a key indicator.

Since at least 1927, librarians have been using the number of citations to the articles in a journal as a way to decide whether to subscribe to that journal. More recently, bibliographers introduced a standard method for comparing journals, known as the journal impact factor (JIF). This requires data for three years, and is calculated as the number of citations in the third year to articles published over the two prior years, divided by the total number of articles published in those two years.
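In notation, the two-year JIF for year y looks like this (my rendering of the formula just described; the symbols are introduced here for clarity):

\[
\mathrm{JIF}_{y} = \frac{C_{y}\left(A_{y-1} \cup A_{y-2}\right)}{\left|A_{y-1}\right| + \left|A_{y-2}\right|}
\]

where \(A_{y-k}\) is the set of articles the journal published in year \(y-k\), and \(C_{y}(\cdot)\) counts the citations those articles received in year \(y\).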

For example, in American Sociological Review there were 86 articles published in the years 2017-18, and those articles were cited 548 times in 2019 by journals indexed in Web of Science, so the JIF of ASR is 548/86 = 6.37. This allows for a comparison of impact across journals. Thus, the comparable calculation for Social Science Research is 531/271 = 1.96, and it’s clear that ASR is a more widely cited journal. However, comparing journals in different fields using JIFs is less helpful. For example, the JIF for the top medical journal, New England Journal of Medicine, is currently 75, because there are many more medical journals, publishing and citing more articles, at higher rates and more quickly, than sociology journals. (Or maybe NEJM is just that much more important.)

In addition to complications in making comparisons, there are problems with JIFs (besides the obvious limitation that citations are only one possible evaluation metric). They depend on what journals and articles are in the database being used. And they mostly measure short-term impact. Most important for my purposes here, however, is that they are often misused to judge the importance of articles rather than journals. That is, if you are a librarian deciding what journal to subscribe to, JIF is a useful way of knowing which journals your users might want to access. But if you are evaluating a scholar’s research, the fact that they published in a high-JIF journal does not mean that their article will turn out to be important. It is especially wrong to look at an article that’s old enough to have citations you could count (or not) and judge its quality by the journal it’s published in — but people do that all the time.

To illustrate this, I gathered citation data from the almost 2,500 articles published in 2016-2019 in 15 sociology journals from the Web of Science category list.* In JIF these rank from #2 (American Sociological Review, 6.37) to #46 (Social Forces, 1.95). I chose these to represent a range of impact factors, and because they are either generalist journals (e.g., ASR, Sociological Science, Social Forces) or sociology-focused enough that almost any article they publish could have been published in a generalist journal as well. Here is a figure showing the distribution of citations to those articles as of December 2020, by journal, ordered from higher to lower JIF.

After ASR, Sociology of Education, and American Journal of Sociology, it’s hard to see much of a slope here. Outliers might be playing a big role (for example that very popular article in Sociology of Religion, “Make America Christian Again: Christian Nationalism and Voting for Donald Trump in the 2016 Presidential Election,” by Whitehead, Perry, and Baker in 2018). But there’s a more subtle problem, which is the timing of the measures. My collection of articles is 2016-2019. The JIFs I’m using are from 2019, based on citations to 2017-2018 articles. These journals bounce around; for example, Sociology of Religion jumped from 1.6 to 2.6 in 2019. (I address that issue in the supplemental analysis below.) So what is a lazy promotion and tenure committee, which is probably working off a mental reputation map at least a dozen years old, to do?

You can already tell where I’m going with this: In these sociology journals, there is so much noise in citation rates within the journals, compared to any stable difference between them, that outside the very top the journal ranking won’t much help you predict how much a given paper will be cited. If you assume a paper published in AJS will be more important than one published in Social Forces, you might be right, but if the odds that you’re wrong are too high, you just shouldn’t assume anything. Let’s look closer.

Sociology failure rates

I recently read this cool paper (also paywalled, in the Journal of Informetrics) that estimates this “failure probability”: the odds that your guess about which paper will be more impactful, based on the journal title, turns out to be wrong. When JIFs are similar, the odds of an error are very high, like a coin flip. “In two journals whose JIFs are ten-fold different, the failure probability is low,” Brito and Rodríguez-Navarro conclude. “However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.”

Their formulas look pretty complicated to me, so for my sociology approach I just did it by brute force (or, if you need tenure, you could call it a Monte Carlo approach). For each possible pair of journals, I drew 100,000 random pairs of articles, one from each journal, and calculated the percentage of pairs in which the article with more citations came from the journal with the higher impact factor. For example, in 100,000 comparisons of random pairs drawn from ASR and Social Forces (the two journals with the biggest JIF spread), 73% of the time the ASR article had more citations.
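If you want to try this at home, here is a minimal sketch of the brute-force idea in Python (my actual analysis was in Stata; see the footnote for the real data and code — the citation lists below are made-up placeholders, not real numbers):

```python
import random

def match_rate(cites_hi_jif, cites_lo_jif, n=100_000, seed=1):
    """Share of n random article pairs in which the article from the
    higher-JIF journal has strictly more citations (ties count against)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        if rng.choice(cites_hi_jif) > rng.choice(cites_lo_jif):
            wins += 1
    return wins / n

# Hypothetical per-article citation counts, one list per journal:
asr = [52, 31, 8, 77, 14, 25, 40]
social_forces = [12, 5, 19, 3, 22, 9, 16]
print(f"{match_rate(asr, social_forces):.0%}")  # how often the ASR article wins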

Is 73% a lot? It’s better than a coin toss, but I’d hate to have a promotion or hiring decision be influenced by an instrument that blunt. Here are the results of all 10.5 million comparisons I made (I love computers):

Outside of the ASR column, these are very bad; in the ASR column they’re pretty bad. For example, a random article from AJS has more citations than one from the 12 lower-JIF journals only 59% of the time. So if you’re reading CVs, and you see one candidate with a two-year-old AJS article and one with a two-year-old Work & Occupations article, what are you supposed to do? You could compare the actual citations the two articles have gotten, or you could assess their quality or impact some other way. You absolutely should not just skim the CV and assume the AJS article is or will be more influential based on the journal title alone; the failure probability of that assumption is too high.

On my table you can also see some of the anomalies that plague this system. See all that brown in the BJS and Sociology of Religion columns? That’s because both of those journals had sudden increases in their JIFs, so their more recent articles have more citations, while most of the comparisons in this table (like those in your memory, probably) are based on data from a few years before that. People who published in these journals three years ago are now getting an undeserved JIF bounce from having those titles on their CVs. (See the supplemental analysis below for more on this.)

Conclusion

Using JIF to decide which papers in different sociology journals are likely to be more impactful is a bad idea. Of course, lots of people know JIF is imperfect, but they can’t help themselves when evaluating CVs for hiring or promotion. When you show them evidence like this, they might ask, “But what is the alternative?” As Brito and Rodríguez-Navarro write, however: “if something were wrong, misleading, and inequitable the lack of an alternative is not a cause for continuing using it.” These error rates are unacceptably high.

In sociology, most people won’t own up to relying on impact factors, but in my experience most do judge research by where it’s published, all the time. If there is a very big difference in status between journals — enough to be associated with an appreciably different acceptance rate, for example — relying on that difference is not always wrong. But it’s a bad default.

In 2015 the biologist Michael Eisen suggested that tenured faculty should remove the journal titles from their CVs and websites, and just give readers the title of the paper and a link to it. He’s done it for his lab’s website, and I urge you to look at it just to experience the weightlessness of an academic space where for a moment overt prestige and status markers aren’t telling you what to think. I don’t know how many people have taken him up on it. I did it for my website, with the explanation, “I’ve left the titles off the journals here, to prevent biasing your evaluation of the work before you read it.” Whatever status I’ve lost I’ve made up for in virtue-signaling self-satisfaction — try it! (You can still get the titles from my CV, because I feel like that’s part of the record somehow.)

Finally, I hope sociologists will become more sociological in their evaluation of research — and of the systems that disseminate, categorize, rank, and profit from it.

Supplemental analysis

The analysis thus far is, in my view, a damning indictment of real-world reliance on the Journal Impact Factor for judging articles, and thus the researchers who produce them. However, it conflates two problems with the JIF. First is the statistical problem of imputing status from an aggregate to an individual, when the aggregate measure fails to capture variation that is very wide relative to the differences between groups. Second, and more specific to JIF, is the reliance on a very time-specific comparison: citations in year three to publications in years one and two. Someone could do (or maybe already has done) an analysis to determine the lag structure that maximizes JIF’s predictive power, but the conclusion from the first problem implies that’s a fool’s errand.

Anyway, in my sample the second problem is clearly relevant. My analysis relies strictly on the rank-ordering provided by the JIF to determine whether article comparisons succeed or fail. However, the sample I drew covers four years, 2016-2019, and counts citations to all of them through 2020. This difference in time window produces a rank ordering that differs substantially (the rank order correlation is .73), as you can see:

In particular, three journals (BJS, SOR, and SFO) moved more than five spots in the ranking. A glance at the results table above shows that these journals are dragging down the matching success rate. To pull these two problems apart, I repeated the analysis using the ranking produced within the sample itself.
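For anyone checking the arithmetic, that rank-order correlation is just Spearman’s rho between the two journal orderings. A minimal sketch in Python, with invented ranks standing in for the real ones (the actual orderings are in the posted data):

```python
from scipy.stats import spearmanr

# Each journal's rank (1 = highest) under the 2019 JIF, versus its rank
# by mean citations within the 2016-2019 sample. These ranks are invented
# for illustration only.
jif_rank    = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
sample_rank = [1, 3, 2, 9, 4, 5, 12, 6, 7, 8, 15, 10, 11, 14, 13]

rho, _ = spearmanr(jif_rank, sample_rank)
print(round(rho, 2))  # rank-order correlation between the two orderings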

The results are now much more straightforward. First, here is the same box plot but with the new ordering. Now you can see the ranking more clearly, though you still have to squint a little.

And in the match rate analysis, the result is now driven by differences in means and variances rather than by the mismatch between the JIF and sample-mean rankings:

This makes a more logical pattern. The most differentiated journal, ASR, has the highest success rate, and the journals closest together in the ranking fail the most. However, please don’t take this to mean that such a ranking is a legitimate way to judge articles. The overall average on this table is still only 58%, up only 4 points from the original table. Even with a ranking that conforms more closely to the sample, this confirms Brito and Rodríguez-Navarro’s conclusion: “[when rankings] of the journals are not so different … the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.”

These match rates are too low to be used responsibly in this way. These major sociology journals have citation rates that are too variable, and too similar at the mean, to serve as a way to judge articles. ASR stands apart, but only relative to the rest of the field. Even judging an ASR paper against its lower-ranked competitors produces a successful one-to-one ranking of papers just 72% of the time on average — and that only rises to 82% against the least-cited journal on the list.

The supplemental analysis is helpful for differentiating the multiple problems with JIF, but it does nothing to solve the problem of using journal citation rates to evaluate individual articles.


*The data and Stata code I used are up here: osf.io/zutws. This includes the lists of all articles in the 15 journals from 2016 to 2020 and their citation counts as of the other day (I excluded 2020 papers from the analysis, but they’re in the lists). I forgot to save the version of the 100k-case random file that I used to do this, so I guess that can never be perfectly replicated; but you can probably do it better anyway.

Sociologist, scientist? Toward transparency, accountability, and a sharing culture

With the help of the designer Brigid Barrett, I have a new website at philipncohen.com, and a redesigned blog to match (which you’re looking at now). We decided on the tagline, “Sociologist / Demographer” for the homepage photo. It’s true I am those two things, but I also like how they modify each other, a type of sociologist and a type of demographer. First some reflections, then a little data.

I shared the website on Twitter, and wrote this in a thread:

Having “sociologist” attached to your name is not going to signal scientific rigor to the public in the way that other discipline labels might (like, I think, “demographer”). A lot of sociologists, as shown by their behavior, are fine with that. Your individual behavior as a researcher can shape the impression you make, but it will not change the way the discipline is seen. Until the discipline — especially our associations but also our departments — adopts (and communicates) scientific practices, that’s how it will be. As an association, ASA has shown little interest in this, and seems unlikely to soon.

A substantial portion of sociologists rejects the norms of science. Others are afraid that adopting them will make their work “less than” within the discipline’s hierarchy. For those of us concerned about this, the practices of science are crucial: openness, transparency, reproducibility. We need to find ways at the sub-discipline level to adopt and communicate these values and build trust in our work. Building that trust may require getting certain publics to see beyond the word “sociologist,” rather than just see value in it. They will see our open practices, our shared data and code, our ability to admit mistakes, embrace uncertainty, and entertain alternative explanations.

There are other sources of trust. For example, taking positions on social issues or politics is also a way of building trust with like-minded audiences. These are important for some sociologists, and truly valuable, but they’re different from science. Maybe unreasonably, I want both. I want some people to give my work a hearing because I take antiracist or feminist positions in my public work, for example. And also because I practice science in my research, with the vulnerability and accountability that implies. Some people would say my public political pronouncements undermine not just my science, but the reputation of the discipline as a whole. I can’t prove they’re wrong. But I think the roles of citizen and scholar are ultimately compatible. Having a home in a discipline that embraced science and better communicated its value would help. A scientific brand, seal of approval, badges, etc., would help prevent my outspokenness from undermining my scientific reputation.

One reply I got, confirming my perception, was, “this pretence of natural science needs to be resisted not indulged.” Another wrote: “As a sociologist and an ethnographer ‘reproducibility’ will always be a very weak and mostly inapplicable criterion for my research. I’m not here to perform ‘science’ so the public will accept my work, I’m here to seek truth.” Lots of interesting responses. Several people shared this old review essay arguing sociology should be more like biology than like physics, in terms of epistemology. The phrase “runaway solipsism” was used.

I intended my tweets to focus on the open science practices with which I have been centrally concerned, centered on scholarly communication: openness, transparency, replicability. That is, I am less interested in the epistemological questions of meaning, truth, and solipsism, and more concerned with basic questions like, “How do we know researchers are doing good research, or even telling the truth?” And, “How can we improve our work so that it’s more conducive to advancing research overall?”

Whether or not sociology is science, we should have transparency, accountability, and a sharing culture in our work. This makes our work better, and may also increase our legitimacy with the public.

Where is ASA?

To that end, as an elected member of the American Sociological Association Committee on Publications, two years ago I proposed that the association adopt the Transparency and Openness Promotion Guidelines from the Center for Open Science, and start using their Open Science Badges, which recognize authors who provide open data, open materials, or use preregistration for their studies. It didn’t go over well. Some people are very concerned that rewarding openness with little badges in the table of contents, which presumably would go mostly to quantitative researchers, would be seen as penalizing qualitative researchers who can’t share their data, thus creating a hierarchy in the discipline.

So at the January 2019 meeting the committee killed that proposal so that an “ad hoc committee could be established to evaluate the broader issues related to open data for ASA journals.” Eight months later, after an ad hoc committee report, the publications committee voted to “form an ad hoc committee [a different one this time] to create a statement regarding conditions for sharing data and research materials in a context of ethical and inclusive production of knowledge,” and to “review the question about sharing data currently asked of all authors submitting manuscripts to incorporate some of the key points of the Committee on Publications discussion.” The following January (2020), the main committee was informed that the ad hoc committee had been formed, but hadn’t had time to do its work. Eight months later, the new ad hoc committee proposed a policy: ask authors who publish in ASA journals to declare whether their data and research materials are publicly available, and if not why not, with the answers to be appended in a footnote to each article. The minutes aren’t published yet, but I seem to remember us approving the proposal (minutes should appear in spring 2021). So, after two years, all articles are going to report whether or not materials are available. Someday. Not bad, for ASA!

To see how we’re doing in the meantime, and inspired by the Twitter exchange, I flipped through the last four issues of American Sociological Review, the flagship journal of the association, to assess the status of data and materials sharing. That is, 24 articles published in 2020. The papers and what I found are listed in the table below.

There were six qualitative papers and three mixed qualitative/quantitative papers. None of these provided access to research materials such as analysis code, interview guides, survey instruments, or transcripts — or an explanation for why those materials were not available. Among the 15 quantitative papers, four provided links to replication packages with the code required to replicate the analyses in the papers. Some of these used publicly available data, or included the data in the package, while the others would require additional steps to gain access to the data. The other 11 provided no data, code, or other materials.

That’s just from flipping through the papers, searching for “data,” “code,” and “available,” reading the acknowledgments and footnotes, and so on, so I may have missed something. (One issue, which maybe the new policy will improve, is that there is no standard place on the website or in the paper for such information to be conveyed.) Many of the papers include a link on the ASR website to “Supplemental Material,” but in all cases this was just a PDF with extra results or descriptions of methods, and did not include computer code or data. The four papers that had replication packages all linked to external sites, such as GitHub or Dataverse. Those services are great, but they are not within the journal’s control, so the journal can’t ensure the packages are correct or maintained over time.

I’m not singling out papers (which, by the way, seem excellent and very interesting — good journal!), just pointing out the pattern. Let’s just say that any of these authors could have provided at least some research materials in support of the paper, if they had been personally, normatively, or formally compelled to do so.

Why does that matter?

First, providing things like interview guides, coding schemes, or statistical code, is helpful to the next researcher who comes along. It makes the article more useful in the cumulative research enterprise. Second, it helps readers identify possible errors or alternative ways of doing the analysis, which would be useful both to the original authors and to subsequent researchers who want to take up the baton or do similar work. Third, research materials can help people determine if maybe, just maybe, and very rarely, the author is actually just bullshitting. I mean literally, what do we have besides your word as a researcher that anything you’re saying is true? Fourth, the existence of such materials, and the authors’ willingness to provide them, signals to all readers a higher level of accountability, a willingness to be questioned — as well as a commitment to the collective effort of the research community as a whole. And, because it’s such an important journal, that signal might boost the reputation for reliability and trustworthiness of the field overall.

There are vast resources, and voluminous debates, about what should be shared in the research process, by whom, for whom, and when — and I’m not going to litigate it all here. But there is a growing recognition in (almost) all quarters that simply providing the “final” text of a “publication” is no longer the state of the art in scholarly communication, outside of some very literary genres of scholarship. Sociology is really very far behind other social science disciplines on this. And, partly because of our disciplinary proximity to the scholars who raise objections like those I mentioned above, even those of us who do the kind of work where openness is most normative (like the papers below that included replication packages) can’t move forward with disciplinary policies to improve the situation. ASR is paradigmatic: several communities share this flagship journal, the policies of which serve some more than others.

What policies should ASA and its journals adopt to be less behind? Here are a few: Adopt TOP badges, like the American Psychological Association has; have their journals actually check the replication code to see that it produces the claimed results, like the American Economic Association does; publish registered reports (peer review before results known), like all experimental sciences are doing; post peer review reports, like Nature journals, PLOS, and many others do. Just a few ideas.

Change is hard. Even if we could agree on the direction of change. Brian Nosek, director of the Center for Open Science (COS), likes to share this pyramid, which illustrates their “strategy for culture and behavior change” toward transparency and reproducibility. The technology has improved so that the lowest two levels of the pyramid are pretty well taken care of. For example, you can easily put research materials on COS’s Open Science Framework (with versioning, linking to various cloud services, and collaboration tools), post your preprint on SocArXiv (which I direct), and share them with the world in a few moments, for free. Other services are similar. The next levels are harder, and that’s where we in sociology are currently stuck.


For some how-to reading, consider Transparent and Reproducible Social Science Research: How to Do Open Science, by Garret Christensen, Jeremy Freese, and Edward Miguel (or this Annual Review piece on replication specifically). For an introduction to Scholarly Communication in Sociology, try my report with that title. Please feel free to post other suggestions in the comments.


Four 2020 issues of American Sociological Review

Reference | Quant/Qual | Data type | Data available? | Code available? | Note
Faber, Jacob W. 2020. “We Built This: Consequences of New Deal Era Intervention in America’s Racial Geography.” American Sociological Review 85 (5): 739–75. | Quant | Census+ | No | No
Brown, Hana E. 2020. “Who Is an Indian Child? Institutional Context, Tribal Sovereignty, and Race-Making in Fragmented States.” American Sociological Review 85 (5): 776–805. | Qual | Archival | No | No
Daminger, Allison. 2020. “De-Gendered Processes, Gendered Outcomes: How Egalitarian Couples Make Sense of Non-Egalitarian Household Practices.” American Sociological Review 85 (5): 806–29. | Qual | Interviews | No | No
Mazrekaj, Deni, Kristof De Witte, and Sofie Cabus. 2020. “School Outcomes of Children Raised by Same-Sex Parents: Evidence from Administrative Panel Data.” American Sociological Review 85 (5): 830–56. | Quant | Administrative | No | Upon request | Info on how to obtain data provided.
Becker, Sascha O., Yuan Hsiao, Steven Pfaff, and Jared Rubin. 2020. “Multiplex Network Ties and the Spatial Diffusion of Radical Innovations: Martin Luther’s Leadership in the Early Reformation.” American Sociological Review 85 (5): 857–94. | Quant | Network | No | No | Says data is in the ASR online supplement but it’s not.
Smith, Chris M. 2020. “Exogenous Shocks, the Criminal Elite, and Increasing Gender Inequality in Chicago Organized Crime.” American Sociological Review 85 (5): 895–923. | Quant | Network | No | No | Code described.
Storer, Adam, Daniel Schneider, and Kristen Harknett. 2020. “What Explains Racial/Ethnic Inequality in Job Quality in the Service Sector?” American Sociological Review 85 (4): 537–72. | Quant | Survey | No | No
Ranganathan, Aruna, and Alan Benson. 2020. “A Numbers Game: Quantification of Work, Auto-Gamification, and Worker Productivity.” American Sociological Review 85 (4): 573–609. | Mixed | Mixed | No | No
Fong, Kelley. 2020. “Getting Eyes in the Home: Child Protective Services Investigations and State Surveillance of Family Life.” American Sociological Review 85 (4): 610–38. | Qual | Mixed | No | No
Musick, Kelly, Megan Doherty Bea, and Pilar Gonalons-Pons. 2020. “His and Her Earnings Following Parenthood in the United States, Germany, and the United Kingdom.” American Sociological Review 85 (4): 639–74. | Quant | Survey | Yes | Yes | Offsite replication package.
Burdick-Will, Julia, Jeffrey A. Grigg, Kiara Millay Nerenberg, and Faith Connolly. 2020. “Socially-Structured Mobility Networks and School Segregation Dynamics: The Role of Emergent Consideration Sets.” American Sociological Review 85 (4): 675–708. | Quant | Administrative | No | No
Schaefer, David R., and Derek A. Kreager. 2020. “New on the Block: Analyzing Network Selection Trajectories in a Prison Treatment Program.” American Sociological Review 85 (4): 709–37. | Quant | Network | No | No
Choi, Seongsoo, Inkwan Chung, and Richard Breen. 2020. “How Marriage Matters for the Intergenerational Mobility of Family Income: Heterogeneity by Gender, Life Course, and Birth Cohort.” American Sociological Review 85 (3): 353–80. | Quant | Survey | No | No
Hook, Jennifer L., and Eunjeong Paek. 2020. “National Family Policies and Mothers’ Employment: How Earnings Inequality Shapes Policy Effects across and within Countries.” American Sociological Review 85 (3): 381–416. | Quant | Survey+ | Yes | Yes | Offsite replication package.
Doering, Laura B., and Kristen McNeill. 2020. “Elaborating on the Abstract: Group Meaning-Making in a Colombian Microsavings Program.” American Sociological Review 85 (3): 417–50. | Mixed | Survey+ | No | No
Decoteau, Claire Laurier, and Meghan Daniel. 2020. “Scientific Hegemony and the Field of Autism.” American Sociological Review 85 (3): 451–76. | Qual | Archival | No | No | “Information on the coding schema is available upon request.”
Kiley, Kevin, and Stephen Vaisey. 2020. “Measuring Stability and Change in Personal Culture Using Panel Data.” American Sociological Review 85 (3): 477–506. | Quant | Survey | Yes | Yes | Offsite replication package.
DellaPosta, Daniel. 2020. “Pluralistic Collapse: The ‘Oil Spill’ Model of Mass Opinion Polarization.” American Sociological Review 85 (3): 507–36. | Quant | Survey | Yes | Yes | Offsite replication package.
Simmons, Michaela Christy. 2020. “Becoming Wards of the State: Race, Crime, and Childhood in the Struggle for Foster Care Integration, 1920s to 1960s.” American Sociological Review 85 (2): 199–222. | Qual | Archival | No | No
Calarco, Jessica McCrory. 2020. “Avoiding Us versus Them: How Schools’ Dependence on Privileged ‘Helicopter’ Parents Influences Enforcement of Rules.” American Sociological Review 85 (2): 223–46. | Qual | Ethnography w/ survey | No | No
Brewer, Alexandra, Melissa Osborne, Anna S. Mueller, Daniel M. O’Connor, Arjun Dayal, and Vineet M. Arora. 2020. “Who Gets the Benefit of the Doubt? Performance Evaluations, Medical Errors, and the Production of Gender Inequality in Emergency Medical Education.” American Sociological Review 85 (2): 247–70. | Mixed | Administrative | No | No
Kristal, Tali, Yinon Cohen, and Edo Navot. 2020. “Workplace Compensation Practices and the Rise in Benefit Inequality.” American Sociological Review 85 (2): 271–97. | Quant | Administrative | No | No
Abascal, Maria. 2020. “Contraction as a Response to Group Threat: Demographic Decline and Whites’ Classification of People Who Are Ambiguously White.” American Sociological Review 85 (2): 298–322. | Quant | Survey experiment | No | No | Preanalysis plan registered. Data embargoed.
Friedman, Sam, and Aaron Reeves. 2020. “From Aristocratic to Ordinary: Shifting Modes of Elite Distinction.” American Sociological Review 85 (2): 323–50. | Quant | Archival | No | No

New COVID-19 and Health Disparities lecture

I recorded a new version of the lecture I created last spring: COVID-19 and Health Disparities. It defines health disparities, introduces the theory of fundamental causes, and then describes COVID-19 disparities by race/ethnicity and age with reference to education and occupational inequality. For intro sociology students.

Using data from the Bureau of Labor Statistics (inspired by this piece from Justin Fox), I showed the percentage of workers working at home according to the median wage in their occupations, illustrating how people in lower-paid occupations aren’t working at home, while professionals and managers are:

And, using age- and race/ethnic-specific mortality rates from CDC, with population denominators from the 2018 ACS (I don’t know why I can’t find the denominators CDC uses), I made this:

The greatest race/ethnic disparities are in the working ages, which suggests they are driven at least partly by occupational inequality.

The lecture is 23 minutes; slides with references and links are here.

Demographic facts your students need to know right now (with COVID-19 addendum)

PN Cohen photo / Flickr CC: https://flic.kr/p/2jw6stF

Here’s the 2020 update of a series I started in 2013. This year, after the basic facts, I’ll add some pandemic facts below.

Is it true that “facts are useless in an emergency“? I guess we’ll find out this year. Knowing basic demographic facts, and how to do arithmetic, lets us ballpark the claims we are exposed to all the time. The idea is to get your radar tuned to identify falsehoods as efficiently as possible, to prevent them spreading and contaminating reality. Although I grew up on “facts are lazy and facts are late,” I actually still believe in this mission, I just shake my head slowly while I ramble on about it (and tell the same stories over and over).

It started a few years ago with the idea that the undergraduate students in my class should know the size of the US population. Not to exaggerate the problem, but too many of them don’t, at least when they reach my sophomore level family sociology class. If you don’t know that fact, how can you interpret statements like, “The U.S. economy lost a record 20.5 million jobs in April“?
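To see how that works, here’s a back-of-the-envelope sketch (in Python, though a napkin works too). The labor force figure is a rough derivation from the facts in the table below, not an official number:

```python
# Ballparking "20.5 million jobs lost" against memorized facts.
us_population = 330_000_000   # from the list below
share_16_plus = 0.80          # rough: children under 18 are 22% of the population
participation = 0.61          # labor force participation rate, age 16+

labor_force = us_population * share_16_plus * participation  # ~160 million
jobs_lost = 20_500_000

print(f"{jobs_lost / us_population:.0%} of all U.S. residents")  # ~6%
print(f"{jobs_lost / labor_force:.0%} of the labor force")       # ~13%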

Everyone likes a number that appears to support their perspective. But that’s no way to run (or change) a society. The trick is to know the facts before you create or evaluate an argument, and for that you need some foundational demographic knowledge. This list of facts you should know is just a prompt to get started in that direction.

These are demographic facts you need just to get through the day without being grossly misled or misinformed — or, in the case of journalists or teachers or social scientists, not to allow your audience to be grossly misled or misinformed. Not trivia that makes a point or statistics that are shocking, but the non-sensational information you need to make sense of those things when other people use them. And it’s really a ballpark requirement (when I test the undergraduates, I give them credit if they are within 20% of the US population — that’s anywhere between 264 million and 396 million!).

This is only a few dozen facts, not an exhaustive list, but they belong on any top-100 list. Feel free to add your facts in the comments (as per policy, first-time commenters are moderated). They are rounded to reasonable units for easy memorization. All refer to the US unless otherwise noted. Most of the links will take you to the latest data:

Fact | Number | Source
World Population | 7.7 billion | 1
U.S. Population | 330 million | 1
Children under 18 as share of pop. | 22% | 2
Adults 65+ as share of pop. | 17% | 2
Official unemployment rate (July 2020) | 10% | 3
Unemployment rate range, 1970-2018 | 3.9% – 15% | 3
Labor force participation rate, age 16+ | 61% | 9
Labor force participation rate range, 1970-2017 | 60% – 67% | 9
Non-Hispanic Whites as share of pop. | 60% | 2
Blacks as share of pop. | 13% | 2
Hispanics as share of pop. | 19% | 2
Asians / Pacific Islanders as share of pop. | 6% | 2
American Indians as share of pop. | 1% | 2
Immigrants as share of pop. | 14% | 2
Adults age 25+ with BA or higher | 32% | 2
Median household income | $60,300 | 2
Total poverty rate | 12% | 8
Child poverty rate | 16% | 8
Poverty rate age 65+ | 10% | 8
Most populous country, China | 1.4 billion | 5
2nd most populous country, India | 1.3 billion | 5
3rd most populous country, USA | 327 million | 5
4th most populous country, Indonesia | 261 million | 5
5th most populous country, Brazil | 207 million | 5
U.S. male life expectancy at birth | 76 | 6
U.S. female life expectancy at birth | 81 | 6
Life expectancy range across countries | 51 – 85 | 7
World total fertility rate | 2.4 | 10
U.S. total fertility rate | 1.7 | 10
Total fertility rate range across countries | 1.0 – 6.9 | 10

Sources

1. U.S. Census Bureau Population Clock

2. U.S. Census Bureau quick facts

3. Bureau of Labor Statistics

5. CIA World Factbook

6. National Center for Health Statistics

7. CIA World Factbook

8. U.S. Census Bureau poverty tables

9. Bureau of Labor Statistics

10. World Bank


COVID-19 Addendum: 21 more facts

The pandemic is changing everything. A lot of the numbers above may look different next year. Here are 21 basic pandemic facts to keep in mind — again, the point is to get a sense of scale, to inform your consumption of the daily flow of information (and disinformation). (A quick check of how the rates follow from the counts appears after the list.) These are changing, too, but they are current as of August 31, 2020.

Global confirmed COVID-19 cases: 25 million

Confirmed US COVID-19 cases: 6 million

Second most COVID-19 cases: Brazil, 3.9 million

Third most COVID-19 cases: India, 3.6 million

Global confirmed COVID-19 deaths: 850,000

Confirmed US COVID-19 deaths: 183,000

Second most COVID-19 deaths: Brazil, 121,000

Third most COVID-19 deaths: India, 65,000

Percent of U.S. COVID patients who have died: 3%

COVID-19 deaths per 100,000 Americans: 50

COVID-19 deaths per 100,000 non-Hispanic Whites: 43

COVID-19 deaths per 100,000 Blacks: 81

COVID-19 deaths per 100,000 Hispanics: 55

COVID-19 deaths per 100,000 Americans over age 65: 400

Annual deaths in the U.S. (these are for 2017): Total, 2.8 million

Leading cause of death: Heart disease, 650,000

Second leading cause: Cancer, 600,000

Third leading cause: Accidents, 160,000

Deaths from flu and pneumonia: 56,000

Deaths from suicide: 47,000

Deaths from homicide: 20,000
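As promised, here’s a quick sketch showing how the rate facts follow from the count facts above (the per-100,000 figure comes out near 55 with these inputs, which the list rounds down to 50):

```python
# Deriving the rate facts from the count facts above.
us_deaths = 183_000
us_cases = 6_000_000
us_population = 330_000_000

print(f"{us_deaths / us_cases:.0%} of confirmed U.S. patients died")    # ~3%
print(f"{us_deaths / us_population * 100_000:.0f} deaths per 100,000")  # ~55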


Sources

COVID-19 country data: Johns Hopkins University Coronavirus Resource Center

U.S. cause of death data: Centers for Disease Control

U.S. age and race/ethnicity COVID-19 death data: Centers for Disease Control


Interview with Judith Stacey on the foundations of feminism in academia


I had the privilege of interviewing Judith Stacey for the Annex Sociology Podcast.

Joseph Cohen asked me if I had any ideas for guest hosting the podcast, and this had been on my mind for a while — the cohort of women who brought feminism into academia in the 1960s and 1970s. In the ongoing conversations about the relationship between activism and sociology among early career scholars, we can learn a lot from this earlier generation. I have a little list of dream interviews in this vein — or something like an oral history project — and the podcast gave me a chance to explore it.

For my generation of gender researchers (whether we recognize it or not), the connections that she and others made between patriarchy and family structure were foundational. Most people today don’t realize how important research on China was to that development (see also Kay Ann Johnson and Ruth Sidel). In the U.S., this fed into the battles over welfare, welfare reform, and intersectionality. And in academia, it fed into the formation of the Council on Contemporary Families, of which Stacey was a co-founder (and with which I have worked as well).

Stacey already had a background teaching history in high school, and a master’s degree in Black history, when she decided to switch to a PhD program in sociology, where she immediately took on the world-historical question of patriarchy, feminism, and socialism, and traveled to China in the late 1970s. I said to her (lightly edited):

I want to just pause a little bit on this, just to — you know, one of the things I want to bring us around to is the discipline today, or feminism and academia today — but I just want to pause and think about you as a graduate student at a time when only about one-third of the people getting PhDs in sociology in the seventies were women (it’s a lot more now, over 60 percent). And the idea of, “I’m going to travel all the way around the world to a country where I can’t speak the language, that’s going through a tremendous revolutionary period” — I mean, you use the word ‘chutzpah’ to describe this, but I think it’s a certain kind of courage.

On the question of feminism and sociology, I asked about her work in the 1980s:

So do you feel like, from that period and the momentum that you and your cohort brought into academia from the energy outside — when we look at the discipline of sociology now, and the feminist pole we have established within it — has the core of the discipline been changed, or has it just opened up to allow sort of a feminist section?

Her answers on this, and everything else, are super interesting and inspiring.

Here is some of Stacey’s writing, which I’ve been reading (and teaching) for about 30 years, that we reference in the interview: