Rural COVID-19 paper peer reviewed. OK?

Twelve days ago I posted my paper on the COVID-19 epidemic in rural US counties. I put it on the blog, and on the SocArXiv paper server. At this writing the blog post has been shared on Facebook 69 times, the paper has been downloaded 149 times, and a handful of people have tweeted about it. No one has told me it’s wrong yet, but no one has formally endorsed it, either.

Until now, that is. The paper, which I then submitted to the European Journal of Environment and Public Health, has now been peer reviewed and accepted. I’ve updated the SocArXiv version to the journal page proofs. Satisfied?

It’s a good question. We’ll come back to it.

Preprints

The other day (I think, not good at counting days anymore) a group of scholars published — or should I say posted — a paper titled, “Preprinting a pandemic: the role of preprints in the COVID-19 pandemic,” which reported that there have already been 16,000 scientific articles published about COVID-19, of which 6,000 were posted on preprint servers. That is, they weren’t peer-reviewed before being shared with the research community and the public. Some of these preprints are great and important, some are wrong and terrible, some are pretty rough, and some just aren’t important. This figure from the paper shows the preprint explosion:

[Figure from the paper: the growth of COVID-19 preprints across servers]

All this rapid scientific response to a worldwide crisis is extremely heartening. You can see the little sliver that SocArXiv (which I direct) represents in all that — about 100 papers so far (this link takes you to a search for the covid-19 tag), on subjects ranging from political attitudes to mortality rates to traffic patterns, from many countries around the world. I’m thrilled to be contributing to that, and really enjoy my shifts on the moderation desk these days.

On the other hand some bad papers have gotten out there. Most notoriously, an erroneous paper comparing COVID-19 to HIV stoked conspiracy theories that the virus was deliberately created by evil scientists. It was quickly “withdrawn,” meaning no longer endorsed by the authors, but it remains available to read. More subtly, a study (by more famous researchers) done in Santa Clara County, California, claimed to find a very high rate of infection in the general population, implying COVID-19 has a very low death rate (good news!), but it was riddled with design and execution errors (oh well), and accusations of bias and corruption. And some others.

Less remarked upon has been the widespread reporting by major news organizations on preprints that aren’t as controversial but have become part of the knowledge base of the crisis. For example, the New York Times ran a report on this preprint on page 1, under the headline, “Lockdown Delays Cost at Least 36,000 Lives, Data Show” (which looks reasonable in my opinion, although the interpretation is debatable), and the Washington Post led with, “U.S. Deaths Soared in Early Weeks of Pandemic, Far Exceeding Number Attributed to Covid-19,” based on this preprint. These media organizations offer a kind of endorsement, too. How could you not find this credible?


Peer review

To help sort out the veracity or truthiness of rapid publications, the administrators of the bioRxiv and medRxiv preprint servers (who are working together) have added this disclaimer in red to the top of their pages:

Caution: Preprints are preliminary reports of work that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

That’s reasonable. You don’t want people jumping the gun on clinical decisions, or news reports. Unless they should, of course. And, on the other hand, lots of peer reviewed research is wrong, too. I’m not compiling examples of this, but you can always consult the Retraction Watch database, which, for example, lists 130 papers published in Elsevier journals in 2019 that have been retracted for reasons ranging from plagiarism to “fake peer review” to forged authorship to simple errors. The database lists a few peer-reviewed COVID-19 papers that have already been retracted as well.

This comparison suggests that the standard of truthiness cannot be down to the simple dichotomy of peer reviewed or not. We need signals, but they don’t have to be that crude. In real life, we use a variety of signals for credibility that help determine how much to trust a piece of research. These include:

  • The reputation of the authors (their degrees, awards, twitter following, media presence)
  • The institutions that employ them (everyone loves to refer to these when they are fancy universities reporting results they favor, e.g., “the Columbia study showed…”)
  • Who published it (a journal, an association, a book publisher), which implies a whole secondary layer of endorsements (e.g., the editor of the journal, the assumed expertise of the reviewers, the prestige or impact factor of the journal as a whole, etc.)
  • Perceived conflicts of interest among the authors or publishers
  • The transparency of the research (e.g., are the data and materials available for inspection and replication)
  • Informal endorsements, from, e.g., people we respect on social media, or people using the Plaudit button (which is great and you should definitely use if you’re a researcher)
  • And finally, of course, our own assessment of the quality of the work, if it’s something we believe ourselves qualified to assess

As with the debate over the SAT/GRE for admissions, the quiet indicators sometimes do a lot of the work. Call something a “Harvard study” or a “New York Times report,” and people don’t often pry into the details of the peer review process.

Analogy: People who want to eat only kosher food need something to go on in daily life, and so they have erected a set of institutional devices that deliver such a seal (in fact, there are competing seal brands, but they all offer the same service: a yes/no endorsement by an organization one decides to trust). The seals cost money, which is added to the cost of the food; if people like it, they’re willing to pay. But, as God would presumably tell you, the seal should not always substitute for your own good judgment because even rabbis or honest food producers can make mistakes. And in the absence of a good kosher inspection to rely on altogether, you still have to eat — you just have to reason things through to the best of your ability. (In a pinch, maybe follow the guy with the big hat and see what he eats.) Finally, crucially for the analogy, anyone who tells you to ignore the evidence before you and always trust the authority that’s selling the dichotomous indicator is probably serving their own interests at least as much as they’re serving yours.

In the case of peer review, giant corporations, major institutions, and millions of careers depend on people believing that peer review is what you need to decide what to trust. And they also happen to be selling peer review services.

My COVID-19 paper

So should you trust my paper? Looking back at our list, you can see that I have degrees and some minor awards, some previous publications, some twitter followers, and some journalists who trust me. I work at a public research university that has its own reputation to protect. I have no apparent way of profiting from you believing one thing or another about COVID-19 in rural areas (I declared no conflicts of interest on the SocArXiv submission form). I made my data and code available (even if no one checks it, the fact that it’s there should increase your confidence). And of course you can read it.

And then I submitted it to the European Journal of Environment and Public Health, which, after peer review, endorsed its quality and agreed to publish it. The journal is published by Veritas Publications in the UK with the support of Tsinghua University in China. It’s an open access journal that has been publishing for only three years. It’s not indexed by Web of Science or listed in the Directory of Open Access Journals. It is, in short, a low-status journal. On the plus side, it has an editorial board of real researchers, albeit mostly at lower status institutions. It publishes real papers, and (at least for now) it doesn’t charge authors any publication fee, it does a little peer review, and it is fast. My paper was accepted in four days with essentially no revisions, after one reviewer read it (based on the summary, I believe they did read it). It’s open access, and I kept my copyright. I chose it partly because one of the papers I found on Google Scholar during my literature search was published there and it seemed OK.

So, now it’s peer reviewed.

Here’s a lesson: when you set a dichotomous standard like peer-reviewed yes/no and tell the public to trust it, you create the incentive for people to do the least they can to just barely get over that bar. This is why we have a giant industry of tens of thousands of academic journals producing products all branded as peer reviewed. Half a century ago, some academics declared themselves the gatekeepers of quality, and called their system peer review. To protect the authority of their expertise (and probably because they believed they knew best), they insisted it was the standard that mattered. But they couldn’t prevent other people from doing it, too. And so we have a constant struggle over what gets to be counted, and an effort to disqualify some journals with labels like “predatory,” even though it’s the billion-dollar corporations at the top of this system that are preying on us the most (along with lots of smaller scam purveyors).

In the case of my paper, I wouldn’t tell you to trust it much more because it’s in EJEPH, although I don’t think the journal is a scam. It’s just one indicator. But I can say it’s peer reviewed now and you can’t stop me.

Aside on service and reciprocity: Immediately after I submitted my paper, the EJEPH editors sent me a paper to review, which I respect. I declined because I wasn’t qualified, and then they sent me another. This assignment I accepted. The paper was definitely outside my areas of expertise, but it was a small study quite transparently done, in Nigeria. I was able to verify important details — like the relevance of the question asked (from cited literature), the nature of the study site (from Google maps and directories), the standards of measurement used (from other studies), the type of instruments used (widely available), and the statistical analysis. I suggested some improvements to the contextualization of the write-up and recommended publication. I see no reason why this paper shouldn’t be published with the peer review seal of approval. If it turns out to be important, great. If not, fine. Like my paper, honestly. I have to say, it was a refreshing peer review experience on both ends.

Tone policing: Am I allowed to put Regnerus, Wilcox, and Hitler in the same headline?

Sir, are you aware you were using a caustic tone back there? (photo: Thomas Hawk)

Nicholas Wolfinger reviewed my book Enduring Bonds for Social Forces (paywalled [why paywall book reviews?]; bootlegged). It would be unseemly of me to argue with a two-page book review instead of letting my life’s work stand on its own, so here goes — but just on one point: tone policing.

This is the opening of the review:

Philip Cohen has a lot of beefs. Hanna Rosen is an “antifeminist” (p. 134) prone to “errors and distortions” (p. 146), and a “record of misstating facts in the service of inaccurate conclusions” (p. 185); W. Bradford Wilcox offers an “interpretation not just wrong but the opposite of right” (p. 76) and elsewhere gives a “racist” interview (p. 175); Ron Haskins, a “curmudgeon” (p. 175), presents a meme that’s “stupid and evil” (p. 47); David Blankenhorn is the author of a “deeply ridiculous” article (p. 80); Christina Hoff Sommers speaks in “[a] voice [that] drips with contempt” (p. 200) and is deemed to be an “antifeminist” (p. 155), even though she’s later identified as a feminist (p. 197).*

He adds:

Also making the list: Paula England, for her “disappointingly mild” review of Cohen’s Public Enemy Number One, the “obtuse, semi-coherent” (p. 106) and “simply unethical” (p. 91) Mark Regnerus. Indeed, 29 of the 209 pages of Cohen’s book are spent excoriating Regnerus for two different studies.

This makes up his argument that, “Cohen writes so tendentiously that the useful bits get carried away in a torrent of ad hominem asperity,” and his conclusion, “you catch more flies with honey than with vinegar.”

Over my many years as a caustic person, I have heard this a lot, mostly from academics, bless their hearts. Which is cool, that’s my career choice and it would be unseemly to complain about it now, so here goes.

Listing the bad words I used doesn’t mean anything. And telling me I spent 29 pages on Regnerus (Wolfinger doesn’t mention that his frequent co-author, Brad Wilcox, is featured heavily in those 29 pages, or even disclose that Wilcox is his co-author) is not a meaningful critique unless you explain why these people don’t deserve it. I’ve heard, for example, that people have written very good whole books about specific individuals and the bad things they’ve done — including, off the top of my head, Hitler. The meaningful question is, am I wrong in those assessments, and if I am, why? In other words, you catch more flies by telling the reader why it would not be unacceptably harsh to write a whole book about Hitler but the same cannot be said about 29 pages on Regnerus and Wilcox. Or why it’s wrong to criticize Rosin, Haskins, Blankenhorn, Sommers and (lol) England in harsh terms.

If you want to enjoy a world where entire reviews are written about the use of harsh words, reviews that don’t even give a hint — not even a mention — as to the content of the issues and disputes that prompted those harsh words, then I can only suggest a career in academia.

Ironic aside

I tweeted a link to Wolfinger’s review, even though it is completely negative, because I’m scrupulous and fair-minded.


This led him to go on a multitweet journey, complaining that “he took words like ‘formidable’ out of context to suggest a much more positive review,” and exploring my motivations — responding to someone who said, “That was clearly a joke” with, “You see a joke, I see mendacity,” and concluding, “‘Just a joke’ is a weak, all-purpose way to cover up a fuck up like getting caught twisting the evidence.”

I hate to bring up Hitler again (not really), but the last time someone spent so much time pretending not to understand I was joking, it was actual nazis, quoting a tweet where I joked that Jews were devoted to “eradicating whiteness and undermining its civilizations” (not linking, but you can google it). This led to a lot of online grief and some death threats, including posting my address on Reddit. So it irritated me.

The online nazi mob technique is to pretend things Jews say aren’t jokes, then pretend they themselves are joking when they talk about genocide. I’m sure many Jewish readers will recognize that failure to understand sarcastic humor is actually a common trait among rank-and-file anti-Semites — the people who have a hard time differentiating “New York” from “Jewish” — something that leading anti-Semites are very adept at manipulating. So that resonated with me.

(The above is labeled “aside” to make it boringly over-clear that I’m not saying Wolfinger is anti-Semitic.)


* Correction: Sommers is not “identified as a feminist” on p. 197, I just reported the name of her video series, which is, absurdly, The Factual Feminist.

 

Wilcox plagiarism denial and ethics review

Recently I made the serious accusation that Brad Wilcox and his colleagues plagiarized me in a New York Times op-ed. After the blog post, I sent a letter to the Times and got no response. And until now Wilcox had not responded. But now thanks to an errant group email I had the chance to poke him, and he responded, in relevant part:

You missed the point of the NYT op-ed, which was to stress the intriguing J-Curve in women’s marital happiness when you look at religion and gender ideology. We also thought it interesting to note there is a rather similar J-Curve in women’s marital happiness in the GSS when it comes to political ideology, although the political ideology story was somewhat closer to a U-Curve in the GSS. Our NYT argument was not inspired by you, and our extension of the argument to a widely used dataset is not plagiarism.

Most of that comment is irrelevant to the question of whether the figure they published was ripped off from my blog; the only argument he makes is to underline the word not. To help readers judge for themselves, here is the sequence again, maybe presented more clearly than I did it last time.

Wilcox and Nicholas Wolfinger published this, claiming Republicans have happier marriages:

[Figure: marital quality by party identification, from Wilcox and Wolfinger]

I responded by showing that when you break out the categories more, you get a U-shape instead:

[Chart: marital happiness by detailed party identification, showing the U-shape]

Subsequently, I repeated the analysis, with newer data, using political views instead of party identification (the U-shape on the right):

[Chart: marital happiness by party identification and by political views, newer data]

This is the scheme, and almost exactly the results, that Wilcox and colleagues then published in the NYT, now including one more year of data:

[Figure: the chart published with the NYT op-ed]

The data used, the control variables, and the results, are almost identical to analysis I did in response to their work. His response is, “Our NYT argument was not inspired by you.” So that’s that.

Ethics aside

Of course, only he knows what’s in his heart. But the premise of his plagiarism denial is an appeal to trust. So, do you trust him?

Lies

There is a long history here, and it’s hard to know where to start if you’re just joining. Wilcox has been a liberal villain since he took over the National Marriage Project and then organized what became (unfortunately) known as the Regnerus study (see below), and a conservative darling since the top administration at the University of Virginia overturned the recommendation of his department and dean to grant him tenure.

So here are some highlights, setting aside questions of research quality and sticking to ethical issues.

Wilcox led the coalition that raised $785,000, from several foundations, used to generate the paper published under Mark Regnerus’s name, intended to sway the courts against marriage equality. He helped design the study, and led the development of the media plan, and arranged for the paper to be submitted to Social Science Research, and then arranged for himself to be one of the anonymous peer reviewers. To do this, he lied to the editor, by omission, about his contribution to the study — saying only that he “served on the advisory board.”

And then when the scandal blew up he lied about his role at the Witherspoon Institute, which provided most of the funding, saying he “never served as an officer or a staffer at the Witherspoon Institute, and I never had the authority to make funding or programmatic decisions at the Institute,” and that he was “not acting in an official Witherspoon capacity.” He was in fact the director of the institute’s Program on Family, Marriage, and Democracy, which funded the study, and the email record showed him approving budget requests and plans. To protect his reputation and cover up the lie, that position (which he described as “honorific”) has been scrubbed from his CV and the Witherspoon website. (In the emails uncovered later, the president of Witherspoon, Luis Tellez wrote, “we will include some money for you [Regnerus] and Brad on account of the time and effort you will be devoting to this,” but the amount he may have received has not been revealed — the grants aren’t on his CV.)

This is covered under the Regnerus and Wilcox tags on the blog, and told in gripping fashion in a chapter of my book, Enduring Bonds.

You might hold it against him that he organized a conspiracy to fight marriage equality, but even if you think that’s just partisan nitpickery, the fact that the research was the result of a “coalition” (their word) that included a network of right-wing activists, and that their roles were not disclosed in the publication, is facially an ethical violation. And the fact that it involved a series of public and private lies, which he has never acknowledged, goes to the issue of trust in every subsequent case.

Money

Here I can’t say what ethical rule Wilcox may have broken. Academia is a game that runs on trust, and in his financial dealings Wilcox has not been forthcoming. There is money flowing through his work, but the source and purpose of that money are not disclosed when the work is published. For example, in the NYT piece Wilcox is identified only as a professor at the University of Virginia, even though the research reported there was published by the Institute for Family Studies. His faculty position, and tenure, are signals of his trustworthiness, which he uses to bolster the reputation of his partisan efforts.

The Institute for Family Studies is a non-profit organization that Wilcox created in 2009, originally called the Ridge Foundation. For the first four years the tax filings list him as the president, then director. Since 2013, when it changed its name to IFS, he has been listed as a senior fellow. Through 2017, the organization paid him more than $330,000, and he was the highest paid person. The funders are right-wing foundations.

Most academics want people to know about their grants and the support for their research. On his CV at the University of Virginia, however, Wilcox does not list the Institute for Family Studies in the “Employment” section, or include it among the grants he has received, even though it is an organization he created and built up, so far grossing almost $3 million in total revenue. It is only mentioned in a section titled “Education Honors and Awards,” where he lists himself as a “Senior Fellow, Institute for Family Studies.” An education honor and award he gave himself, apparently.

He also doesn’t list his position on the Marco Rubio campaign’s Marriage & Family Advisory Board, where he was among those who “understand” that “Windsor and Obergefell are only the most recent example of our failure as a society to understand what marriage is and why it matters.”

Wilcox uses his academic position to support and legitimize his partisan efforts, and his partisan operation to produce work under his academic title (of course IFS says it’s nonpartisan but that’s meaningless). If he kept them really separate that would be one thing — we don’t need to know what church academics belong to or what campaigns they support, except as required by law — but if he’s going to blend them together I think he incurs an ethical disclosure obligation.

Wilcox isn’t the only person to scrub Witherspoon from his academic record — which is funny because the Witherspoon Institute is housed at Princeton University (where Wilcox got his PhD). And the fact of removing Witherspoon from a CV was used to discredit a different anti-marriage-equality academic expert, Joseph Price at Brigham Young, in the Michigan trial that led to the Obergefell decision, because it made it seem he was trying to hide his political motivations in testifying against marriage equality. Here is the exchange:

[Screenshot: courtroom exchange about the removal of Witherspoon from Price’s CV]

Court proceedings are useful for bringing out certain principles. In this case I think they help illustrate my point: If Brad Wilcox wants people to trust his motivations, he should disclose the sources of support for his work.

Naomi Wolf and sharing our lanes

Bruce Stokes / https://flic.kr/p/dMG983

The other day, in response to the Naomi Wolf situation, I tweeted at Heather Souvaine Horn, an editor at the New Republic:

After which she invited me to submit an essay to the site. It’s now been published as: Learn the Right Lessons from Naomi Wolf’s Book Blunder: Expertise matters. But lane-policing is counterproductive.

I spent my semester as an MIT / CREOS Visiting Scholar and it was excellent

Cambridge in the fall.

As a faculty sociologist who works in the area of family demography and inequality, my interest in open scholarship falls into the category of “service” among my academic obligations, essentially unrecognized and unremunerated by my employer, and competing with research and teaching responsibilities for my time. In that capacity I founded SocArXiv in 2016 (supported by several small grants) and serve as its director; organized two conferences at the University of Maryland under the title O3S: Open Scholarship for the Social Sciences; and was elected to the Committee on Publications of the American Sociological Association. While continuing that work during a sabbatical leave, I was extremely fortunate to land a half-time position as visiting scholar at the MIT Libraries in the fall 2018, which helped me integrate that service agenda with an emerging research agenda around scholarly communication.

The position was sponsored by a group of libraries organized by the Association of Research Libraries — MIT, UCLA, the University of Arizona, Ohio State University, and the University of Pittsburgh — and hosted by the new Center for Research on Equitable and Open Scholarship (CREOS) at MIT. My principal collaborator has been Micah Altman, the director of research at CREOS.

The semester was framed by the MIT Grand Challenges Summit in the spring, which I attended, and the report that emerged from that meeting: A Grand Challenges-Based Research Agenda for Scholarly Communication and Information Science, on which I was a collaborator. The report, published in December, describes a vision for a more inclusive, open, equitable, and sustainable future for scholarship; it also characterizes the barriers to this future, and identifies the research needed to bring it to fruition.

Sociology and SocArXiv

Furthering my commitments to sociology and SocArXiv, I continued to work on the service. SocArXiv is growing, with increased participation in sociology and other social sciences. In the fall the Center for Open Science, our host, opened discussions with its paper-serving communities about weaning the system off its core foundation financial support and using contributions from each service to make it sustainable (thus far we have not paid COS for its development and hosting). This was an expected challenge, which will require some creative and difficult work in the coming months.

Finally, at the start of the semester I noted that most sociologists — even those interested in open access issues — were not familiar with current patterns, trends, and debates in the scholarly communications ecosystem. This has hampered our efforts to build SocArXiv, as well as our ability to press our associations and institutions for policy changes in the direction of openness, equity, and sustainability. In response to this need, especially among graduate students and junior scholars, I drafted a scholarly communication primer for sociology, which reviews major scholarly communication media, policies, economic actors, and recent innovations. I posted a long draft (~13,000 words) for comment in January, and received a very positive response. It appears that a number of programs will incorporate the revised primer into their training, and many individuals are already reading and sharing it with their networks.

Peer review

One of the chief barriers identified in the Grand Challenges report is the lack of systematic theory and empirical evidence to design and guide legal, economic, policy and organizational interventions in scholarly publishing and in the knowledge ecosystem generally. As social scientists, Micah and I drew on this insight, and used the case of peer review in sociology as an entry point. We presented our formative analysis of this case in the CREOS Research Talk, “Can We Fix Peer Review?” Here is the summary of this talk:

Contemporary journal peer review is beset by a range of problems. These include (a) long delay times to publication, during which time research is inaccessible; (b) weak incentives to conduct reviews, resulting in high refusal rates as the pace of journal publication increases; (c) quality control problems that produce both errors of commission (accepting erroneous work) and omission (passing over important work, especially null findings); (d) unknown levels of bias, affecting both who is asked to perform peer review and how reviewers treat authors; and (e) opacity in the process that impedes error correction and more systematic learning, and enables conflicts of interest to pass undetected. Proposed alternative practices attempt to address these concerns — especially open peer review, and post-publication peer review. However, systemic solutions will require revisiting the functions of peer review in its institutional context.

The full slides, with video of the talk (minus the first few minutes), are embedded below:

Research design and intervention

Mapping out the various interventions and proposed alternatives in the peer review space raised a number of questions about how to design and evaluate interventions in a complex system with interdependent parts and actors embedded in different institutional logics — for example, university researchers (some working under state policy), research libraries, for-profit publishers, and academic societies. Working with Jessica Polka, Director of ASAPbio, we are expanding this analysis to consider a range of innovations in open science. This analysis highlights the need for systematic research design that can guide the design of initiatives aimed at altering the scholarly knowledge ecosystem.

Applying the ecosystem approach in the Grand Challenges report, we consider large-scale interventions in public health and safety, and their unintended consequences, to build a model for designing projects with the intention of identifying and assessing such consequences across the system. Addressing problems at scale may have such unintended effects as leading vulnerable populations to adapt to new technology in harmful ways (mosquito nets used for fishing); providing new opportunities for harmful competitors (the pesticide treadmill); the displacement of private actors by public goods (dentists driven away by public water fluoridation); and risk compensation by those who receive public protection (anti-lock brakes and riskier driving, vaccinations). Our forthcoming white paper will address such risks in light of recent open science interventions: PLOS One, bioRxiv and preprints generally, and open peer review, among others. We combine research design methods for field experiments in social science, outcomes identified in the grand challenge report, and the ecosystem theory based on an open science lifecycle model.

ARL/SSRC meeting and Next Steps

Coming out of discussions at the first O3S meeting, in December the Association of Research Libraries and the Social Science Research Council convened a meeting on open scholarship in the social sciences, which included leaders from scholarly societies, university libraries, researchers advocating for open science, funders, and staff from ARL, SSRC, and the Coalition for Networked Information. I was fortunate to participate on the planning committee for the meeting, and in that capacity I conducted a series of short video interviews with individual stakeholders from the participating organizations to help expose us all to the range of values, objectives, and concerns we bring to the questions we collectively face in the movement toward open scholarship.

For our own work on peer review, which we presented at the meeting, I was especially drawn to the interviewees’ comments on transparency, incentives, and open infrastructure. In particular, MIT Libraries Director Chris Bourg challenged social scientists to recognize what their own research implies for the peer review system:

Brian Nosek, director of the Center for Open Science, stressed the need to consider incentives for openness in our interventions:

And Kathleen Fitzpatrick, project director for Humanities Commons, described the necessity of open infrastructure that is flexibly interoperable, allowing parallel use by actors on diverse platforms:

These insights about intervention principles for an open scholarly ecosystem helped Micah and me develop a proposal for discussion at the meeting. Our proposed program, IOTA (I Owe The Academy), aims to solve the supply-and-demand problem for quality peer review in open science interventions (the name is likely to change). We understand that most academics are willing to do peer review when it contributes to a better system of scholarship. At the same time, new peer review projects need (good) reviewers in order to launch successfully. And the community needs (good) empirical research on the peer review process itself. The solution is to match reviewers with initiatives that promote better scholarship using a virtual token system, whereby reviewers pledge review effort units, which are distributed to open peer review projects — while collecting data for use in evaluation and assessment. After receiving positive feedback at the meeting, we will develop this proposal further.
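Purely as an illustration of the token idea (every name and rule below is my invention, not part of the actual proposal), the core bookkeeping could be as simple as a pledge ledger that tracks effort units and their allocation to projects:

```python
from collections import defaultdict

class ReviewLedger:
    """Toy pledge ledger for review-effort units (illustrative only)."""
    def __init__(self):
        self.pledges = defaultdict(int)  # reviewer -> unallocated effort units
        self.assignments = []            # (reviewer, project) allocations

    def pledge(self, reviewer, units):
        """A reviewer pledges some number of review-effort units."""
        self.pledges[reviewer] += units

    def assign(self, reviewer, project):
        """Allocate one pledged unit to an open peer review project."""
        if self.pledges[reviewer] <= 0:
            raise ValueError("no pledged units remaining for " + reviewer)
        self.pledges[reviewer] -= 1
        self.assignments.append((reviewer, project))
```

The assignment records are exactly the kind of data that could later feed evaluation and assessment of the review process itself.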

Our presentation is embedded in full below:

A report on the ARL/SSRC meeting describes the shared interests, challenges to openness, and conditions for successful action discussed by participants. And it includes five specific projects they agreed to pursue — one of which is peer review on the SocArXiv and PsyArXiv paper platforms.

What’s next…

In the coming several months we expect to produce a white paper on research design, a proposal for IOTA, and a presentation for the Coalition for Networked Information meeting in April, to spark a discussion about the ways libraries can jointly support additional targeted work to promote, inspire, and support evidence-based research. And a revised version of the scholarly communication primer for sociology is on the way.

Gender segregated sociology today

This updates a series of posts that have addressed gender in academic sociology, starting in 2011 and updated in 2015, along with various tweets (to see random fact tweets from me on Twitter, Google familyunequal “now you know”).

Gender in academic sociology is complicated because the profession is running pretty female these days, with more than half the U.S. PhDs going to women since 1994, and more than 60% overall since 1999. So although there are various kinds of exclusion, it’s not as simple as excluding women from the discipline, and the extent of women’s representation depends on the choice of denominator. For example, a recent report found that, in the top 100 U.S. sociology departments in 2012, women were 60% of the assistant professors, 54% of the associate professors, and 34% of the full professors. This probably reflects a combination of age and tenure, with this year’s full professors representing yesteryear’s hiring, as well as women having lower rates of progression up the hierarchy.

Also, feminists (myself included) cheer the entry of women into formerly male-only professions but bemoan their concentration into female ghettos; there is no bright line beyond which one process transforms into the other (don’t get me started on “tipping points”).

But however we want to interpret the trends, we have to know the trends. So here are some, starting with updates on previous reports (degrees, sections, elections), and then some new ones (journal articles, peer reviewers).*

Degrees

The National Science Foundation reports the number and gender of PhD recipients by discipline (since 2006, earlier). This is what we get (smoothed with three-year averages): mostly more than 60% female since the late 1990s, with women accounting for most of the growth.

[Chart: sociology segregation]
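For anyone replicating this, the smoothing is just a centered three-year moving average; a minimal sketch, using made-up numbers rather than the NSF series:

```python
def moving_average_3yr(values):
    """Centered three-year moving average; endpoints average the terms available."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - 1): i + 2]  # up to one year on each side
        out.append(sum(window) / len(window))
    return out

# Hypothetical percent-female series (not the NSF numbers)
pct_female = [58.0, 61.0, 60.0, 63.0, 62.0]
smoothed = moving_average_3yr(pct_female)
```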

Sections

Sociology is a very broad discipline, including people who specialize in many distinct substantive and methodological areas. Within the American Sociological Association (ASA), we divide into 49 sections, which serve as a mechanism to organize conferences and journals, and to give awards (people can belong to as many as they want, for a small charge). The sections are pretty segregated by gender. Here are the gender compositions of each, from Sex and Gender (86% female) to Mathematical Sociology (22%):

[Chart: ASA gender sections]

Elections

The ASA leadership is elected in annual balloting by the membership, which is open to anyone who wants to join as long as they claim some affiliation with the discipline (the price ranges from $51 for students to $377 for people with incomes over $150,000). The association elected its first president in 1906, its first woman in 1952 and its second woman in 1973. In the last 10 years 7 of the presidents have been women.

[Chart: sociology segregation]

Is the shift toward women presidents because there are more women in the association, or in the hierarchy of the association, or because of the preferences of the membership? The president, along with all the other elected officers, is selected for the ballot by a nominations committee. In recent years it has become conventional wisdom that men usually lose to women in these elections because ASA members vote for a woman if they don’t have a strong preference between candidates, but I don’t know how well-founded that perception is.

Here is the gender of candidates for top positions (president, vice president, secretary, and council members), and the gender of the winners, from 2007 to 2018. Note that in the last three years the committee has nominated fewer women, but except for 2016 the membership has voted for more women (with 2018 having the widest gap yet):

[Chart: sociology segregation]

Papers

I’ve only done a little of this, but here is a quick look at the gender of authors in two of the highest-status sociology journals over the last several years: American Sociological Review (an ASA journal published by Sage) and American Journal of Sociology (an independent journal published by the University of Chicago). By my assessment (regular articles only), 35% of authors in the last 11 issues have been women.

[Chart: ASR/AJS author gender]

Someone could easily do a much more serious assessment of gender in sociology journal authorship.

Reviewers

For the last few months I have been working on peer review in sociology and the social sciences — how it works, how it doesn’t, and how it might be improved. (Here are slides from a talk I gave with Micah Altman at MIT.) One of my concerns about peer review is its general lack of accountability: no one supervises the process, the only person who knows everything at a journal is the editor, and the only thing the public sees is the published outcome. And yet publication peer review determines all manner of statuses in academia.

Looking for externally accessible data that might shine a light on the process, I checked for reviewer acknowledgement lists, which some journals publish at the end of a volume (lots of journals apparently don’t, including Social Forces, Sociological Methods and Research, Social Science Research, Sociological Forum, Sociological Theory, Work & Occupations, Social Currents, and Mobilization). I used the genderize.io API to count the genders of the reviewers, using 80% confidence for the first pass, and then personally checking or Googling other names (I didn’t do all the names, but almost).
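A minimal sketch of that kind of first pass (my illustration, not the code I actually used; it assumes the public genderize.io endpoint, which returns a gender and a probability for a first name, and applies the 80% cutoff):

```python
import json
from urllib.request import urlopen

def genderize(name):
    """Query the public genderize.io API for one first name (network call)."""
    with urlopen("https://api.genderize.io?name=" + name) as resp:
        return json.load(resp)  # e.g. {"name": ..., "gender": ..., "probability": ...}

def classify(names, lookup=genderize, threshold=0.8):
    """First-pass gender counts; low-confidence names go to a manual-check list."""
    counts = {"female": 0, "male": 0}
    needs_check = []
    for name in names:
        result = lookup(name)
        gender = result.get("gender")
        if gender in counts and result.get("probability", 0) >= threshold:
            counts[gender] += 1
        else:
            needs_check.append(name)  # check or Google these by hand
    return counts, needs_check
```

The injectable `lookup` function is just a convenience for testing without network access; the manual-check list corresponds to the names I resolved by hand.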

[Chart: referee gender]

The reviewer gender shares are a little higher for ASR and AJS than they were for authors, with the former having somewhat more women. Publication in one of these two journals is probably the most important gatekeeping mechanism to the upper echelons of the discipline. The methods journal has the lowest representation of women, and the gender journal the highest. Unknown here is the proportion of women among the pool of reviewers solicited by the editors.

So, that’s my report.

* These data all treat sex/gender as binary, either because the data were reported that way or because I coded them from names. I don’t address race, ethnicity, or other traits for the same reason.

Amend your ASA/Sage author agreement!


This is a followup to a previous post, and contains some duplication.

I have spoken well of the policy that permits authors to post preprint versions of their papers before submitting them to journals of the American Sociological Association. That means you can get your work out more broadly while it’s going through the review process. The rule says:

ASA authors may post working versions of their papers on their personal web sites and non-peer-reviewed repositories. Such postings are not considered by ASA as previous publication.

The policy goes on to ask that authors modify their posted papers to acknowledge publication if they are subsequently published. That’s all reasonable. This is why SocArXiv and other services offer authors the opportunity to link their papers to the DOI (record locator) for the published version, should it become available. This allows citation aggregators such as Google Scholar to link the records.

The problem

Unfortunately, the good part of this policy is undermined by the ASA / Sage author agreement that authors sign when their paper is accepted. It transfers the copyright of the paper to ASA, and sets conditions under which authors can distribute the paper in the future. The key passage here is this:

1. Subject to the conditions in this paragraph, without further permission each Contributor may …

  • At any time, circulate or post on any repository or website, the version of the Contribution that Contributors submitted to the Journal (i.e. the version before peer-review) or an abstract of the Contribution.
  • No sooner than 12 months after initial publication, post on any non-commercial repository or website the version of the Contribution that was accepted for publication.

This is not good. It means that if you post a paper publicly, e.g., on SocArXiv, and then submit it to ASA, you can’t update it to the revised version as your paper moves through the process. Only 12 months after ASA publishes it can you update the preprint version to match the version that the journal approved.

This policy, if followed, would produce multiple bad outcomes.

One scenario is that people post papers publicly, and submit them to ASA journals for review. Over the course of the next year or so, the paper is substantially revised and eventually published, but the preprint version is not updated until a full year after that, often two years after the initial submission. That means readers don’t get to see the improved version, and authors have to live with people reading and sharing their unimproved work. This discourages people from sharing their papers in the first place.

In the other scenario, people update their preprints as the paper goes through the revision process, so they and their readers get the benefit of access to the latest work. However, when the paper is accepted authors are expected to remove from public view that revised paper, and only share the pre-review version. If this were feasible, it would be terrible for science and the public interest, as well as the author’s career interests. Of course, this isn’t really feasible — you can’t unring the bell of internet distribution (SocArXiv and other preprint services do not allow removing papers, which would corrupt the scholarly record.) This would also discourage people from sharing their papers in the first place.

The individual solution

Fortunately, you are a volitional agent in a free market information ecosystem, and you don’t have to just sign whatever PDF some corporate conglomerate puts in front of you. My suggestion is that you amend the agreement before you sign it. After receiving your acceptance, when the managing editor sends you the author agreement for your signature, politely notify the editor that you will be happy to sign the agreement with a minor amendment. Then strike through the offending text and add the amendment. I recommend the following text:

  • [Strike:] No sooner than 12 months after initial publication, post on any non-commercial repository or website the version of the Contribution that was accepted for publication.
  • [Add:] At any time, post to SocArXiv (a non-commercial, open-access repository) the version of the Contribution that was accepted for publication, with a DOI link and bibliographic reference to the published Contribution.

Then sign the agreement and return it. Here’s a visual depiction of the amendment:

[Image: Sage author agreement amendment]

Don’t panic! Yes, this publication may be the last thing standing between you and tenure or a better job. But the journal will not cancel your publication when you do this. The very worst thing that will happen is they will say “No!” Then you can roll over and accept the original agreement. (After the dust settles, I’d love it if you let me know this happened.) People amend these agreements all the time. Give it a try!

Here’s the relevant passage in “Alice’s Restaurant” (@ 14:32)

And the only reason I’m singing you this song now is cause you may know somebody in a similar situation, or you may be in a similar situation,

And if you’re in a situation like that there’s only one thing you can do and that’s walk into The shrink wherever you are, just walk in say “Shrink, You can get anything you want, at Alice’s restaurant.” And walk out.

You know, if one person, just one person does it they may think he’s really sick and they won’t take him. And if two people, two people do it, in harmony – they may think they’re both faggots and they won’t take either of them. And three people do it, three, can you imagine, three people walking in singing a bar of Alice’s Restaurant and walking out. They may think it’s an organization. And can you, can you imagine fifty people a day, I said fifty people a day walking in singin a bar of Alice’s Restaurant and walking out.

And friends they may think it’s a movement And that’s what it is, the Alice’s Restaurant Anti-Massacree Movement, and all you got to do to join is sing it the next time it comes around on the guitar. With feeling.

Fix the policy

So, what possible reason can there be for this policy? It is clearly intended to punish the public in order to buttress the revenue stream of Sage, which returns some of its profits to ASA, at the expense of our libraries, which pay for subscriptions to ASA journals.

I assume this policy is never enforced, as I’ve never heard of it being enforced, but I don’t know that for a fact. It’s also possible that whoever wrote the Publications policy I linked above didn’t realize that it contradicted the Sage author agreement, which basically no one reads. I also assume that such a policy does not in fact have any effect on Sage’s profits, or the profits it kicks back to ASA. So it’s probably useless, but if it has any effects at all they’re bad, because it discourages people from distributing their work. ASA should change this author agreement.

I just got elected to the ASA Publications Committee, so I will add making this change to my platform, which I outlined here. I’m not optimistic about making policy changes at ASA in the current environment, but I am sure that the more people who join in the individual efforts, the greater our chances will be.

Legal risks in reporting on academic sexual harassment

[Image: gossip]
Gossip, according to Google Images

This is on the nuts and bolts of reporting sexual harassment.

Last fall my colleague Liana Sayer and I offered to help people report on sexual harassment in academic sociology (other posts on this: #MeToo). Although we have corresponded with a number of people, we have yet to make any public reports. One reason for that is legal risk.

The first advice I got from a number of people was to get a lawyer, and to get libel insurance. I did both of those things (libel turns out to be a kind of personal injury, like hitting someone in your car, so you can get covered for it under an umbrella policy).

After attending a media law conference (long story), and having gathered enough evidence to consider moving ahead with publication in one case, I spoke to several lawyers and eventually retained Constance Pendleton, a media law expert and partner at Davis Wright Tremaine. Here is some of what I learned from speaking with her.

First, if the case involves harassment within one workplace (school), it may be better to go through the official reporting procedure rather than making a public case, at least from the perspective of protecting the accuser. This involves lawyers and documents, which is good. However, for reasons I mentioned here, that often doesn’t work. And that process often ends with a promise of confidentiality that shields the harasser from public exposure (a key institutional goal of many university sexual harassment officers).

Second, the risk of getting sued as an individual is high. We don’t have a lot of experience in the current context with lawsuits against accusers, and the cases that have come forward have often involved major investigations by big organizations, not individuals publishing accusations on their blogs, so it’s hard to know how they will play out. However, even the cost of “easily” winning a case is likely to be a lot, something in five figures. And in the process, the accuser you are trying to protect could be forced to testify, or at least produce an affidavit, even if you have kept them anonymous in the story. Truth is a defense against libel, but if your true statement is “someone told me this,” you can still be found responsible if you can’t prove that what the person told you is true, or if it can be shown you acted maliciously in reporting it.

In the case of being sued, the things you need are the things a good journalist would want in reporting such a story, such as original documents, contemporaneous records, witnesses, and so on. There is a reason for that: journalists who report this stuff are heading off such lawsuits themselves. But I didn’t fully appreciate some key differences between a citizen journalist and a real news organization. These include the reputation of the news organization, which shields them (practically if not legally) from charges of acting maliciously. Also, they have lawyers already, so it doesn’t cost them as much to defend cases. And they have an interest in defending their reputation, so everyone knows they will fight. Finally, there are some legal protections for revealing information if you do it in the public interest, and that’s an easier case for news organizations to make. (This is my shallow, lay understanding of the situation, not legal advice).

Regardless of my thoughts on procedural fairness, which is hotly debated, these are reasons why I wouldn’t report on rumors alone, or report a case where I didn’t know the accuser’s identity and had no way of verifying the supporting information.

News reality

Given all this, the best thing might be for a news organization to report the story, rather than for me to report it independently. I haven’t ruled out the latter course, but it’s much riskier. (And there may be hybrid solutions, such as writing a reported piece as a freelancer for a news organization.) Unfortunately, or maybe fortunately, news organizations that are interested in reporting on sexual harassment are getting bombarded with cases to report. They have to choose selectively from among these cases, and the variables involved are beyond my control.

In the case of Michael Kimmel, for example, reported by the Chronicle of Higher Education (paywalled; bootlegged), the story includes one accuser who requested anonymity and one senior sociologist who affirms the existence of rumors, and the charge is unwanted advances and demeaning comments. In this environment, that would not normally be enough to warrant a news story by a major publication naming the accused: not very much evidence, and not such an egregious case (no reported threats, quid pro quo, or violence). That’s not an excuse, that’s a fact of the media landscape. The difference here is that Kimmel is famous, and that he is “delaying” receiving a major award (plus it’s an award for being a feminist). If you brought the same facts and evidence to a news organization, but about a non-famous senior sociologist, the story would be unlikely to make it past editorial triage.

In summary, the very cases that I most want to expose — the common harassment that occurs between non-famous people all the time in academia — are difficult to work with. Risky for the citizen journalist, but maybe not important enough to jump the line at major news organizations. That said, I still favor public exposure as an approach in this environment, where policies remain weak and formal proceedings are unlikely to produce satisfactory results — but harassers and their employers are on the defensive and much of the public is watching and willing to get involved. And I still want to help. But it’s harder than I thought it would be. Live and learn.

Review essay: Public engagement and the influence imperative


I have written a review essay at the invitation of Contemporary Sociology. Here’s a preprint version on SocArXiv: https://osf.io/preprints/socarxiv/v27xk/.

This is the abstract. Feedback welcome!

Public engagement and the influence imperative

Abstract: A review essay discussing three advice books for social scientists. Sociologists, in responding to the imperative to make their work more influential, must go beyond doing “public sociology” to embrace doing sociology “in public” (Healy 2017). Rather than using public engagement primarily for publicity – to make our research matter – we should use engagement to help us do research that matters in the first place. Next, I caution that the drive to be professionally rewarded for public intellectualism is fraught with conflicts that may be irreconcilable. To be a public intellectual today requires being both public in one’s intellectual life and intellectual in one’s public life, and for academics in the era of the “market university” (Berman 2011), trying to get paid for that leads to a neoliberal trap. Finally, I argue for a move beyond personal strategies toward the development of open scholarship as an institutional response that may ultimately be responsible for sociology’s survival.

Here is the SocArXiv citation:

Cohen, Philip N., 2018. “Public Engagement and the Influence Imperative”. SocArXiv. April 7. doi:10.17605/OSF.IO/V27XK.

Mark Regnerus to be promoted to full professor at UT Austin

[Image: Regnerus]
Grainy hidden-camera photo by pnc.

Mark Regnerus, who has been an associate professor of sociology at the University of Texas at Austin since 2007, will be promoted to full professor, according to multiple sources with direct knowledge of the situation. That decision was made at the level of the central administration, overriding negative recommendations from both the Department of Sociology faculty and the College of Liberal Arts.

In terms of research productivity, Regnerus’s record is adequate for promotion at a leading research university. His early work was well-cited. His most recent book, Cheap Sex (the only one I’ve read) is atrocious (as I have written). But the real problem is ethics, and there the protocol is less clear. I previously wrote:

To get background on the story of the Regnerus Affair, you can read the chapter in my book [Enduring Bonds], or read the entire Regnerus thread on this blog, or read this 2015 recap, which is the latest long piece, with links to everything else. For purposes of this discussion, these conclusions are salient: he used crudely biased survey methods to gin up harms attributable to same-sex parenting, to help stop same-sex marriage in the courts, as part of a conspiracy with other right-wing academics (principally Brad Wilcox) and institutions (Heritage Foundation, Bradley Foundation, Witherspoon Institute), which included manipulating the peer review process to plant supporters on the panel and submitting the article for publication before the data collection was even complete, and then repeatedly lying about all that to cover up the conspiracy (including in the published work itself, where he falsely denied the involvement of the funders, and in an ethics proceeding by his university).

So what do we do with all this now? All that didn’t get him fired, and he still does research in the academic system. That is galling, because there is at least one really good, honest researcher who doesn’t have a tenure-track job today because Regnerus does. But that’s the system. Meanwhile life is long, people can change. In our weak system, however, which relies almost entirely on good will and honesty by researchers, reputation matters. With his reputation, you simply can’t take his word in the way that we (naively) do with regular researchers. I think there are two options, then, if we are to take the research seriously. The first is he could come clean, admit to what he did, and make an honest attempt to re-enter respectable academia. The other (non-exclusive) option is for him to make his research open and transparent, to subject it to scrutiny and verification, and let people see that he is behaving honestly and ethically now.

He has not yet done either of those things.

I would vote against his promotion based on this record. Maybe the internal documents will come out and allow us to debate this more fully, but to me it’s not a hard decision.

So I think it’s bad for the UT administration to override the faculty recommendation and impose the promotion for Regnerus. With the stroke of that pen, they commit the university — barring unplanned events — to several million dollars worth of salary and benefits for him for the next several decades. And thousands of students subjected to his teaching. That’s money that could be spent on much more valuable things, including honest, ethical sociologists.


 

Comments will be moderated for length, repetitiveness, and obnoxiousness.