Do we get tenure for this?

My photo of Utah, which for the occasion I titled "Openness." https://flic.kr/p/FShb6d

Colleen Flaherty at Inside Higher Ed has written up the American Sociological Association’s committee report, “What Counts? Evaluating Public Communication in Tenure and Promotion.”

I was once a member of the ASA Subcommittee on the Evaluation of Social Media and Public Communication in Sociology, which Leslie McCall chaired when it produced the report. (It is a subcommittee of the task force on engaging sociology, convened by then-President Annette Lareau.)

It’s worth reading the whole article, which also includes comments from Sara Ovink, McCall and me, in addition to the report. Having thought about this issue a little, I was happy to respond to Flaherty’s request for comment. These are the full comments I sent her, from which she quoted in the article:

1. We don’t need credit toward promotion for every thing we do. Scholars who take a public-facing stance in their work often find that it enhances the quality and quantity of their work in the traditional fields of assessment (research, teaching, service), so that separately rewarding the public work is not always necessary. I don’t need credit for having a popular blog – that work has led to new research ideas, better feedback on my research, better grad students, teaching ideas, invitations to contribute to policy, and book contracts.

2. We’d all love to be promoted for authoring a great tweet but no one wants to be fired for a bad one. Assessment of public engagement needs to be holistic and qualitative, taking into the account quality, quantity, and impact of the work. Simplistic quantitative metrics will not be useful.

3. It is also important to value and reward openness in our routine work, such as posting working papers, publishing in open access journals, sharing replication files, and disseminating open teaching materials. Public engagement does not need to mean separate activities and products, but can mean taking a public-facing stance in our existing work.

The SocArXiv project is one outcome of these conversations (links: latest info, submit a paper), especially relating to point #3 above. Academics who open up their work should be recognized for that contribution to the public good and for promoting the future of academia. In that spirit, I also proposed a rule change for the ASA Dissertation Award, which now includes this:

To be eligible for the ASA Dissertation Award, candidates’ dissertations must be publicly available in Dissertation Abstracts International or a comparable outlet. Dissertations that are not available in this fashion will not be considered for the award.

It’s hard to change everything, but it’s not that hard to make some important changes in the right direction. Rewarding engagement and openness is an important step in the right direction.

SocArXiv in development


Readers of the blog have become familiar with my complaints about our publishing system (scan the academia tag for examples): it’s needlessly slow, inefficient, hierarchical, profit-driven, exploitative, and also doesn’t work well.

Simple example: a junior scholar sends a perfectly reasonable sociology paper to a high-status journal. The editor commissions three anonymous reviews, and four months later the paper is rejected on the basis of a few hours of their volunteer labor. This increases the value — and subscription price — of the for-profit journal, because its high rejection rate is a key selling point. The author will now revise the paper (some of the advice was good, but nothing to suggest the analysis or conclusions were actually wrong) and send it to another journal, where three more anonymous reviewers — having no access to the previous round of review and exchange — will donate a few hours’ labor to a different for-profit publisher. In a few months we’ll find out what happens. Repeat. The outcome will be a good paper, improved by the process, published 1-3 years after it was written — during which time the paper, the code, and the data were not available to anyone else. It will be available for $39.95 to non-academics, but most of the people who are aware of it will be able to read it because their institutions buy it as part of a giant bundle of journals from the publisher. The writer may get a job and, later, tenure. Thus, the process produces a good paper, inaccessible to most of the world, as well as a person dependent on the process, one with the institutional position and incentive to perpetuate it for another generation. There’s more wrong than this, but that’s the basic idea. The system is not completely non-functional; it’s just very bad.

With current technology, replacing our outdated journal system is not difficult. We could save vast amounts of money while providing free, faster access to research for everyone. Like our healthcare system, academic publishing is laboring under the weight of supporting its usurious middlemen. Getting them out of the way is a problem of politics and organization, not technology or cost. We academics do all the work already – research, writing, reviewing, editing – contributing our labor without compensation to giant companies that claim to be helping us get and keep our incredibly privileged jobs. But most of us are supported directly or indirectly by the state and our students (or their banks), not the journal publishers. We don’t need most of what the journal publishers do any more, and working for them is degrading our research, making it less innovative and transformative, less engaging and engaged, less open and accountable.

SocArXiv

The people in math and physics developed a workaround for this system in arXiv.org, where people share papers before they are peer-reviewed. Other paper servers have arisen as well, including some run by universities, some run privately for profit, and some in specific disciplines. But there is a need for a new general, open-access, open-source paper server for the social sciences, one that encourages linking and sharing data and code, that serves its research to an open metadata system, and that provides the foundation for a post-publication review system. I hope that SocArXiv will enable us to save research from the journal system. Once it’s built, anyone will be able to use it to organize their own peer-review community, to select and publish papers (though not exclusively), to review and comment on each other’s work — and to discover, cite, value, and share research unimpeded. We will be able to do this because of the brilliant efforts of the Center for Open Science (which is already developing a new preprint server) and SHARE (“a free, open, data set about research and scholarly activities across their life cycle”).

And we hope you’ll get involved: sharing research, reviewing, moderating, editing, mobilizing. Lots to do, but the good news is we’re doing most of this work already.

SocArXiv won’t take over this blog, though. You can read more about the project, and see the steering committee, in the announcement of our partnership. For updates, you can follow us on Twitter or Facebook, or email to add your name to the mailing list. In fact, you can also make a tax-deductible contribution to SocArXiv through the University of Maryland here.

When your paper is ready, check SocArXiv.org.

Perspective on sociology’s academic hierarchy and debate

Keep that gate. (Photo by Rob Nunn, https://flic.kr/p/4DbzCG)
It’s hard to describe the day I got my first acceptance to American Sociological Review. There was no social media back then so I have no record of my reaction, but I remember it as the day — actually, the moment, as the conditional acceptance slipped out of the fax machine — that I learned I was getting tenure, that I would have my dream job for the rest of my life, with a personal income in the top 10 percent of the country for a 9-month annual commitment. At that moment I was not inclined to dwell on the flaws in our publishing system, its arbitrary qualities, or the extreme status hierarchy it helps to construct.

In a recent year ASR considered more than 700 submitted articles and rejected 90% or more of them (depending on how you count). Although many people dispute the rationality of this distinction, publishing in our association’s flagship journal remains the most universally agreed-upon indicator of scholarship quality. And it is rare. I randomly sampled 50 full-time sociology faculty listed in the 2016 ASA Guide to Graduate Departments of Sociology (working in the U.S. and Canada), and found that 9, or 18%, had ever published a research article in ASR.

Not only is it rare, but publication in ASR is highly concentrated in high-status departments (and individuals). While many departments have no faculty who have published in ASR (I didn’t count these, but there are a lot), some departments are brimming with them. In my own, second-tier department, I count 16 out of 27 faculty with publications in ASR (59%), while at a top-tier, article-oriented department such as the University of North Carolina at Chapel Hill (where I used to work), 19 of the 25 regular faculty, or 76%, have published in ASR (many of them multiple times).

Without diminishing my own accomplishment (or that of my co-authors), or the privilege that got me here, I should be clear that I don’t think publication in high-status journals is a good way to identify and reward scholarly accomplishment and productivity. The reviews and publication decisions are too uneven (although obviously not completely uncorrelated with quality), and the limit on the number of articles published is completely arbitrary in an era in which the print journal and its cost-determined page limit are simply ridiculous.

We have a system that is hierarchical, exclusive, and often arbitrary — and the rewards it doles out are both large and highly concentrated.

I say all this to put in perspective the grief I have gotten for publicly criticizing an article published in ASR. In that post, I specifically did not invoke ethical violations or speculate on the motivations or non-public behavior of the authors, about whom I know nothing. I commented on the flaws in the product, not the process. And yet a number of academic critics responded vociferously to what they perceived as the threats this commentary posed to the academic careers and integrity of the authors whose work I discussed. Anonymous critics called my post “obnoxious, childish, time wasting, self promoting,” and urged sociologists to “shun” me. I have been accused of embarking on a “vigilante mission.” In private, a Jewish correspondent referred me to the injunction in Leviticus against malicious gossip, in an implicit critique of my Jewish ethics.*

In the 2,500-word response I published on my site — immediately and unedited — I was accused of lacking “basic decency” for not giving the authors a chance to prepare a response before I posted the criticism on my blog. The “commonly accepted way” when “one scholar wishes to criticize the published work of another,” I was told, is to go through a process of submitting a “comment” to the journal that published the original work, which “solicits a response from the authors who are being criticized,” and it’s all published together, generally years later. (Never mind that journals have no obligation or particular inclination to publish such debates, as I have reported on previously, when ASR declined for reasons of “space” to publish a comment pointing out errors that were not disputed by the editors.)

This desire to maintain gatekeepers to police and moderate our discussion of public work is not only quaint, it is corrosive. Despite pointing out uncomfortable facts (which my rabbinical correspondent referred to as the “sin of true speech for wrongful purpose”), my criticism was polite, reasoned, with documentation — and within the bounds of what would be considered highly civil discourse in any arena other than academia, apparently. Why are the people whose intellectual work is most protected most afraid of intellectual criticism?

In Christian Smith’s book, The Sacred Project of American Sociology (reviewed here), which was terrible, he complains explicitly about the decline of academic civilization’s gatekeepers:

The Internet has created a whole new means by which the traditional double-blind peer-review system may be and already is in some ways, I believe, being undermined. I am referring here to the spate of new sociology blogs that have sprung up in recent years in which handfuls of sociologists publicly comment upon and often criticize published works in the discipline. The commentary published on these blogs operates outside of the gatekeeping systems of traditional peer review. All it takes to make that happen is for one or more scholars who want to amplify their opinions into the blogosphere to set up their own blogs and start writing.

Note he is complaining about people criticizing published work, yet believes such criticism undermines the blind peer-review system. This fear is not rational. The terror over public discussion and debate — perhaps especially among the high-status sociologists who happen to also be the current gatekeepers — probably goes a long way toward explaining our discipline’s pitiful response to the crisis of academic publishing. According to my (paywalled) edition of the Oxford English Dictionary, the definition of “publish” is “to make public.” And yet to hear these protests you would think the whisper of a public comment poses an existential threat to the very people who have built their entire profession around publishing (though, to be consistent, it’s mostly hidden from the public behind paywalls).

This same fear leads many academics to insist on anonymity even in normal civil debates over research and our profession. Of course there are risks, as there tend to be when people make important decisions about things that matter. But at some point, the fear of repression for expressing our views (which is legitimate in some rare circumstances) starts looking more like avoidance of the inconvenience or discomfort of having to stand behind our words. If academics are really going to lose their jobs for getting caught saying, “Hey, I think you were too harsh on that paper,” then we are definitely having the wrong argument.

“After all,” wrote Eran Shor, “this is not merely a matter of academic disagreements; people’s careers and reputations are at stake.” Of course, everyone wants to protect their reputation — and everyone’s reputation is always at stake. But let’s keep this in perspective. For those of us at or near the top of this prestige hierarchy — tenured faculty at research universities — damage to our reputations generally poses a threat only within a very narrow bound of extreme privilege. If my reputation were seriously damaged, I would certainly lose some of the perks of my job. But the penalty would also include a decline in students to advise, committees to serve on, and journals to edit — and no change in that lifetime job security with a top-10% salary for a 9-month commitment. Of course, for those of us whose research really is that important, anything that harms our ability to work in exactly the way that we want to has costs that simply cannot be measured. I wouldn’t know about that.

But if we want the high privilege of an academic career — and if we want a discipline that can survive under scrutiny from an increasingly impatient public and deepening market penetration — we’re going to have to be willing to defend it.

* I think if random Muslims have to denounce ISIS then Jews who cite Leviticus on morals should have to explain whether — despite the obvious ethical merit to some of those commands — they also support the killing of animals just because they have been raped by humans.

Eran Shor responds

On May 8 I wrote about three articles by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena. (My post is here; the articles, in chronological order, are available in full here, here, and here.)

Eran Shor, associate professor of sociology at McGill University and first author of the papers in question, has sent me the following response, which I agreed to post unedited. I have not heard from the other authors, and Shor does not claim to speak for them here. I’m not responding to this now, except to say that I stand by the original post. Feel free to make (moderated) comments below.

Eran Shor’s response

We would like to thank Philip N. Cohen for posting this response to his blog, unedited.

Philip N. Cohen wrote a post in which he targets three of our recently published articles and claims that these are overlapping and misleading to readers. On the one hand, Cohen clarifies that he is “not judging Shor et al. for any particular violation of specific rules or norms” and “not judging the quality of the work overall.” However, in his conclusions he speaks about “overlapping papers”, “selling duplicative information as new” and “misleading readers”. We feel this terminology more than just hints at intentional wrongdoing. The first response to the blog outright accuses us of self-plagiarism, deceitfulness, and having questionable ethics, which we believe is directly the result of Cohen’s suggestive language.

Below, we explain why we feel that these accusations are unfair and mostly unsubstantiated. We also reflect on the debate over open science and on the practice of writing blogs that make this kind of accusations without first giving authors the chance to respond to them.

As for the three articles in question, we invite readers to read these for themselves and judge whether each makes a unique and original contribution. To quickly summarize these contributions as we see them:

  • The first article, published in Journalism Studies (2014), focuses on the historical development of women’s media representation, presenting new data that goes back to the 19th Century and discussing the historical shifts and differences between various sections of the newspaper.
  • The second article, published in Social Science Quarterly (2014) begins to tackle possible explanations for the persistent gap in representation and specifically focuses on the question of political partisanship in the media and its relationship with gendered coverage patterns. We use two separate measures of newspapers’ political slant and conduct bivariate analyses that examine the association between partisanships and representation.
  • The third article was published in the American Sociological Review (2015). In it, we conducted a wide-scope examination of a large variety of possible explanations for the persistent gender gap in newspapers. We presented a large gamut of new and original data and analyses (both bivariate and multivariate), which examined explanations such as “real-world” inequalities, newsroom composition, and various other factors related to both the newspapers themselves and the cities and states in which they are located.

Of note, these three articles are the result of more than seven years of intensive data collection from a wide variety of sources and multiple analyses, leading to novel contributions to the literature. We felt (and still do) that these various contributions could not have been clearly fleshed out in one article, not even a longer article, such as the ones published by the American Sociological Review.

Now for the blog: Did SSQ really “scoop” ASR?

First, where we agree with Cohen’s critique: the need to indicate clearly when one is presenting a piece of data or a figure that already appeared in another paper. Here, we must concede that we have failed, although certainly not intentionally. The reason we dropped the ball on this one is the well-established need to try to conceal one’s identity as long as a paper is under review, in order to maintain the standard of double-blind review. Clearly, we should have been more careful in checking the final copies of the SSQ and ASR articles and add a clarification stating that the figure already appeared in an earlier paper. Alternatively, we could have also dropped the figure from the paper and simply refer readers to the JS paper, as this figure was not an essential component of either of the latter two papers. As for the issue of the missing year in the ASR paper, this was simply a matter of re-examining the data and noting that the data for 1982 was not strong enough, as it relied on too few data points and a smaller sample of newspapers, and therefore was not equivalent to data from the following years. We agree, however, that we should have clarified this in the paper.

That said, as Cohen also notes in his blog, in each of the two latter papers (SSQ and ASR), the figure in question is really just a minor descriptive element, the starting point for a much larger—and different in each article—set of data and analyses. In both cases, we present the figure at the beginning of the paper in order to motivate the research questions and subsequent analyses and we do not claim that this is one of our novel findings or contributions. The reason we reproduced this figure is that reviewers of previous versions of the paper asked us to demonstrate the persistent gap between women and men in the news. The figure serves as a parsimonious and relatively elegant visual way to do that, but we also presented data in the ASR paper from a much larger set of newspapers that establishes this point. Blind-review norms prevented us from referring readers to our own work in a clear way (that is, going beyond simply including a citation to the work). Still, as noted above, we take full responsibility for not making sure to add a clearer reference to the final version. But we would like to emphasize that there was no intentional deceit here, but rather a simple act of omission. This should be clear from the fact that we had nothing to gain from not referring to our previous work and to the fact that these previous findings motivated the new analyses.

Duplicated analyses?

As for the other major charge against us in the blog, it is summarized in the following paragraph:

It looks to me like the SSQ and ASR they used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they are using the same data, how they got from a “weak to moderate relationship” to “no significant relationship” seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which “some say this, some say that”? What kind of way is this to figure out what’s going on?

Here we feel that Cohen is presenting a false picture of what we did in the two papers. First, the SSQ paper actually used two measures of political slant. For both measures, we presented bivariate analyses of the relationship between slant and coverage (which indeed shows a weak to moderate relationship). It is important to stress here that this somewhat rudimentary analysis was simply a matter of data availability: The bivariate cross-sectional analysis was the best analysis we could perform at the time when the paper was accepted for publication (end of 2013), given the data that we had available. However, during the following two years, leading to the publication of the article in ASR at the end of 2015, we engaged in an extensive and time-consuming process of collecting and coding additional data. This effort allowed us to code longitudinal data on important characteristics of newspapers (e.g., identity of editors and publishers and various city and state-related characteristics) for a subset of the newspapers in our sample and for six consecutive years.

And so, as is often the case when moving from a cross-sectional bivariate analysis (SSQ) to a longitudinal multivariate one (ASR), the previous weak relationship that we found for slant basically disappeared, or rather, it became non-significant. In the ASR paper, we did refer readers to the results of our previous study, although without providing details about the analysis itself, because we did not wish to single out our own work in a paragraph that also briefly cites the results of other similar studies (Potter (1985); Adkins Covert and Wasburn (2007)). Perhaps we should have clarified better in the final draft that we previously examined this relationship in a cross-sectional bivariate analysis. But this is a far cry from the allegation that we were reproducing the same analysis, or that we intentionally concealed evidence, both of which are simply false.

To be clear: While the SSQ paper presented a cross-sectional bivariate analysis, in the ASR paper we used a somewhat different sample of papers to perform a longitudinal multivariate analysis, in which newspaper slant was but one of many variables. These are important differences (leading to different results), which we believe any careful reader of the two papers can easily detect. Readers of the ASR article will also notice that the testing of the political slant question is not a major point in the paper, nor is it presented as such. In fact, this variable was originally included as merely a control variable, but reviewers asked that we flesh out the theoretical logic behind including it, which we did. We therefore feel that Cohen’s above comment (in parentheses)—“In addition to whatever else is new in the third paper”—is unfair to the point of being disingenuous. It ignores the true intent of the paper and its (many) unique contributions, for the purpose of more effectively scoring his main point.

As for the paragraph that Cohen cites in the blog, in which we use very similar language to theoretically justify the inclusion of the newspaper slant variable in the ASR analysis, we would like to clarify that this was simply the most straightforward way of conveying this theoretical outline. We make no pretense or implication whatsoever that this passage adds anything new or very important in its second iteration. And again, we do refer the readers to the previous work (including our own), which found conflicting evidence regarding this question in cross sectional bivariate analyses. Knowledge is advanced by building on previous work (your own and others) and adding to it, and this is exactly what we do here.

Misleading readers and selling duplicated information as new?

Given our clarifications above, we feel that the main charges against us are unjust. The three papers in question are by no means overlapping duplications (although the one particular descriptive figure is). In fact, none of the analyses in SSQ and ASR are overlapping, and each paper made a unique contribution at the time it was published. Furthermore, the charges that we are “selling duplicate information as new” and “misleading readers” clearly imply that we have been duplicitous and dishonest in this research effort. It is not surprising that such inflammatory language ended up inciting respondents to the blog to accuse us flatly of self-plagiarism, deceitfulness, and questionable ethics.

In response to such accusations, we once again wish to state very clearly that at no point did we intend to deceive readers or intentionally omit information about previous publications. While we admit to erring in not clearly mentioning in the caption of the figure that it was already reported in a previous study, this was an honest mistake, and one which did not and could not be to our benefit in any way whatsoever. An error of omission it was, but not a violation of any ethical norms.

Is open access the solution?

We would also like to comment on the more general claim of the blog about the system being broken and the solution lying in open access briefly. We would actually like to express our firm support for Cohen’s general efforts to promote open science. We also agree with the need to both carefully monitor and rethink our publication system, as well as with the call for open access to journals and a more transparent reviewing process. All of these would bring important benefits and the conversation over them should continue.

However, we question the assumption that in this particular case an open access system would have solved the problem of not mentioning that our figure appeared in previous articles. As we note above, this omission was actually triggered by the blind review system and our attempts to avoid revealing our identity during the reviewing process (and later on to our failure to remember to add more direct references to our previous work in the final version of the article). But surely, most reviewers who work at academic institutions have access through their local libraries to a broad range of journals, including ones that are behind paywalls (and certainly to most mainstream journals). Our ASR paper was reviewed by nine different anonymous reviewers, as well as by the editorial board. It seems reasonable to assume that virtually all of them would have been able to access the previous papers published in mainstream journals. So the fact that our previous articles were published in journals with paywalls seems neither here nor there for the issues Cohen raises about our work.

A final word: On the practice of making personal accusations in a blog without first soliciting a response from the authors

The commonly accepted way to proceed in our field is that when one scholar wishes to criticize the published work of another, they write a comment to the journal that published the article. The journal then solicits a response from the authors who are being criticized, and a third party then decides whether this is worthy of publication. Readers then get the chance to read both the critique and the response at the same time and decide which point of view they find more convincing. It seems to us that this is the decent way of proceeding in cases where there are different points of view or disagreement over scholarly findings and interpretations. Moreover, when the critique involves charges (or hints) of unethical behavior and academic dishonesty. In such cases, this norm of basic decency seems to us to be even more important. In our case, Professor Cohen did not bother to approach us, and did not ask us to respond to the accusations against us. In fact, we only learned about the post by happenstance, when a friend directed our attention to the blog. When we responded and asked Cohen to make some clarifications to the original posting, we were turned down, although Cohen kindly agreed to publish this comment to his blog, unedited, for which we are thankful.

Of course, online blogs are not expected to honor the norms of civilized scholarly debate to the letter. They are a different kind of forum. Clearly, they have their advantages in terms of both speed and accessibility, and they form an important part of the current academic discourse. But it seems to us that, especially in such cases where allegations of ethically questionable conduct are being made, the authors of blogs should adopt a more careful approach. After all, this is not merely a matter of academic disagreements; people’s careers and reputations are at stake. We would like to suggest that in such cases the authors of blogs should err on the side of caution and allow authors to defend themselves against accusations in advance and not after the fact, when much of the damage has already been done.

 

How broken is our system (hit me with that figure again edition)

Why do sociologists publish in academic journals? Sometimes it seems improbable that the main goal is sharing information and advancing scientific knowledge. Today’s example of our broken system, brought to my attention by Neal Caren, concerns three papers by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena (Shor et al.).

May 13, 2016 update: Eran Shor has sent me a response, which I posted here.

In a paywalled 2013 paper in Journalism Studies, the team used an analysis of names appearing in newspapers to report the gender composition of people mentioned. They analyzed the New York Times back to 1880, and then a larger sample of 13 newspapers from 1982 through 2005. Here’s one of their figures:

[Figure from the 2013 Journalism Studies paper: gender composition of people mentioned in newspapers over time.]

The 2013 paper was a descriptive analysis, establishing that men are mentioned more than women over time.

In a paywalled 2014 article in Social Science Quarterly (SSQ) the team followed up. Except for a string-cite mention in the methods section, the second paper makes no reference to the first, giving no indication that the two are part of a developing project. They use this figure to motivate the analysis in the second paper, with no acknowledgment that it also appeared in the first:

[The same figure, as reproduced in the 2014 SSQ paper.]

Shor et al. 2014 asked,

How can we account for the consistency of these disparities? One possible factor that may explain at least some of these consistent gaps may be the political agendas and choices of specific newspapers.

Their hypothesis was:

H1: Newspapers that are typically classified as more liberal will exhibit a higher rate of female-subjects’ coverage than newspapers typically classified as conservative.

After analyzing the data, they concluded:

The proposition that liberal newspapers will be more likely to cover female subjects was not supported by our findings. In fact, we found a weak to moderate relationship between the two variables, but this relationship is in the opposite direction: Newspapers recognized (or ranked) as more “conservative” were more likely to cover female subjects than their more “liberal” counterparts, especially in articles reporting on sports.

They offered several caveats about this finding, including that the measure of political slant used is “somewhat crude.”

Clearly, much more work to be done. The next piece of the project was a 2015 article in American Sociological Review (which, as the featured article of the issue, was not paywalled by Sage). Again, without mentioning that the figure has been previously published, and with one passing reference to each of the previous papers, they motivated the analysis with the figure:

[The same figure again, as reproduced in the 2015 ASR paper.]

Besides not getting the figure in color, ASR readers for some reason also don’t get 1982 in the data. (The paper makes no mention of the difference in period covered, which makes sense because it never mentions any connection to the analysis in the previous paper). The ASR paper asks of this figure, “How can we account for the persistence of this disparity?”

By now I bet you’re thinking, “One way to account for this disparity is to consider the effects of political slant.” Good idea. In fact, in the depiction of the ASR paper, the rationale for this question has hardly changed at all since the SSQ paper. Here are the two passages justifying the question.

From SSQ:

Former anecdotal evidence on the relationship between newspapers’ political slant and their rate of female-subjects coverage has been inconclusive. … [describing studies by Potter (1985) and Adkins Covert and Wasburn (2007)]…

Notwithstanding these anecdotal findings, there are a number of reasons to believe that more conservative outlets would be less likely to cover female subjects and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s rights issues in a relatively negative light (Baker Beck, 1998; Brescoll and LaFrance, 2004). Therefore, they may be less likely to devote coverage to these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors […]. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally (that is, conservatively) considered to be more important or interesting, such as politics, business, and sports, and less likely to report on issues such as social welfare, education, or fashion, where according to research women have a stronger presence (Holland, 1998; Ross, 2007, 2009; Ross and Carter, 2011).

From ASR:

Some work suggests that conservative newspapers may cover women less (Potter 1985), but other studies report the opposite tendency (Adkins Covert and Wasburn 2007; Shor et al. 2014a).

Notwithstanding these inconclusive findings, there are several reasons to believe that more conservative outlets will be less likely to cover women and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s issues in a relatively negative light (Baker Beck 1998; Brescoll and LaFrance 2004), making them potentially less likely to cover these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally considered more important or interesting, such as politics, business, and sports, rather than reporting on issues such as social welfare, education, or fashion, where women have a stronger presence.

Except for a passing mention among the “other studies,” there is no connection to the previous analysis. The ASR hypothesis is:

Conservative newspapers will dedicate a smaller portion of their coverage to females.

On this question in the ASR paper, they conclude:

our analysis shows no significant relationship between newspaper coverage patterns and … a newspaper’s political tendencies.

It looks to me like the SSQ and ASR they used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they are using the same data, how they got from a “weak to moderate relationship” to “no significant relationship” seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which “some say this, some say that”? What kind of way is this to figure out what’s going on?

Still love your system?

It’s fine to report the same findings in different venues and formats. It’s fine, that is, as long as it’s clear they’re not original in the subsequent tellings. (I personally have been known to regale my students, and family members, with the same stories over and over, but I try to remember to say, “Stop me if I already told you this one” first.)

I’m not judging Shor et al. for any particular violation of specific rules or norms. And I’m not judging the quality of the work overall. But I will just make the obvious observation that this way of presenting ongoing research is wasteful of resources, misleading to readers, and hinders the development of research.

  • Wasteful because reviewers, editors, and publishers are essentially duplicating their efforts to try to figure out what is actually to be learned from these overlapping papers — and then to repackage and sell the duplicative information as new.
  • Misleading to readers because we now have “many studies” that show the same thing (or different things), without the clear acknowledgment that they use the same data.
  • And hindering research because of the wasteful delays and duplicative expenses involved in publishing research that should be clearly presented in cumulative, transparent fashion, in a timely way — which is what we need to move science forward.

Open science

When making (or hearing) arguments against open science as impractical or unreasonable, just weigh the wastefulness, misleadingness, and obstacles to science so prevalent in the current system against whatever advantages you think it holds. We can’t have a reasonable conversation about our publishing system based on the presumption that it’s working well now.

In an open science system researchers publish their work openly (and free) with open links between different parts of the project. For example, researchers might publish one good justification for a hypothesis, with several separate analyses testing it, making clear what’s different in each test. Reviewers and readers could see the whole series. Other researchers would have access to the materials necessary for replication and extension of the work. People would be judged for hiring and promotion according to the actual quality and quantity of their work and the contribution it makes to advancing knowledge, rather than through arbitrary counts of “publications” in private, paywalled journals. (The non-profit Center for Open Science is building a system like this now, and offers a free Open Science Framework, “A scholarly commons to connect the entire research cycle.”)
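To make “open links between different parts of the project” concrete, here is a minimal sketch of what a linked project record could look like. It is purely illustrative: the identifiers are invented, and this is not the actual Open Science Framework data model.

    # Hypothetical record linking the parts of one open research project.
    # All identifiers below are made up; real systems (such as OSF) assign their own.
    project = {
        "title": "Example: gender composition of newspaper coverage",
        "hypothesis": "https://example.org/hypothesis-writeup",  # one public justification
        "analyses": [
            {"label": "cross-sectional, bivariate", "preprint": "https://example.org/analysis-1"},
            {"label": "longitudinal, multivariate", "preprint": "https://example.org/analysis-2"},
        ],
        "data": "https://example.org/replication-data",
        "code": "https://example.org/replication-code",
        "reviews": ["https://example.org/open-review-1"],  # open post-publication reviews
    }

    # A reader or reviewer can see the whole series and what differs in each test:
    for analysis in project["analyses"]:
        print(analysis["label"], "->", analysis["preprint"])

The point is not the particular format; it is that every piece is public and the links among them are explicit, so “many studies” using the same data cannot masquerade as independent findings.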

There are challenges to building this new system, of course, but any assessment of those challenges needs to be clear-eyed about the ridiculousness of the system we’re working under now.

Previous related posts have covered very similar publications, the opposition to open access, journal self-citation practices, and one publication’s saga.

For (not against) a better publishing model

I was unhappy to see this piece on the American Sociological Association (ASA) blog by Karen Edwards, the director of publications and membership.

The post is about Sci-Hub, the international knowledge-stealing ring that allows anyone to download virtually any paywalled academic paper for free. (I wrote about it, with description of how it’s used, here.) Without naming me or linking to the post, Edwards takes issue with pieces like mine. She writes:

ASA, other scholarly societies, and our publishing partners have been dismayed by some of the published comments about Sci-Hub that present its theft as a kind of “Robin Hood” fairy tale by characterizing the “victims” as greedy publishers feasting on the profits of expensive individual article downloads by needy researchers.

My first objection is, “ASA … have been dismayed.” There have been many debates about who speaks for ASA, especially when the association took positions on legal issues (their amicus briefs are here). And I’m sure the ASA executives send out letters all the time saying, ASA thinks this or that. But when it comes to policy issues like the one in this post (and when I don’t agree), I think it’s wrong to speak for the association without some actual process involving the membership. The more extreme case, on this same issue, was when the executive officer, Sally Hillsman, sent this letter to the White House Office of Science and Technology Policy objecting to the federal government’s move toward open access — which most of us only found out about because Fabio Rojas posted it on OrgTheory.

My second objection is to the position taken. In Edwards’ view, the existence of Sci-Hub, “threatens the well-being of ASA and our sister associations as well as the peer assessment of scholarship in sociology and other academic disciplines.”

Because, in her opinion, without paywalls — and Sci-Hub presumably threatens to end paywalls entirely — the system of peer-reviewed scholarly output would literally die. As I pointed out in my original piece, if your entire enterprise can be brought down by the insertion of 11 characters into a URL, your system may in fact not be sustainable. Rather than attack Sci-Hub and its users, “ASA” might ask why its vendor is so unable to prevent the complete demolition of its business model by a few keystrokes. But they don’t. Which leads me to the next point.

The Edwards post goes way beyond the untrue claim that there is no other way to support a peer review system, and argues that ASA needs all that paywall money to pay for all the other stuff it does. That is, not only do we need to sell papers to pay for our journal operations (and Sage profits), we also need paywalls because:

ASA is a nonprofit, so whatever revenue we receive from our journals, beyond what it costs us to do the editorial and publications work, goes directly into providing professional and educational services to our members and other scholars in our discipline (whether they are members or not). … The revenue allows ASA to provide sociologists in the field competitive research grants, pre-doctoral scholarships, specialized career development, and new digital teaching resources among many other services. It is what allows us to work effectively with other social science associations to sustain and, hopefully, grow the flow of federal research dollars to the social sciences through NSF, NIH, and many others and to defend against elimination and cuts to federal support (e.g., statistical systems and ongoing surveys) so scholars can conduct research and then publish outstanding scholarship.

In other words, as David Mamet’s character Mickey Bergman once put it, “Everybody needs money. That’s why they call it money.”

This means that finding the best model for getting sociological research to the most people with the least barriers is not as important as all the other stuff ASA does — even if the research is publicly funded. I don’t agree.

Better models

There are better ways. Contrary to popular misconceptions, we do not need to go to a system where individual researchers pay to publish their work, widening status inequalities among researchers. The basic design of the system to come is this: we cut out the for-profit publishers, and ask the universities and federal agencies that currently pay for research twice — once for the researchers, and once again for their published output — to agree to pay less in exchange for all of it being open access. Instead, they pay into a central organization that administers publication funds to scholarly associations, which produce open-access research output. For a detailed proposal, read this white paper from K|N Consultants, “A Scalable and Sustainable Approach to Open Access Publishing and Archiving for Humanities and Social Sciences.”

This should be easy — more access, accountability, and efficiency, for less — but it’s a difficult political problem, made all the more difficult by the dedicated efforts of those whose interests are threatened by the possibility of slicing out the profit (and other surplus) portions of the current paywall system. The math is there, but the will and the organizational efforts are lagging badly, especially in the leadership of ASA.

How to steal 80 million paywalled papers

From Flickr CC / https://flic.kr/p/9KU6T1

2018 note: Title updated to reflect estimates from early 2017

I’m not a criminal mastermind, so I could be wrong, but I can’t think of a way to steal more — defined by list price — per unit of effort than using sci-hub to access paywalled papers I don’t have legitimate subscription access to read.

I don’t know the scientific term for this, but there has to be some way to describe the brokenness of a system based on the ratio of effort expended to damage done. For example, there are systems where even large effort is unlikely to cause serious harm (the US nuclear weapons system), versus those where minor efforts succeed but cause acceptable levels of harm (retail shoplifting). And then there is intellectual property, where small investments can inflict billions of dollars worth of damage.

Syed Ali and I have a short piece with some links to the news on sci-hub at Contexts. Alexandra Elbakyan says it took her about three days to develop the system that now gives anyone in the world access to tens of millions of paywalled articles (almost all paywalled articles). Of course, a lot of people help a little, by providing her with access information from their subscribing universities, but it still seems like a very low ratio of criminal energy to face-value payoff.

Example

2018 note: The example of how to use Sci-Hub that I originally had in this post is obsolete now, so I removed it. To make Sci-Hub work now you just have to find a mirror that isn’t blocked by your Internet service provider, such as sci-hub.tw for me, and paste the URL or DOI of the article you want them to steal for you into the search box.

Don’t ask me how it really works, but basically it checks if the article has been requested before — in which case it’s cached somewhere — and if it hasn’t been requested before it uses fake login information to go get it, and then it stores the copy somewhere for faster retrieval for the next person. That’s why your stolen PDF may have a little tag at the bottom that says something like “Downloaded from journal.publisher.com at Unfamous University on Recent Date.” If the article comes up instantly, you didn’t really steal it, you’re just looking at a stolen copy; if you have to watch the little thing spin first then it’s being stolen for you. With this incredibly smart design the system grows by itself, according to demand from the criminal reading public.

What’s the punishment?

I have no idea what risk Alexandra Elbakyan or her compatriots face for their work. I don’t imagine the penalty for any given user is greater than the penalty for shoplifting a $39.95 bottle of Awesome Wasteproduct. And for me sharing this, I would expect the worst thing that would happen would be a stern letter on legal letterhead. But maybe I’m naive.

Anyway, the point is, it says something about the soundness of the academic publishing edifice that doing this much damage to it is this easy.

What are the ethics?

I am aware that some reasonable people think sci-hub is very wrong, while others think the current system is very wrong. I know that many people’s current paychecks depend on this system continuing to malfunction as it does, while others never earn the higher incomes they otherwise could because they can’t get paywalled articles. I understand corporate journals add some value through their investments. And I know that the current system denies many people access to a lot of information, with social costs that are unquantifiable. And there is some inherent value to not breaking the law just in general, while there is also value to breaking bad laws symbolically. How you balance all those factors is up to you.

Some people think it’s even wrong to discuss this. What does that tell you?

Basic self promotion


If you don’t care enough to promote your research, how can you expect others to?

These are some basic thoughts for academics promoting their research. You don’t have to be a full-time self-promoter to improve your reach and impact, but the options are daunting and I often hear people say they don’t have time to do things like run a Twitter account or write blogs. Even a relatively small effort, if well directed, can help a lot. Don’t let the perfect be the enemy of the good. It’s fine to do some things pretty well even if you can’t do everything to your ideal standard.

It’s all about making your research better — better quality, better impact. You want more people to read and appreciate your work, not just because you want fame and fortune, but because that’s what the work is for. I welcome your comments and suggestions below. 

Present yourself

Make a decent personal website and keep it up to date with information about your research, including links to accessible copies of your publications (see below). It doesn’t have to be fancy (I have a vested interest in keeping standards low in that department). I’m often surprised at how many people are sitting behind years-old websites.

Very often people who come across your research somewhere else will want to know more about you before they share, report on, or even cite it. Your website gives your work more credibility. Has this person published other work in this area? Taught related courses? Gotten grants? These are things people look for. It’s not vain or obnoxious to present this information, it’s your job. I recommend a good quality photo (others disagree).

Make your work available

Let people read the actual research. Publishing in open-access journals is ideal, because it’s the right thing to do and more people can read it. (My recent article in Sociological Science was downloaded several hundred times within 10 days, which is much more than I would expect from a paywalled journal.)

Whether or not you do that, share your working paper or preprint versions. This is best done in your university repository (ask your library) or a public disciplinary archive. (For prominent examples, check out the University of California’s eScholarship or Harvard’s DASH; I use the working paper site of the Maryland Population Research Center.) If you put the papers only on your own university website, they will show up in web searches (including Google Scholar), but they won’t be properly tagged and indexed for things like citation or grant analysis, or archived — so it’s better to deposit them in a repository and just put links on your website. But don’t just link to the paywalled version; that’s the click of death for someone just browsing around.

Don’t be intimidated by copyright. You can almost always put up a preprint without violating any agreement (ideally you wouldn’t publish anywhere that makes you take it down afterwards), and even if you have to take it down eventually you get months or years to share it first. No one will sue you or fire you — the worst outcome is being asked to take it down, which is very rare. Don’t prioritize protecting the journal’s proprietary right to promotion over serving the public (and your career) by getting the research out there, as soon as it’s ready. To see the policies of different journals regarding self-archiving, check out the simple database at SHERPA/RoMEO.

I oppose private sites like Academia.edu and ResearchGate. These are just private companies doing what your university and its library are already doing for the public. Your paper will not be discovered more if it is on one of these sites. It will show up in a Google search if you put it on your website or, better, in a public repository.

I’m not an open access purist, at least for sociology. If you got public money to develop a cure for cancer, that’s different. For us, not everything has to be open access (books, for example), but the more it is the better, especially original research. Anyway, it would be great if sociology got more into open science (for example, with the Open Science Framework). People for whom code is big already use sites like GitHub for sharing, which is beyond me; in your neck of the woods that can be great for getting your work out, too.

Share your work

In the old days we used to order paper reprints of papers we published and literally mail them to the famous and important people we hoped would read and cite them. Nowadays you can email them a PDF. Sending a short note that says, “I thought you might be interested in this paper I wrote” is normal, reasonable, and may be considered flattering. (As long as you don’t follow up with repeated emails asking if they’ve read it yet.)

Social media

I recommend at least basic social media, Twitter and Facebook. This does not require a massive time commitment — you can always ignore them. Setting up a public profile on Twitter or a page on Facebook gives people who do use them all the time a way to link to you and share your profile. If someone wants to show their friends one of my papers on Twitter, this doesn’t require any effort on my part. They tweet, “Look at this awesome new paper @familyunequal wrote!” When people click on the link they go to my profile, which tells them who I am and links to my website. I do not have to spend time on Twitter for this to work. (I chose @familyunequal because familyinequality was too long and I didn’t want to use my name because I was determined not to use Twitter for personal stuff. I think something as close to your name as possible is ideal, but don’t skip this just because you can’t think of the perfect handle.)

Of course, an active social media presence does help draw people into your work. But even low-level attention will help: posting or tweeting links to new papers, conference presentations, other writing, etc. No need to get into snarky chitchat and following hundreds of people if you don’t want to.

To see how others are using Twitter, you can visit the list I maintain, which has more than 600 sociologists. This is useful for comparing profile and feed styles.

Other writing

People who write popular books go on book tours to promote them. People who write minor articles in sociology might send out some tweets, or share them with their friends on Facebook. In between are lots of other places you can write something to help people find and learn about your work. I recommend blogging, but that can be done different ways.

As with publications themselves, there are public and private options, and I’m not a purist. (Some of my blog posts at the Atlantic, for which I used to get paid a little, were literally sponsored by Exxon, which I didn’t notice at first because I only looked at the site with my ad-blocker on.) But again public usually works better in addition to feeling better.

There are some good organizations now that help people get their work out. In my area, for example, the Council on Contemporary Families is great (I’m on their board), producing research briefs related to new publications and helping to bring them to the attention of journalists and editors. Others work with the Scholars Strategy Network, which helps people place op-eds, among other things. The great non-profit site The Society Pages includes lots of avenues for writing about your research. In addition, there are blogs run by sections of the American Sociological Association (like Work in Progress, from the Organizations, Occupations, and Work section) or other professional associations, and various group blogs.

And there is Contexts (of which I’m co-editor), the general interest magazine of ASA, where we would love to hear proposals for how you can bring your research out into the open (for the magazine or our blog).

Comment on Goffman’s survey, American Sociological Review rejection edition

Peer Review, by Gary Night. https://flic.kr/p/c2WH2E

Background:

  • I reviewed Alice Goffman’s book, On The Run.
  • I complained that her dissertation was not made public, despite being awarded the American Sociological Association’s dissertation prize. I proposed a rule change for the association, requiring that the winning dissertation be “publicly available through a suitable academic repository by the time of the ASA meeting at which the award is granted.” (The rule change is moving through the process.)
  • When her dissertation was released, I complained about the rationale for the delay.
  • My critique of the survey that was part of her research grew into a formal comment (PDF) submitted to American Sociological Review.

In this post I don’t have anything to add about Alice Goffman’s work. This is about what we can learn from this and other incidents to improve our social science and its contribution to the wider social discourse. As Goffman’s TED Talk passed 1 million views, we have had good conversations about replicability and transparency in research, and about ethics in ethnography. And of course about the impact of the criminal justice system and over-policing on African Americans, which is the intended focus of her work. This post is about how we deal with errors in our scholarly publishing.

My comment was rejected by the American Sociological Review.

You might not realize this, but unlike many scientific journals, ASR has no normal way of acknowledging or correcting errors in research, apart from “errata” notices, which are for typos and editing errors. To my knowledge ASR has never retracted an article or published an editor’s note explaining how an article, or part of an article, is wrong. Instead, they publish Comments (and Replies). Comments are submitted and reviewed anonymously by peer reviewers just like an article, and if the Comment is accepted the original author responds (maybe followed by a rejoinder). It’s a cumbersome and often combative process, one that frequently mixes theoretical with methodological critiques. And it creates a very high hurdle to clear, and a long delay, before the journal can correct itself.

In this post I’ll briefly summarize my comment, then post the ASR editors’ decision letter and reviews.

Comment: Survey and ethnography

I wrote the comment about Goffman’s 2009 ASR article for the sake of accountability. The article turned out to be the first step toward a major book, so ASR played a gatekeeping role for a much wider reading audience, which is great. But then the journal should also take responsibility for notifying readers about errors in its pages.

My critique boiled down to these points:

  • The article describes the survey as including all households in the neighborhood, which is not the case, and uses statistics from the survey to describe the neighborhood (its racial composition and rates of government assistance), which is not justified.
  • The survey includes some number (probably a lot) of men who did not live in the neighborhood, but who were described as “in residence” in the article, despite being “absent because they were in the military, at job training programs (like JobCorp), or away in jail, prison, drug rehab centers, or halfway houses.” There is no information about how or whether such men were contacted, or how the information about them was obtained (or how many in her sample were not actually “in residence”).
  • The survey results are incongruous with the description of the neighborhood in the text, and — when compared with data from other sources — describe an apparently anomalous social setting (see the arithmetic sketch after this list). She reported finding more than twice as many men (ages 18-30) per household as the Census Bureau reports from its American Community Survey of Black neighborhoods in Philadelphia (1.42 versus 0.60 per household). She reported that 39% of these men had warrants for violating probation or parole in the prior three years. Using some numbers from other sources on violation rates, that translates into between 65% and 79% of the young men in the neighborhood being on probation or parole — very high for a neighborhood described as “nice and quiet” and not “particularly dangerous or crime-ridden.”
  • None of this can be thoroughly evaluated because the reporting of the data and methodology for the survey were inadequate to replicate or even understand what was reported.
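
For readers who want to see how the numbers in the third point fit together, here is a rough arithmetic sketch (in Python, for illustration only). The 39% warrant figure and the 1.42 men-per-household figure are from the article, and 0.60 is the ACS comparison described above; the three-year violation rates are hypothetical stand-ins for the “numbers from other sources,” chosen only to show how a range of roughly 65% to 80% follows; the 308-men and 217-household counts come up in the reviews reproduced below.

    # Rough arithmetic behind the probation/parole estimate described above.
    # The 39% figure is from the article; the violation rates are hypothetical
    # stand-ins for the outside sources mentioned in the text.
    warrant_share = 0.39  # share of surveyed men with probation/parole violation warrants

    # Assumed share of people on probation or parole who pick up a violation
    # warrant within three years (illustrative bounds only).
    violation_rate_high = 0.60
    violation_rate_low = 0.49

    # If only people on probation or parole can receive violation warrants,
    # the share on probation or parole is at least warrant_share / violation_rate.
    lower = warrant_share / violation_rate_high  # about 0.65
    upper = warrant_share / violation_rate_low   # about 0.80
    print(f"Implied share on probation or parole: {lower:.0%} to {upper:.0%}")

    # Men per household: 308 men across 217 households reproduces the 1.42 figure.
    print(f"Men per household in the survey: {308 / 217:.2f} (ACS comparison: 0.60)")

None of this settles who was actually surveyed; it just shows why the reported numbers look anomalous next to other data sources.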

You can read my comment here in PDF. Since I aired it out on this blog before submitting it, making it about as anonymous as a lot of other peer-review submissions, I see no reason to shroud the process any further. The editors’ letter I received is signed by the current editors — Omar Lizardo, Rory McVeigh, and Sarah Mustillo — although I submitted the piece before they officially took over (the editors at the time of my submission were Larry W. Isaac and Holly J. McCammon). The reviewers are of course anonymous. My final comment is at the end.

ASR letter and reviews

Editors’ letter:

25-Aug-2015

Dear Prof. Cohen:

The reviews are in on your manuscript, “Survey and ethnography: Comment on Goffman’s ‘On the Run’.” After careful reading and consideration, we have decided not to accept your manuscript for publication in American Sociological Review (ASR).  Our decision is based on the reviewers’ comments, our reading of the manuscript, an overall assessment of the significance of the contribution of the manuscript to sociological knowledge, and an estimate of the likelihood of a successful revision.

As you will see, there was a range of opinions among the reviewers of your submission.  Reviewer 1 feels strongly that the comment should not be published, reviewer 3 feels strongly that it should be published, and reviewer 2 falls in between.  That reviewer sees merit in the criticisms but also suggests that the author’s arguments seem overstated in places and stray at times from discussion that is directly relevant to a critique of the original article’s alleged shortcomings.

As editors of the journal, we feel it is essential that we focus on the comment’s critique of the original ASR article (which was published in 2009), rather than the recently published book or controversy and debate that is not directly related to the submitted comment.  We must consider not only the merits of the arguments and evidence in the submitted comment, but also whether the comment is important enough to occupy space that could otherwise be used for publishing new research.  With these factors in mind, we feel that the main result that would come from publishing the comment would be that valuable space in the journal would be devoted to making a point that Goffman has already acknowledged elsewhere (that she did not employ probability sampling).

As the author of the comment acknowledges, there is actually very little discussion of, or use of, the survey data in Goffman’s article.   We feel that the crux of the argument (about the survey) rests on a single sentence found on page 342 of the original article:  “The five blocks known as 6th street are 93 percent Black, according to a survey of residents that Chuck and I conducted in 2007.”  The comment author is interpreting that to mean that Goffman is claiming she conducted scientific probability sampling (with all households in the defined space as the sampling frame).  It is important to note here that Goffman does not actually make that claim in the article.  It is something that some readers might infer.  But we are quite sure that many other readers simply assumed that this is based on nonprobability sampling or convenience sampling.  Goffman speaks of it as a survey she conducted when she was an undergraduate student with one of the young men from the neighborhood.  Given that description of the survey, we expect many readers assumed it was a convenience sample rather than a well-designed probability sample.  Would it have been better if Goffman had made that more explicit in the original article?  Yes.

In hindsight, it seems safe to say that most scholars (probably including Goffman) would say that the brief mentions of the survey data should have been excluded from the article.  In part, this is because the reported survey findings play such a minor role in the contribution that the paper aims to make.

We truly appreciate the opportunity to review your manuscript, and hope that you will continue to think of ASR for your future research.

Sincerely,

Omar Lizardo, Rory McVeigh, and Sarah Mustillo

Editors, American Sociological Review

Reviewer: 1

This paper seeks to provide a critique of the survey data employed in Goffman (2009).  Drawing on evidence from the American Community Survey, the author argues that data presented in Goffman (2009) about the community in which she conducted her ethnography is suspect.  The author draws attention to remarkably high numbers of men living in households (compared with estimates derived from ACS data) and what s/he calls an “extremely high number” of outstanding warrants reported by Goffman.  S/he raises the concern that Goffman (2009) did not provide readers with enough information about the survey and its methodology for them to independently evaluate its merits and thus, ultimately, calls into question the generalizability of Goffman’s survey results.

This paper joins a chorus of critiques of Goffman’s (2009) research and subsequent book.  This critique is novel in that the critique is focused on the survey aspect of the research rather than on Goffman’s persona or an expressed disbelief of or distaste for her research findings (although that could certainly be an implication of this critique).

I will not comment on the reliability, validity or generalizability of Goffman’s (2009) evidence, but I believe this paper is fundamentally flawed.  There are two key problems with this paper.  First the core argument of the paper (critique) is inadequately situated in relation to previous research and theory.  Second, the argument is insufficiently supported by empirical evidence.

The framing of the paper is not aligned with the core empirical aims of the paper.  I’m not exactly sure what to recommend here because it seems as if this is written for a more general audience and not a sociological one.  It strikes me as unusual, if not odd, to reference the popularity of a paper as a motivation for its critique.  Whether or not Goffman’s work is widely cited in sociological or other circles is irrelevant for this or any other critique of the work.  All social science research should be held to the same standards and each piece of scholarship should be evaluated on its own merits.

I would recommend that the author better align the framing of the paper with its empirical punchline.  In my reading the core criticism of this paper is that the Goffman (2009) has not provided sufficient information for someone to replicate or validate her results using existing survey data.  Although it may be less flashy, it seems more appropriate to frame the paper around how to evaluate social science research.  I’d advise the author to tone down the moralizing and discussion of ethics.  If one is to levy such a strong (and strongly worded) critique, one needs to root it firmly in established methods of social science.

That leads to the second, and perhaps even more fundamental, flaw.  If one is to levy such a strong (and strongly worded) critique, one needs to provide adequate empirical evidence to substantiate her/his claims.  Existing survey data from the ACS are not designed to address the kinds of questions Goffman engages in the paper and thus it is not appropriate for evaluating the reliability or validity of her survey research.  Numerous studies have established that large scale surveys like the ACS under-enumerate black men living in cities.  They fall into the “hard-to-reach” population that evade survey takers and census enumerators.  Survey researchers widely acknowledge this problem and Goffman’s research, rather than resolving the issue, raises important questions about the extent to which the criminal justice system may contribute to difficulties for conventional social science research data collection methods.  Perhaps the author can adopt a different, more scholarly, less authoritative, approach and turn the inconsistencies between her/his findings with the ACS and Goffman’s survey findings into a puzzle.  How can these two surveys generate such inconsistent findings?

Just like any survey, the ACS has many strengths.  But, the ACS is not well-suited to construct small area estimates of hard-to-reach populations.  The author’s attempt to do so is laudable but the simplicity of her/his analysis trivializes the difficultly in reaching some of the most disadvantaged segments of the population in conventional survey research.  It also trivializes one of the key insights of Goffman’s work and one that has been established previously and replicated by others: criminal justice contact fundamentally upends social relationships and living arrangements.

Furthermore, the ACS doesn’t ask any questions about criminal justice contact in a way that can help establish the validity of results for disadvantaged segments of the population who are most at-risk of criminal justice contact.  It is impossible to determine using the ACS how many men (or women) in the United States, Pennsylvania, or Philadelphia (or any neighborhood therein), have an outstanding warrant.  The ACS doesn’t ask about criminal justice contact, it doesn’t ask about outstanding warrants, and it isn’t designed to tap into the transient experiences of many people who have had criminal justice contact.  The author provides no data to evaluate the validity of Goffman’s claims about outstanding warrants.  Advancements in social science cannot be established from a “she said”, “he said” debate (e.g., FN 9-10).  That kind of argument risks a kind of intellectual policing that is antithetical to established standards of evaluating social science research.  That being said, someone should collect this evidence or at a minimum estimate, using indirect estimation methods, what fraction of different socio-demographic groups have outstanding warrants.

Although I believe that this paper is fundamentally flawed both in its framing and provision of evidence, I would like to encourage the author to replicate Goffman’s research.  That could involve an extended ethnography in a disadvantaged neighborhood in Philadelphia or another similar city.  That could also involve conducting a small area survey of a disadvantaged, predominantly black, neighborhood in a city with similar criminal justice policies and practices as Philadelphia in the period of Goffman’s study.  This kind of research is painstaking, time consuming, and sorely needed exactly because surveys like the ACS don’t – and can’t – adequately describe or explain social life among the most disadvantaged who are most likely to be missing from such surveys.

Reviewer: 2

I read this manuscript several times. It is more than a comment, it seems. It is 1) a critique of the description of survey methods in GASR and 2) a request for some action from ASR “to acknowledge errors when they occur.” The errors here have to do with Goffman’s description of survey methods in GASR, which the author describes in detail. This dual focus read as distracting at times. The manuscript would benefit from a more squarely focused critique of the description of survey methods in GASR.

Still, the author’s comment raises some valid concerns. The author’s primary concern is that the survey Goffman references in her 2009 ASR article is not described in enough detail to assess its accuracy or usefulness to a community of scholars. The author argues that some clarification is needed to properly understand the claims made in the book regarding the prevalence of men “on the run” and the degree to which the experience of the small group of men followed closely by Goffman is representative of most poor, Black men in segregated inner city communities. The author also cites a recent publication in which Goffman claims that the description provided in ASR is erroneous. If this is the case, it seems prudent for ASR to not only consider the author’s comments, but also to provide Goffman with an opportunity to correct the record.

I am not an expert in survey methods, but there are moments where the author’s interpretation of Goffman’s description seems overstated, which weakens the critique. For example, the author claims that Goffman is arguing that the entirety of the experience of the 6th Street crew is representative of the entire neighborhood, which is not necessarily what I gather from a close reading of GASR (although it may certainly be what has been taken up in popular discourse on the book). While there is overlap of the experience of being “on the run,” namely, your life is constrained in ways that it isn’t for those not on the run, it does appear that Goffman also uses the survey to describe a population that is distinct in important ways from the young men she followed on 6th street. The latter group has been “charged for more serious offenses like drugs and violent crimes,” she writes (this is the group that Sharkey argues might need to be “on the run”), while the larger group of men, whose information was gathered using survey data, were typically dealing with “more minor infractions”: “In the 6th Street neighborhood, a person was occasionally ‘on the run’ because he was a suspect in a shooting or robbery, but most people around 6th street had warrants out for far more minor infractions [emphasis mine].”

So, as I read it (I’ve also read the book), there are two groups: one “on the run” as a consequence of serious offenses and others “on the run” as a consequence of minor infractions. The consequence of being “on the run” is similar, even if the reason one is “on the run” varies.

The questions that remain are questions of prevalence and generalizability. The author asks: How many men in the neighborhood are “on the run” (for any reason)? How similar is this neighborhood to other neighborhoods? Answers to this question do rely on an accurate description of survey methods and data, as the author suggests.

This leads us to the most pressing and clearly argued question from the author: What is the survey population? Is it 1) “people around 6th Street” who also reside in the 6th Street neighborhood (of which, based on Goffman’s definition of in residence, are distributed across 217 distinct households in the neighborhood, however the neighborhood is defined e.g., 5 blocks or 6 blocks) or 2) the entirety of the neighborhood, which is made up of 217 households. It appears from the explanation from Goffman cited by the author that it is the former (“of the 217 households we interviewed,” which should probably read, of the 308 men we interviewed, all of whom reside in the neighborhood (based on Goffman’s definition of residence), 144 had a warrant…). Either way, the author makes a strong case for the need for clarification of this point.

The author goes on to explain the consequences of not accurately distinguishing among the two possibilities described above (or some other), but it seems like a good first step would be to request a clarification (the author could do this directly) and to allow more space than is allowed in a newspaper article to provide the type of explanation that could address the concerns of the author.

Is this the purpose of the comment? Or is the purpose of the comment merely to place a critique on record?  The primary objective is not entirely clear in the present manuscript.

The author’s comment is strong enough to encourage ASR to think through possibilities for correcting the record. As a critique of the survey methods, the comment would benefit from more focus. The comment could also do a better job of contextualizing or comparing/contrasting the use of survey methods in GASR with other ethnographic studies that incorporate survey methods (at the moment such references appear in footnotes).

Reviewer: 3

This comment exposes major errors in the survey methodology for Goffman’s article.  One major flaw is that the goffman article describes the survey as inclusive of all households in the neighborhood but later, in press interviews, discloses that it is not representative of all households in the neighborhood.  Another flaw that the author exposes is goffman’s data and methodological reporting not being up to par to sociological standards.  Finally, the author argues that the data from the survey does not match the ethnographic data.

Overall, I agree with the authors assertions that the survey component is flawed.  This is an important point because the article claims a large component of its substance from the survey instrument.  The survey helped goffman to bolster generalizability , and arguably, garner worthiness of publication in ASR.  If the massive errors in the survey had been exposed early on it is possible that ASR might have held back on publishing this article.

I am in agreement that ASR should correct the error highlighted on page 4 that the data set is not of the entire neighborhood but of random households/individuals given the survey in an informal way and that the sampling strategy should be described.  Goffman should aknowledge that this was a non-representative convenience sample, used for bolstering field observations.  It would follow then that the survey component of the ASR article would have to be rendered invalid and that only the field data in the article should be taken at face value.  Goffman should also be asked to provide a commentary on her survey methodology.

The author points out some compelling anomalies from the goffman survey and general social survey data and other representative data.  At best, goffman made serious mistakes with the survey and needs to be asked to show those mistakes and her survey methodology or she made up some of the data in the survey and appropriate action must be taken by ASR.  I agree with the authors final assessment, that the survey results be disregarded and the article be republished without mention of such results or with mention of the results albeit showing all of its errors and demonstrating the survey methodology.

My response

Regular readers can probably imagine my long, overblown, hyperventilating response to Reviewer 1, so I’ll just leave that to your imagination. Bottom line: I disagree with the editors’ decision, but I can’t really blame them. Would it really be worth some number of pages in the journal, plus a reply and rejoinder, to hash this out? Within the constraints of the ASR format, maybe the pages aren’t worth it. And the result would not have been a definitive statement anyway, but rather just another debate among sociologists.

What else could they have done? Maybe it would have been better if the editors could simply append a note to the article advising readers that the survey is not accurately described, and cautioning against interpreting it as representative — with a link to the comment online somewhere explaining the problem. (Even then, of course, Goffman should have a chance to respond, and so on.)

It’s just wrong that the editors now acknowledge there is something wrong in their journal — although we seem to disagree about how serious the problem is — yet no one is going to formally notify future readers of the article. That seems like bad scholarly communication. I’ve said from the beginning that there’s no need for a high-volume conversation about this, or for attacks on anyone’s integrity or motives. There are important things in this research, and it’s also highly flawed. Acknowledge the errors — so they don’t compound — and move on.

This incident can help us learn lessons with implications up and down the publishing system. Here are a couple. At the level of social science research reporting: don’t publish survey results without sufficient methodological documentation — let’s have the instrument and protocol, the code, and access to the data. At the system level of publishing: why do we still have journals with cost-defined page limits? Because for-profit publishing is more important than scholarly communication. The sooner we get out from under that 19th-century habit, the better.
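
To make the first of those points concrete, here is a minimal, purely hypothetical sketch of what sufficient documentation for a survey like this might include. None of these files exist; the names and layout are mine, offered only to illustrate the kind of deposit that would let readers check reported figures directly instead of reconstructing them from other sources.

    survey-replication/
        README.txt         - what the survey was; when, where, and by whom it was fielded
        instrument.pdf     - the questionnaire as administered
        protocol.txt       - sampling frame, who was approached, response rate, field procedures
        code/
            clean.py       - recodes the raw responses into the analysis variables
            tables.py      - reproduces every number reported in the article
        data/
            responses.csv  - de-identified responses, or instructions for restricted-use access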