Eran Shor responds

On May 8 I wrote about three articles by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena. (My post is here; the articles, in chronological order, are available in full here, here, and here.)

Eran Shor, associate professor of sociology at McGill University and first author of the papers in question, has sent me the following response, which I agreed to post unedited. I have not heard from the other authors, and Shor does not claim to speak for them here. I’m not responding to this now, except to say that I stand by the original post. Feel free to make (moderated) comments below.

Eran Shor’s response

We would like to thank Philip N. Cohen for posting this response to his blog, unedited.

Philip N. Cohen wrote a post in which he targets three of our recently published articles and claims that these are overlapping and misleading to readers. On the one hand, Cohen clarifies that he is “not judging Shor et al. for any particular violation of specific rules or norms” and “not judging the quality of the work overall.” However, in his conclusions he speaks about “overlapping papers”, “selling duplicative information as new” and “misleading readers”. We feel this terminology more than just hints at intentional wrongdoing. The first response to the blog outright accuses us of self-plagiarism, deceitfulness, and having questionable ethics, which we believe is directly the result of Cohen’s suggestive language.

Below, we explain why we feel that these accusations are unfair and mostly unsubstantiated. We also reflect on the debate over open science and on the practice of writing blog posts that make these kinds of accusations without first giving authors the chance to respond to them.

As for the three articles in question, we invite readers to read these for themselves and judge whether each makes a unique and original contribution. To quickly summarize these contributions as we see them:

  • The first article, published in Journalism Studies (2014), focuses on the historical development of women’s media representation, presenting new data that goes back to the 19th century and discussing the historical shifts and differences between various sections of the newspaper.
  • The second article, published in Social Science Quarterly (2014), begins to tackle possible explanations for the persistent gap in representation, focusing specifically on the question of political partisanship in the media and its relationship with gendered coverage patterns. We use two separate measures of newspapers’ political slant and conduct bivariate analyses that examine the association between partisanship and representation.
  • The third article was published in the American Sociological Review (2015). In it, we conducted a wide-scope examination of a large variety of possible explanations for the persistent gender gap in newspapers. We presented a large gamut of new and original data and analyses (both bivariate and multivariate), which examined explanations such as “real-world” inequalities, newsroom composition, and various other factors related to both the newspapers themselves and the cities and states in which they are located.

Of note, these three articles are the result of more than seven years of intensive data collection from a wide variety of sources and multiple analyses, leading to novel contributions to the literature. We felt (and still do) that these various contributions could not have been clearly fleshed out in one article, not even a longer article, such as the ones published by the American Sociological Review.

Now for the blog: Did SSQ really “scoop” ASR?

First, where we agree with Cohen’s critique: the need to indicate clearly when one is presenting a piece of data or a figure that already appeared in another paper. Here, we must concede that we have failed, although certainly not intentionally. The reason we dropped the ball on this one is the well-established need to conceal one’s identity as long as a paper is under review, in order to maintain the standard of double-blind review. Clearly, we should have been more careful in checking the final copies of the SSQ and ASR articles and added a clarification stating that the figure already appeared in an earlier paper. Alternatively, we could have dropped the figure from the paper and simply referred readers to the JS paper, as this figure was not an essential component of either of the latter two papers. As for the issue of the missing year in the ASR paper, this was simply a matter of re-examining the data and noting that the data for 1982 were not strong enough, as they relied on too few data points and a smaller sample of newspapers, and therefore were not equivalent to data from the following years. We agree, however, that we should have clarified this in the paper.

That said, as Cohen also notes in his blog, in each of the two latter papers (SSQ and ASR), the figure in question is really just a minor descriptive element, the starting point for a much larger—and different in each article—set of data and analyses. In both cases, we present the figure at the beginning of the paper in order to motivate the research questions and subsequent analyses, and we do not claim that this is one of our novel findings or contributions. The reason we reproduced this figure is that reviewers of previous versions of the paper asked us to demonstrate the persistent gap between women and men in the news. The figure serves as a parsimonious and relatively elegant visual way to do that, but we also presented data in the ASR paper from a much larger set of newspapers that establishes this point. Blind-review norms prevented us from referring readers to our own work in a clear way (that is, going beyond simply including a citation to the work). Still, as noted above, we take full responsibility for not making sure to add a clearer reference to the final version. But we would like to emphasize that there was no intentional deceit here, but rather a simple act of omission. This should be clear from the fact that we had nothing to gain from not referring to our previous work, and from the fact that these previous findings motivated the new analyses.

Duplicated analyses?

As for the other major charge against us in the blog, it is summarized in the following paragraph:

It looks to me like the SSQ and ASR they used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they are using the same data, how they got from a “weak to moderate relationship” to “no significant relationship” seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which “some say this, some say that”? What kind of way is this to figure out what’s going on?

Here we feel that Cohen is presenting a false picture of what we did in the two papers. First, the SSQ paper actually used two measures of political slant. For both measures, we presented bivariate analyses of the relationship between slant and coverage (which indeed shows a weak to moderate relationship). It is important to stress here that this somewhat rudimentary analysis was simply a matter of data availability: The bivariate cross-sectional analysis was the best analysis we could perform at the time when the paper was accepted for publication (end of 2013), given the data that we had available. However, during the following two years, leading to the publication of the article in ASR at the end of 2015, we engaged in an extensive and time-consuming process of collecting and coding additional data. This effort allowed us to code longitudinal data on important characteristics of newspapers (e.g., identity of editors and publishers and various city and state-related characteristics) for a subset of the newspapers in our sample and for six consecutive years.

And so, as is often the case when moving from a cross-sectional bivariate analysis (SSQ) to a longitudinal multivariate one (ASR), the previous weak relationship that we found for slant basically disappeared, or rather, it became non-significant. In the ASR paper, we did refer readers to the results of our previous study, although without providing details about the analysis itself, because we did not wish to single out our own work in a paragraph that also briefly cites the results of other similar studies (Potter 1985; Adkins Covert and Wasburn 2007). Perhaps we should have clarified better in the final draft that we previously examined this relationship in a cross-sectional bivariate analysis. But this is a far cry from the allegation that we were reproducing the same analysis, or that we intentionally concealed evidence, both of which are simply false.

To be clear: While the SSQ paper presented a cross-sectional bivariate analysis, in the ASR paper we used a somewhat different sample of papers to perform a longitudinal multivariate analysis, in which newspaper slant was but one of many variables. These are important differences (leading to different results), which we believe any careful reader of the two papers can easily detect. Readers of the ASR article will also notice that the testing of the political slant question is not a major point in the paper, nor is it presented as such. In fact, this variable was originally included as merely a control variable, but reviewers asked that we flesh out the theoretical logic behind including it, which we did. We therefore feel that Cohen’s above comment (in parentheses)—“In addition to whatever else is new in the third paper”—is unfair to the point of being disingenuous. It ignores the true intent of the paper and its (many) unique contributions, for the purpose of more effectively scoring his main point.

As for the paragraph that Cohen cites in the blog, in which we use very similar language to theoretically justify the inclusion of the newspaper slant variable in the ASR analysis, we would like to clarify that this was simply the most straightforward way of conveying this theoretical outline. We make no pretense or implication whatsoever that this passage adds anything new or very important in its second iteration. And again, we do refer the readers to the previous work (including our own), which found conflicting evidence regarding this question in cross-sectional bivariate analyses. Knowledge is advanced by building on previous work (your own and others’) and adding to it, and this is exactly what we do here.

Misleading readers and selling duplicated information as new?

Given our clarifications above, we feel that the main charges against us are unjust. The three papers in question are by no means overlapping duplications (although the one particular descriptive figure is). In fact, none of the analyses in SSQ and ASR are overlapping, and each paper made a unique contribution at the time it was published. Furthermore, the charges that we are “selling duplicate information as new” and “misleading readers” clearly imply that we have been duplicitous and dishonest in this research effort. It is not surprising that such inflammatory language ended up inciting respondents to the blog to accuse us flatly of self-plagiarism, deceitfulness, and questionable ethics.

In response to such accusations, we once again wish to state very clearly that at no point did we intend to deceive readers or intentionally omit information about previous publications. While we admit to erring in not clearly mentioning in the caption of the figure that it was already reported in a previous study, this was an honest mistake, and one which did not and could not be to our benefit in any way whatsoever. An error of omission it was, but not a violation of any ethical norms.

Is open access the solution?

We would also like to comment briefly on the blog’s more general claim that the publication system is broken and that the solution lies in open access. We would actually like to express our firm support for Cohen’s general efforts to promote open science. We also agree with the need to carefully monitor and rethink our publication system, and with the call for open access to journals and a more transparent reviewing process. All of these would bring important benefits, and the conversation over them should continue.

However, we question the assumption that in this particular case an open access system would have solved the problem of not mentioning that our figure appeared in previous articles. As we note above, this omission was actually triggered by the blind review system and our attempts to avoid revealing our identity during the reviewing process (and later on to our failure to remember to add more direct references to our previous work in the final version of the article). But surely, most reviewers who work at academic institutions have access through their local libraries to a broad range of journals, including ones that are behind paywalls (and certainly to most mainstream journals). Our ASR paper was reviewed by nine different anonymous reviewers, as well as by the editorial board. It seems reasonable to assume that virtually all of them would have been able to access the previous papers published in mainstream journals. So the fact that our previous articles were published in journals with paywalls seems neither here nor there for the issues Cohen raises about our work.

A final word: On the practice of making personal accusations in a blog without first soliciting a response from the authors

The commonly accepted way to proceed in our field is that when one scholar wishes to criticize the published work of another, they write a comment to the journal that published the article. The journal then solicits a response from the authors being criticized, and a third party decides whether the exchange is worthy of publication. Readers then get the chance to read both the critique and the response at the same time and decide which point of view they find more convincing. It seems to us that this is the decent way of proceeding in cases where there are different points of view or disagreement over scholarly findings and interpretations. This is all the more true when the critique involves charges (or hints) of unethical behavior and academic dishonesty; in such cases, this norm of basic decency seems to us even more important. In our case, Professor Cohen did not bother to approach us, and did not ask us to respond to the accusations against us. In fact, we only learned about the post by happenstance, when a friend directed our attention to the blog. When we responded and asked Cohen to make some clarifications to the original posting, we were turned down, although Cohen kindly agreed to publish this comment on his blog, unedited, for which we are thankful.

Of course, online blogs are not expected to honor the norms of civilized scholarly debate to the letter. They are a different kind of forum. Clearly, they have their advantages in terms of both speed and accessibility, and they form an important part of the current academic discourse. But it seems to us that, especially in such cases where allegations of ethically questionable conduct are being made, the authors of blogs should adopt a more careful approach. After all, this is not merely a matter of academic disagreements; people’s careers and reputations are at stake. We would like to suggest that in such cases the authors of blogs should err on the side of caution and allow authors to defend themselves against accusations in advance and not after the fact, when much of the damage has already been done.

 

27 thoughts on “Eran Shor responds”

  1. This response–that the JS article showed trends, the SSQ article showed bivariate analyses, and the ASR article used multivariate analyses–is NOT a great rebuttal against the charge of salami slicing. Would we be better off if all quantitative sociologists doubled (or tripled) our pub count by doing this? I think not.

    My reading of Cohen is that in open science (combined with moving away from using publication counts as a measure of scholar quality), salami slicing becomes irrelevant. We can make improvements to our data and methods with transparency and, frankly, more efficiency for scholars, readers, and reviewers, which does not happen when we publish trends in one article, bivariate analyses in another, and multivariate analyses in yet a third article.

    Liked by 2 people

    1. It seems that you did not read Shor’s reply carefully. He clarifies that the bivariate analysis was simply the only analysis they could do at the time, because this was all the data they had. In the following two years they were able to collect additional data, leading to the ASR article. And the JS article simply asked a different question: what was the historical development of women’s media representation, rather than how we might explain the current stagnation.

      “Salami slicing” would suggest that the authors had the full set of data and analyses at hand, and then decided to cut it into three, which is clearly not the case.

      Like

        1. I believe this point is also explained well by Shor. They did report on the results of their previous study, only not in detail, in order not to reveal their identity to reviewers and not to single out their study, given that it was not the only one that addressed this question before.

          And it seems to me that in your post you actually accused them of replicating their analysis and presenting it as new (reporting the same findings). Reading the two articles it becomes clear that this is not what they did (you seem to acknowledge it here by writing that there’s nothing wrong with doing research incrementally). So when you write that you “stand by your original post”, do you mean every word? Or do you concede that at least some of what you wrote may not be accurate?

          Like

          1. I didn’t say they replicated the analysis. I said they tested the same hypothesis with the same data. Obviously the methods were different, but they gave no indication the two were part of the same project. If they had said, “we did it better this time,” or even — to preserve the facade of anonymity — “we did it better than Shor et al (2014),” that would have been fine. Someone should tell the reader that this replaces the previous analysis, which seems like the point if they’re saying the later analysis was better.

            Like

  2. Was Shor et al “salami slicing”, or as suggested by myra and olderwoman in the previous post, engaging in “intellectual dishonesty” or “self-plagiarism”? I don’t know if academic norms are clear on this.

    Are any of the three journals where these articles were published aware of this?

    Like

  3. Full Disclosure: I am friends with Eran Shor. I have had interactions with Neal Carren (all positive), and outside his research, I do not know Philip Cohen.

    I would first ask everyone to read all three articles closely before responding to the blog. In my readings, I do not see academic dishonesty – and especially not plagiarism.

    In comparing the three articles, I can see and understand the argument for open science, as most of our research (here I am referring to academic sociologists) is continuously evolving and developing as we move forward. We ask new questions. We collect new/more data. We see things differently. This is part of the research process. Most of our work follows this pattern, and these three papers certainly adhere to this theme.

    What is interesting about the responses to this blog is that most people are not engaging the issues of open science, but instead they are engaging academic dishonesty, which I do not believe was (or, perhaps, should not be) the goal of the post. As Philip stated, “I’m not judging Shor et al. for any particular violation of specific rules or norms.” Yet, because of the blog presentation of the three articles, responses to the blog are focusing on the “dishonesty” of Shor et al. I do not believe the aim of those responding to the blog should be to discredit Shor and his co-authors, as all of their articles DO add to our knowledge base, and all three studies DO differ (again, I ask you to read them). Rather, the goal should really be on the process by which we are judged according to publications – how often, where, and to what end. By doing this, we enter into a fruitful discussion about change and possibilities of change. This is what we should be focusing on.

    Let’s move away from discussions that malign academics, who have done nothing wrong except to add to our knowledge, and instead move toward engaging in much needed discussions about open science and existing publication standards.

    For example, I can say that at my state institution there are on-going discussions about open science/source publishing (creating one large database/data bank) to try and move beyond journals (due to their predatory nature), as well as making research available in real time. The issue that comes up time and time again is how do you restructure disciplines that are built on specific tenure and promotion expectations – whether that is based on getting multiple publications, or publications in specific journals, etc. Not to mention how to reshape this approach beyond just one specific institution, since most external tenure and promotion reviewers are primed to write letters based on publication records. The use of open science/source is an important and necessary discussion to have. Let’s restart our conversation here.

    Like

    1. I completely agree that’s where the conversation should go. I am hoping to make a contribution in that direction by establishing a free, open paper server, linked to the Open Science Framework — one that might eventually include a post-publication peer review mechanism. I even got a URL: socarxiv.org. Stay tuned!

      Like

    2. FYI, “full disclosure” would actually include your name.

      I don’t know why so many people refuse to have a conversation about sociology and the profession under their own names.

      Like

      1. It’s because we are afraid of what you will write about us on your blog, Phil. I think some people are scared about entering these discussions. We’ll see if this comment gets deleted.

        Like

  4. I’m befuddled by the notion that the descriptive results, which are clearly duplicated and do not just reflect a failure to attribute a figure (as if ASR would knowingly have published the same figure that had been published twice before, if only it were attributed), are somehow not “results” or “analysis.” The term analysis does not just apply to multivariate models or sections labeled “results.”

    This is the text from ASR, introducing the descriptive result as original:

    “Our own data reveal similar trends (for details, see Shor et al. 2014b, and the Data and Analyses sections of this article). Figure 1 presents the rate of female names in 13 major U.S. newspapers for which historical data are available from scanned content. Male names have historically received at least four times as much exposure as female names and this ratio is still nearly 3:1 by the end of our observation period.”

    Our publishing system is deeply broken, and one of the ways it’s broken is that it incentivizes not-good behavior.

    I’m sure there are lots of people who made totally innocent, unintentional errors that resulted in their papers *not* being published in the top journal in their field, but forgive my skepticism when we hear of those that do.

    Like

    1. Phil, I really wonder how you can say that the paragraph you quote above introduces the results as original, when its very first sentence clearly directs readers to the previous article that first introduced these findings. What is this if not a reference to the former article?

      You also “neglect” to mention that this appears on the third page of the article, motivating the real research questions, and that the next 16 pages introduce an original theoretical frame and a large gamut of new analyses, under the heading “analyses”. The only thing that is wrong here is the omission of a caption noting that the figure itself appeared in the previous article (and you are probably right that the figure should not have appeared at all, as it is not an important piece of evidence and the former findings could simply have been reported). Shor, by the way, agrees with this, but again it all strikes me as quite minor nitpicking.

      To me, this whole debate seems quite ridiculous. It looks like you started off with big claims about replicated analyses and intentional misleading of readers and are now just avoiding admitting that you were wrong and that this is not even a good example for the issue you wanted to advance – open science. For this issue, Shor’s suggestion that this would not solve the problem actually seems quite plausible.

      Beyond that, I’m still waiting for you to respond to the question that I asked in the other post, and here I’m in agreement with Concerned Sociologist that the tone of the post is becoming unprofessional: Do you still think that it’s a good idea to accuse colleagues of academic dishonesty without allowing them the chance to respond to the accusation? (according to Shor, they were not approached by you for a response and only learned about the post by chance).

      Finally, the idea that tenured professors are somehow a legitimate target for (ridiculous, in my view) accusations of academic dishonesty, just because they cannot be easily fired, seems outlandish. There is more to an academic career than getting tenure, and tarnishing people’s names is not okay regardless of their academic position, not to mention doing it without even soliciting their response. And not that it really matters, but it also seems like not all the individuals involved actually have tenure.

      Like

  5. I agree with you, Phil. I have concerns, however. You are becoming a leader in our field. A leader needs to facilitate solutions to these problems. The tone of your post is borderline unprofessional. Worse, it is setting a standard for junior scholars to trash each other online. I hope this kind of discourse does not descend into cyber-bullying. After all, anyone can tear down a barn but it takes a carpenter to build one. Sometimes, it seems like you attend to tearing down other people instead of building up ways to solve the problems in our field.

    Like

    1. Unless I’m forgetting someone, I don’t think I’ve ever seriously criticized a sociologist who wasn’t a tenured professor, and only for their published work, not their private behavior. And I have always provided a forum for response. Tenure means people need less deference, not more. This is what intellectual debate looks like.

      Liked by 1 person

    2. As you accuse me of unprofessional and bullying behavior, I invite you to explain why you have chosen to remain anonymous. What is the ethical standard at work there? I made my complaints public under my own name, and provided Shor the opportunity to post thousands of words in response. You may disagree with my opinion, but I feel OK about my ethical behavior.

      Liked by 1 person

  6. I read all three articles at issue. And I actually do see them as distinct and separate — and useful — contributions, though it is regrettable that the intro figure was replicated with no attribution and a paragraph was also replicated. The authors have addressed that. It is unfortunate that some readers of the blog might feel the professional integrity of the authors was at issue. In my view it is not.
    A key issue, which is actually not new, is how to negotiate clearly the tensions between salami slicing a data set into many — too many — micro papers, and using a data set to develop a “research agenda” which can include interrelated and hopefully cumulative papers.

    Like

    1. [FYI: I deleted several comments that devolved into personal attacks and accusations that the writers would be subject to reprisal from me if they identified themselves. I welcome new comments but I’ll delete them if they continue in that vein.]

      Like

  7. Phil — to your question about anonymity — it seems like many (most) of us prefer to remain anonymous here for various reasons (I have my own). Hopefully, you are not completely unaware of power relations in academia, and this certainly may be one reason, but there might be others. In my view, this is legitimate as long as the discussion is fairly civilized.

    Like

    1. Of course there are power relations. But if you think that means you can’t comment on this blog post without suffering some egregious personal harm, then you are either a bad judge of risk or so risk averse that you probably aren’t going to be able to do very interesting work as a sociologist. I don’t know you, but I wish people who take this position would develop a little more courage. People have fought for the privilege of expressing their views publicly — views that actually threaten people in power, unlike anything we’re talking about — and you do them a disservice by cowering behind anonymity in a simple exchange of non-threatening opinions.

      Like

  8. Anyway, I find it unfortunate that the discussion is about anonymity instead of the actual contentions. Having actually read the three articles in question (I recommend that anyone who chooses to participate in this debate does that first; I actually found them quite interesting and informative, in particular the ASR one), I am convinced that you seriously mischaracterized what they did and made some false accusations. Others who comment here seem to agree with me on this, even if they use more nuanced language.

    Like

    1. It’s about anonymity because the other points have already been covered. I have my opinion, he gave his response, which I found unconvincing. Nothing new since then on the substance.

      Like

  9. I try to stay out of this debate, beyond my response letter.

    Just wanted to add one clarification in response to Phil’s claim above that we “tested the same hypothesis with the same data”, but the method was different.

    In fact, to explicitly state what is probably obvious to most readers, the longitudinal analysis, beyond using a different method, also relies on different data (longitudinal, not cross sectional). Indeed, as I explain in my response, we engaged in additional data collection, on which the ASR analyses are based. So we referred the readers to the conflicting findings of previous research (including our own), and then tested this hypothesis (and many others) with different (richer) data and a different analysis (longitudinal and multivariate).

    Like

  10. I am concerned about Shor’s response that they omitted a fuller citation only to protect their anonymity for the peer review process. It is my understanding that in most journals you submit a full version to the editor and one that is blinded for peer review. Even if these journals do not ask for that, the right thing to do would have been to submit a version with full disclosure to the editor, so that the editor can know what parts of the work have been done before and if there is a duplication of figures or results. Not telling the editor about this, even though the editor knows who the authors are, seems, IMHO, unprofessional at best and unethical at worst.

    Forgive me for the anonymity, but as an un-tenured professor, I obviously do not want to disclose my name.

    Like

Comments welcome (may be moderated)