Tag Archives: academia

Perspective on sociology’s academic hierarchy and debate


Keep that gate. (Photo by Rob Nunn, https://flic.kr/p/4DbzCG)

It’s hard to describe the day I got my first acceptance to American Sociological Review. There was no social media back then so I have no record of my reaction, but I remember it as the day — actually, the moment, as the conditional acceptance slipped out of the fax machine — that I learned I was getting tenure, that I would have my dream job for the rest of my life, with a personal income in the top 10 percent of the country for a 9-month annual commitment. At that moment I was not inclined to dwell on the flaws in our publishing system, its arbitrary qualities, or the extreme status hierarchy it helps to construct.

In a recent year ASR considered more than 700 submitted articles and rejected 90% or more of them (depending on how you count). Although many people dispute the rationality of this distinction, publishing in our association’s flagship journal remains the most universally agreed-upon indicator of scholarship quality. And it is rare. I randomly sampled 50 full-time sociology faculty listed in the 2016 ASA Guide to Graduate Departments of Sociology (working in the U.S. and Canada), and found that 9, or 18%, had ever published a research article in ASR.

Not only is it rare, but publication in ASR is highly concentrated in high-status departments (and individuals). While many departments have no faculty who have published in ASR (I didn’t count these, but there are a lot), some departments are brimming with them. In my own, second-tier department, I count 16 out of 27 faculty with publications in ASR (59%), while at a top-tier, article-oriented department such as the University of North Carolina at Chapel Hill (where I used to work), 19 of the 25 regular faculty, or 76%, have published in ASR (many of them multiple times).

Without diminishing my own accomplishment (or that of my co-authors), or the privilege that got me here, I should be clear that I don’t think publication in high-status journals is a good way to identify and reward scholarly accomplishment and productivity. The reviews and publication decisions are too uneven (although obviously not completely uncorrelated with quality), and the limit on articles published is completely arbitrary in an era in which the print journal and its cost-determined page-limit is simply ridiculous.

We have a system that is hierarchical, exclusive, and often arbitrary — and the rewards it doles out are both large and highly concentrated.

I say all this to put in perspective the grief I have gotten for publicly criticizing an article published in ASR. In that post, I specifically did not invoke ethical violations or speculate on the motivations or non-public behavior of the authors, about whom I know nothing. I commented on the flaws in the product, not the process. And yet a number of academic critics responded vociferously to what they perceived as the threats this commentary posed to the academic careers and integrity of the authors whose work I discussed. Anonymous critics called my post “obnoxious, childish, time wasting, self promoting,” and urged sociologists to “shun” me. I have been accused of embarking on a “vigilante mission.” In private, a Jewish correspondent referred me to the injunction in Leviticus against malicious gossip in an implicit critique of my Jewish ethics.*

In the 2,500-word response I published on my site — immediately and unedited — I was accused of lacking “basic decency” for not giving the authors a chance to prepare a response before I posted the criticism on my blog. The “commonly accepted way” when “one scholar wishes to criticize the published work of another,” I was told, is to go through a process of submitting a “comment” to the journal that published the original work, which “solicits a response from the authors who are being criticized,” and it’s all published together, generally years later. (Never mind that journals have no obligation or particular inclination to publish such debates, as I have reported on previously, when ASR declined for reasons of “space” to publish a comment pointing out errors that were not disputed by the editors.)

This desire to maintain gatekeepers to police and moderate our discussion of public work is not only quaint, it is corrosive. Despite pointing out uncomfortable facts (which my rabbinical correspondent referred to as the “sin of true speech for wrongful purpose”), my criticism was polite, reasoned, and documented — and within the bounds of what would be considered highly civil discourse in any arena other than academia, apparently. Why are the people whose intellectual work is most protected most afraid of intellectual criticism?

In Christian Smith’s book, The Sacred Project of American Sociology (reviewed here), which was terrible, he complains explicitly about the decline of academic civilization’s gatekeepers:

The Internet has created a whole new means by which the traditional double-blind peer-review system may be and already is in some ways, I believe, being undermined. I am referring here to the spate of new sociology blogs that have sprung up in recent years in which handfuls of sociologists publicly comment upon and often criticize published works in the discipline. The commentary published on these blogs operates outside of the gatekeeping systems of traditional peer review. All it takes to make that happen is for one or more scholars who want to amplify their opinions into the blogosphere to set up their own blogs and start writing.

Note he is complaining about people criticizing published work, yet believes such criticism undermines the blind peer-review system. This fear is not rational. The terror over public discussion and debate — perhaps especially among the high-status sociologists who happen to also be the current gatekeepers — probably goes a long way toward explaining our discipline’s pitiful response to the crisis of academic publishing. According to my (paywalled) edition of the Oxford English Dictionary, the definition of “publish” is “to make public.” And yet to hear these protests you would think the whisper of a public comment poses an existential threat to the very people who have built their entire profession around publishing (though, to be consistent, it’s mostly hidden from the public behind paywalls).

This same fear leads many academics to insist on anonymity even in normal civil debates over research and our profession. Of course there are risks, as there tend to be when people make important decisions about things that matter. But at some point, the fear of repression for expressing our views (which is legitimate in some rare circumstances) starts looking more like avoidance of the inconvenience or discomfort of having to stand behind our words. If academics are really going to lose their jobs for getting caught saying, “Hey, I think you were too harsh on that paper,” then we are definitely having the wrong argument.

“After all,” wrote Eran Shor, “this is not merely a matter of academic disagreements; people’s careers and reputations are at stake.” Of course, everyone wants to protect their reputation — and everyone’s reputation is always at stake. But let’s keep this in perspective. For those of us at or near the top of this prestige hierarchy — tenured faculty at research universities — damage to our reputations generally poses a threat only within a very narrow bound of extreme privilege. If my reputation were seriously damaged, I would certainly lose some of the perks of my job. But the penalty would also include a decline in students to advise, committees to serve on, and journals to edit — and no change in that lifetime job security with a top-10% salary for a 9-month commitment. Of course, for those of us whose research really is that important, anything that harms our ability to work in exactly the way that we want to has costs that simply cannot be measured. I wouldn’t know about that.

But if we want the high privilege of an academic career — and if we want a discipline that can survive under scrutiny from an increasingly impatient public and deepening market penetration — we’re going to have to be willing to defend it.

* I think if random Muslims have to denounce ISIS then Jews who cite Leviticus on morals should have to explain whether — despite the obvious ethical merit to some of those commands — they also support the killing of animals just because they have been raped by humans.


Eran Shor responds

On May 8 I wrote about three articles by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena. (My post is here; the articles, in chronological order, are available in full here, here, and here.)

Eran Shor, associate professor of sociology at McGill University and first author of the papers in question, has sent me the following response, which I agreed to post unedited. I have not heard from the other authors, and Shor does not claim to speak for them here. I’m not responding to this now, except to say that I stand by the original post. Feel free to make (moderated) comments below.

Eran Shor’s response

We would like to thank Philip N. Cohen for posting this response to his blog, unedited.

Philip N. Cohen wrote a post in which he targets three of our recently published articles and claims that these are overlapping and misleading to readers. On the one hand, Cohen clarifies that he is “not judging Shor et al. for any particular violation of specific rules or norms” and “not judging the quality of the work overall.” On the other hand, in his conclusions he speaks about “overlapping papers”, “selling duplicative information as new”, and “misleading readers”. We feel this terminology more than just hints at intentional wrongdoing. The first response to the blog outright accuses us of self-plagiarism, deceitfulness, and having questionable ethics, which we believe is a direct result of Cohen’s suggestive language.

Below, we explain why we feel that these accusations are unfair and mostly unsubstantiated. We also reflect on the debate over open science and on the practice of writing blogs that make these kinds of accusations without first giving authors the chance to respond to them.

As for the three articles in question, we invite readers to read these for themselves and judge whether each makes a unique and original contribution. To quickly summarize these contributions as we see them:

  • The first article, published in Journalism Studies (2014), focuses on the historical development of women’s media representation, presenting new data that goes back to the 19th century and discussing the historical shifts and differences between various sections of the newspaper.
  • The second article, published in Social Science Quarterly (2014), begins to tackle possible explanations for the persistent gap in representation and specifically focuses on the question of political partisanship in the media and its relationship with gendered coverage patterns. We use two separate measures of newspapers’ political slant and conduct bivariate analyses that examine the association between partisanship and representation.
  • The third article was published in the American Sociological Review (2015). In it, we conducted a wide-scope examination of a large variety of possible explanations for the persistent gender gap in newspapers. We presented a large gamut of new and original data and analyses (both bivariate and multivariate), which examined explanations such as “real-world” inequalities, newsroom composition, and various other factors related to both the newspapers themselves and the cities and states in which they are located.

Of note, these three articles are the result of more than seven years of intensive data collection from a wide variety of sources and multiple analyses, leading to novel contributions to the literature. We felt (and still do) that these various contributions could not have been clearly fleshed out in one article, not even a longer article, such as the ones published by the American Sociological Review.

Now for the blog: Did SSQ really “scoop” ASR?

First, where we agree with Cohen’s critique: the need to indicate clearly when one is presenting a piece of data or a figure that already appeared in another paper. Here, we must concede that we have failed, although certainly not intentionally. The reason we dropped the ball on this one is the well-established need to conceal one’s identity as long as a paper is under review, in order to maintain the standard of double-blind review. Clearly, we should have been more careful in checking the final copies of the SSQ and ASR articles and added a clarification stating that the figure already appeared in an earlier paper. Alternatively, we could have dropped the figure from the paper and simply referred readers to the JS paper, as this figure was not an essential component of either of the latter two papers. As for the issue of the missing year in the ASR paper, this was simply a matter of re-examining the data and noting that the data for 1982 was not strong enough, as it relied on too few data points and a smaller sample of newspapers, and therefore was not equivalent to data from the following years. We agree, however, that we should have clarified this in the paper.

That said, as Cohen also notes in his blog, in each of the two latter papers (SSQ and ASR), the figure in question is really just a minor descriptive element, the starting point for a much larger—and different in each article—set of data and analyses. In both cases, we present the figure at the beginning of the paper in order to motivate the research questions and subsequent analyses, and we do not claim that this is one of our novel findings or contributions. The reason we reproduced this figure is that reviewers of previous versions of the paper asked us to demonstrate the persistent gap between women and men in the news. The figure serves as a parsimonious and relatively elegant visual way to do that, but we also presented data in the ASR paper from a much larger set of newspapers that establishes this point. Blind-review norms prevented us from referring readers to our own work in a clear way (that is, going beyond simply including a citation to the work). Still, as noted above, we take full responsibility for not making sure to add a clearer reference to the final version. But we would like to emphasize that there was no intentional deceit here, but rather a simple act of omission. This should be clear from the fact that we had nothing to gain from not referring to our previous work, and from the fact that these previous findings motivated the new analyses.

Duplicated analyses?

As for the other major charge against us in the blog, it is summarized in the following paragraph:

It looks to me like in the SSQ and ASR papers they used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they are using the same data, how they got from a “weak to moderate relationship” to “no significant relationship” seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which “some say this, some say that”? What kind of way is this to figure out what’s going on?

Here we feel that Cohen is presenting a false picture of what we did in the two papers. First, the SSQ paper actually used two measures of political slant. For both measures, we presented bivariate analyses of the relationship between slant and coverage (which indeed show a weak to moderate relationship). It is important to stress here that this somewhat rudimentary analysis was simply a matter of data availability: The bivariate cross-sectional analysis was the best analysis we could perform at the time when the paper was accepted for publication (end of 2013), given the data that we had available. However, during the following two years, leading to the publication of the article in ASR at the end of 2015, we engaged in an extensive and time-consuming process of collecting and coding additional data. This effort allowed us to code longitudinal data on important characteristics of newspapers (e.g., identity of editors and publishers and various city and state-related characteristics) for a subset of the newspapers in our sample and for six consecutive years.

And so, as is often the case when moving from a cross-sectional bivariate analysis (SSQ) to a longitudinal multivariate one (ASR), the previous weak relationship that we found for slant basically disappeared, or rather, it became non-significant. In the ASR paper, we did refer readers to the results of our previous study, although without providing details about the analysis itself, because we did not wish to single out our own work in a paragraph that also briefly cites the results of other similar studies (Potter 1985; Adkins Covert and Wasburn 2007). Perhaps we should have clarified better in the final draft that we previously examined this relationship in a cross-sectional bivariate analysis. But this is a far cry from the allegation that we were reproducing the same analysis, or that we intentionally concealed evidence, both of which are simply false.

To be clear: While the SSQ paper presented a cross-sectional bivariate analysis, in the ASR paper we used a somewhat different sample of papers to perform a longitudinal multivariate analysis, in which newspaper slant was but one of many variables. These are important differences (leading to different results), which we believe any careful reader of the two papers can easily detect. Readers of the ASR article will also notice that the testing of the political slant question is not a major point in the paper, nor is it presented as such. In fact, this variable was originally included as merely a control variable, but reviewers asked that we flesh out the theoretical logic behind including it, which we did. We therefore feel that Cohen’s above comment (in parentheses)—“In addition to whatever else is new in the third paper”—is unfair to the point of being disingenuous. It ignores the true intent of the paper and its (many) unique contributions, for the purpose of more effectively scoring his main point.

As for the paragraph that Cohen cites in the blog, in which we use very similar language to theoretically justify the inclusion of the newspaper slant variable in the ASR analysis, we would like to clarify that this was simply the most straightforward way of conveying this theoretical outline. We make no pretense or implication whatsoever that this passage adds anything new or very important in its second iteration. And again, we do refer the readers to the previous work (including our own), which found conflicting evidence regarding this question in cross sectional bivariate analyses. Knowledge is advanced by building on previous work (your own and others) and adding to it, and this is exactly what we do here.

Misleading readers and selling duplicated information as new?

Given our clarifications above, we feel that the main charges against us are unjust. The three papers in question are by no means overlapping duplications (although the one particular descriptive figure is). In fact, none of the analyses in SSQ and ASR are overlapping, and each paper made a unique contribution at the time it was published. Furthermore, the charges that we are “selling duplicate information as new” and “misleading readers” clearly imply that we have been duplicitous and dishonest in this research effort. It is not surprising that such inflammatory language ended up inciting respondents to the blog to accuse us flatly of self-plagiarism, deceitfulness, and questionable ethics.

In response to such accusations, we once again wish to state very clearly that at no point did we intend to deceive readers or intentionally omit information about previous publications. While we admit to erring in not clearly mentioning in the caption of the figure that it was already reported in a previous study, this was an honest mistake, and one which did not and could not be to our benefit in any way whatsoever. An error of omission it was, but not a violation of any ethical norms.

Is open access the solution?

We would also like to comment briefly on the blog’s more general claim that the system is broken and that the solution lies in open access. We would actually like to express our firm support for Cohen’s general efforts to promote open science. We also agree with the need to both carefully monitor and rethink our publication system, as well as with the call for open access to journals and a more transparent reviewing process. All of these would bring important benefits, and the conversation over them should continue.

However, we question the assumption that in this particular case an open access system would have solved the problem of not mentioning that our figure appeared in previous articles. As we note above, this omission was actually triggered by the blind review system and our attempts to avoid revealing our identity during the reviewing process (and later on to our failure to remember to add more direct references to our previous work in the final version of the article). But surely, most reviewers who work at academic institutions have access through their local libraries to a broad range of journals, including ones that are behind paywalls (and certainly to most mainstream journals). Our ASR paper was reviewed by nine different anonymous reviewers, as well as by the editorial board. It seems reasonable to assume that virtually all of them would have been able to access the previous papers published in mainstream journals. So the fact that our previous articles were published in journals with paywalls seems neither here nor there for the issues Cohen raises about our work.

A final word: On the practice of making personal accusations in a blog without first soliciting a response from the authors

The commonly accepted way to proceed in our field is that when one scholar wishes to criticize the published work of another, they write a comment to the journal that published the article. The journal then solicits a response from the authors who are being criticized, and a third party then decides whether this is worthy of publication. Readers then get the chance to read both the critique and the response at the same time and decide which point of view they find more convincing. It seems to us that this is the decent way of proceeding in cases where there are different points of view or disagreement over scholarly findings and interpretations. This is especially true when the critique involves charges (or hints) of unethical behavior and academic dishonesty; in such cases, this norm of basic decency seems to us even more important. In our case, Professor Cohen did not bother to approach us, and did not ask us to respond to the accusations against us. In fact, we only learned about the post by happenstance, when a friend directed our attention to the blog. When we responded and asked Cohen to make some clarifications to the original posting, we were turned down, although Cohen kindly agreed to publish this comment to his blog, unedited, for which we are thankful.

Of course, online blogs are not expected to honor the norms of civilized scholarly debate to the letter. They are a different kind of forum. Clearly, they have their advantages in terms of both speed and accessibility, and they form an important part of the current academic discourse. But it seems to us that, especially in such cases where allegations of ethically questionable conduct are being made, the authors of blogs should adopt a more careful approach. After all, this is not merely a matter of academic disagreements; people’s careers and reputations are at stake. We would like to suggest that in such cases the authors of blogs should err on the side of caution and allow authors to defend themselves against accusations in advance and not after the fact, when much of the damage has already been done.



How broken is our system (hit me with that figure again edition)

Why do sociologists publish in academic journals? Sometimes it seems improbable that the main goal is sharing information and advancing scientific knowledge. Today’s example of our broken system, brought to my attention by Neal Caren, concerns three papers by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena (Shor et al).

May 13, 2016 update: Eran Shor has sent me a response, which I posted here.

In a paywalled 2013 paper in Journalism Studies, the team used an analysis of names appearing in newspapers to report the gender composition of people mentioned. They analyzed the New York Times back to 1880, and then a larger sample of 13 newspapers from 1982 through 2005. Here’s one of their figures:

[Figure: gender composition of people mentioned in newspapers over time]

The 2013 paper was a descriptive analysis, establishing that men are mentioned more than women over time.

In a paywalled 2014 article in Social Science Quarterly (SSQ), the team followed up. Except for a string-cite mention in the methods section, the second paper makes no reference to the first, giving no indication that the two are part of a developing project. They use this figure to motivate the analysis in the second paper, with no acknowledgment that it also appeared in the first:

[The same figure, reproduced in the SSQ paper]

Shor et al. 2014 asked,

How can we account for the consistency of these disparities? One possible factor that may explain at least some of these consistent gaps may be the political agendas and choices of specific newspapers.

Their hypothesis was:

H1: Newspapers that are typically classified as more liberal will exhibit a higher rate of female-subjects’ coverage than newspapers typically classified as conservative.

After analyzing the data, they concluded:

The proposition that liberal newspapers will be more likely to cover female subjects was not supported by our findings. In fact, we found a weak to moderate relationship between the two variables, but this relationship is in the opposite direction: Newspapers recognized (or ranked) as more “conservative” were more likely to cover female subjects than their more “liberal” counterparts, especially in articles reporting on sports.

They offered several caveats about this finding, including that the measure of political slant used is “somewhat crude.”

Clearly, there was much more work to be done. The next piece of the project was a 2015 article in American Sociological Review (which, as the featured article of the issue, was not paywalled by Sage). Again, without mentioning that the figure had been previously published, and with one passing reference to each of the previous papers, they motivated the analysis with the figure:

[The same figure, reproduced in the ASR paper]

Besides not getting the figure in color, ASR readers for some reason also don’t get 1982 in the data. (The paper makes no mention of the difference in period covered, which makes sense because it never mentions any connection to the analysis in the previous paper). The ASR paper asks of this figure, “How can we account for the persistence of this disparity?”

By now I bet you’re thinking, “One way to account for this disparity is to consider the effects of political slant.” Good idea. In fact, in the ASR paper the rationale for this question has hardly changed at all since the SSQ paper. Here are the two passages justifying the question.

From SSQ:

Former anecdotal evidence on the relationship between newspapers’ political slant and their rate of female-subjects coverage has been inconclusive. … [describing studies by Potter (1985) and Adkins Covert and Wasburn (2007)]…

Notwithstanding these anecdotal findings, there are a number of reasons to believe that more conservative outlets would be less likely to cover female subjects and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s rights issues in a relatively negative light (Baker Beck, 1998; Brescoll and LaFrance, 2004). Therefore, they may be less likely to devote coverage to these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors […]. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally (that is, conservatively) considered to be more important or interesting, such as politics, business, and sports, and less likely to report on issues such as social welfare, education, or fashion, where according to research women have a stronger presence (Holland, 1998; Ross, 2007, 2009; Ross and Carter, 2011).

From ASR:

Some work suggests that conservative newspapers may cover women less (Potter 1985), but other studies report the opposite tendency (Adkins Covert and Wasburn 2007; Shor et al. 2014a).

Notwithstanding these inconclusive findings, there are several reasons to believe that more conservative outlets will be less likely to cover women and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s issues in a relatively negative light (Baker Beck 1998; Brescoll and LaFrance 2004), making them potentially less likely to cover these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally considered more important or interesting, such as politics, business, and sports, rather than reporting on issues such as social welfare, education, or fashion, where women have a stronger presence.

Except for a passing mention among the “other studies,” there is no connection to the previous analysis. The ASR hypothesis is:

Conservative newspapers will dedicate a smaller portion of their coverage to females.

On this question in the ASR paper, they conclude:

our analysis shows no significant relationship between newspaper coverage patterns and … a newspaper’s political tendencies.

It looks to me like in the SSQ and ASR papers they used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they are using the same data, how they got from a “weak to moderate relationship” to “no significant relationship” seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which “some say this, some say that”? What kind of way is this to figure out what’s going on?

Still love your system?

It’s fine to report the same findings in different venues and formats. It’s fine, that is, as long as it’s clear they’re not original in the subsequent tellings. (I personally have been known to regale my students, and family members, with the same stories over and over, but I try to remember to say, “Stop me if I already told you this one” first.)

I’m not judging Shor et al. for any particular violation of specific rules or norms. And I’m not judging the quality of the work overall. But I will just make the obvious observation that this way of presenting ongoing research is wasteful of resources, misleading to readers, and hinders the development of research.

  • Wasteful because reviewers, editors, and publishers are essentially duplicating their efforts to try to figure out what is actually to be learned from these overlapping papers — and then to repackage and sell the duplicative information as new.
  • Misleading to readers because we now have “many studies” that show the same thing (or different things), without the clear acknowledgment that they use the same data.
  • And hindering research because of the wasteful delays and duplicative expenses involved in publishing research that should be clearly presented in cumulative, transparent fashion, in a timely way — which is what we need to move science forward.

Open science

When making (or hearing) arguments against open science as impractical or unreasonable, just weigh the wastefulness, misleadingness, and obstacles to science so prevalent in the current system against whatever advantages you think it holds. We can’t have a reasonable conversation about our publishing system based on the presumption that it’s working well now.

In an open science system researchers publish their work openly (and for free) with open links between different parts of the project. For example, researchers might publish one good justification for a hypothesis, with several separate analyses testing it, making clear what's different in each test. Reviewers and readers could see the whole series. Other researchers would have access to the materials necessary for replication and extension of the work. People would be judged for hiring and promotion according to the actual quality and quantity of their work and the contribution it makes to advancing knowledge, rather than through arbitrary counts of "publications" in private, paywalled journals. (The non-profit Center for Open Science is building a system like this now, and offers a free Open Science Framework, "A scholarly commons to connect the entire research cycle.")

There are challenges to building this new system, of course, but any assessment of those challenges needs to be clear-eyed about the ridiculousness of the system we’re working under now.

Previous related posts have covered very similar publications, the opposition to open access, journal self-citation practices, and one publication’s saga.

11 Comments

Filed under Research reports

For (not against) a better publishing model

I was unhappy to see this piece on the American Sociological Association (ASA) blog by Karen Edwards, the director of publications and membership.

The post is about Sci-Hub, the international knowledge-stealing ring that allows anyone to download virtually any paywalled academic paper for free. (I wrote about it, with description of how it’s used, here.) Without naming me or linking to the post, Edwards takes issue with pieces like mine. She writes:

ASA, other scholarly societies, and our publishing partners have been dismayed by some of the published comments about Sci-Hub that present its theft as a kind of “Robin Hood” fairy tale by characterizing the “victims” as greedy publishers feasting on the profits of expensive individual article downloads by needy researchers.

My first objection is to the phrase, "ASA … have been dismayed." There have been many debates about who speaks for ASA, especially when the association takes positions on legal issues (their amicus briefs are here). And I'm sure the ASA executives send out letters all the time saying ASA thinks this or that. But when it comes to policy issues like this post (and when I don't agree), I think it's wrong to take such positions without some actual process involving the membership. The more extreme case, on this same issue, was when the executive officer, Sally Hillsman, sent this letter to the White House Office of Science and Technology Policy objecting to the federal government's move toward open access — which most of us only found out about because Fabio Rojas posted it on OrgTheory.

My second objection is to the position taken. In Edwards’ view, the existence of Sci-Hub, “threatens the well-being of ASA and our sister associations as well as the peer assessment of scholarship in sociology and other academic disciplines.”

Because, in her opinion, without paywalls — and Sci-Hub presumably threatens to end paywalls entirely — the system of peer-reviewed scholarly output would die. As I pointed out in my original piece, if your entire enterprise can be brought down by the insertion of 11 characters into a URL, your system may in fact not be sustainable. Rather than attack Sci-Hub and its users, "ASA" might ask why its vendor is so unable to prevent the complete demolition of its business model by a few key strokes. But they don't. Which leads me to the next point.

The Edwards post goes way beyond the untrue claim that there is no other way to support a peer review system, and argues that ASA needs all that paywall money to pay for all the other stuff it does. That is, not only do we need to sell papers to pay for our journal operations (and Sage profits), we also need paywalls because:

ASA is a nonprofit, so whatever revenue we receive from our journals, beyond what it costs us to do the editorial and publications work, goes directly into providing professional and educational services to our members and other scholars in our discipline (whether they are members or not). … The revenue allows ASA to provide sociologists in the field competitive research grants, pre-doctoral scholarships, specialized career development, and new digital teaching resources among many other services. It is what allows us to work effectively with other social science associations to sustain and, hopefully, grow the flow of federal research dollars to the social sciences through NSF, NIH, and many others and to defend against elimination and cuts to federal support (e.g., statistical systems and ongoing surveys) so scholars can conduct research and then publish outstanding scholarship.

In other words, as David Mamet’s character Mickey Bergman once put it, “Everybody needs money. That’s why they call it money.”

This means that finding the best model for getting sociological research to the most people with the least barriers is not as important as all the other stuff ASA does — even if the research is publicly funded. I don’t agree.

Better models

There are better ways. Contrary to popular misconceptions, we do not need to go to a system where individual researchers pay to publish their work, widening status inequalities among researchers. The basic design of the system to come is we cut out the for-profit publishers, and ask the universities and federal agencies that currently pay for research twice — once for the researchers, and once again for their published output — to agree to pay less in exchange for all of it to be open access. Instead, they pay into a central organization that administers publication funds to scholarly associations, which produce open-access research output. For a detailed proposal, read this white paper from K|N Consultants, “A Scalable and Sustainable Approach to Open Access Publishing and Archiving for Humanities and Social Sciences.” (Others are trying as well; check out the efforts of the American Anthropological Association.)

This should be easy — more access, accountability, and efficiency, for less — but it’s a difficult political problem, made all the more difficult by the dedicated efforts of those whose interests are threatened by the possibility of slicing out the profit (and other surplus) portions of the current paywall system. The math is there, but the will and the organizational efforts are lagging badly, especially in the leadership of ASA.

5 Comments

Filed under In the news

How to steal 50 million paywalled papers

5747629074_d484394fa5_b

From Flickr CC / https://flic.kr/p/9KU6T1

I’m not a criminal mastermind, so I could be wrong, but I can’t think of a way to steal more — defined by list price — per unit of effort than using sci-hub to access paywalled papers I don’t have legitimate subscription access to read.

I don’t know the scientific term for this, but there has to be some way to describe the brokenness of a system based on the ratio of effort expended to damage done. For example, there are systems where even large effort is unlikely to cause serious harm (the US nuclear weapons system), versus those where minor efforts succeed but cause acceptable levels of harm (retail shoplifting). And then there is intellectual property, where small investments can inflict billions of dollars worth of damage.

Syed Ali and I have a short piece with some links to the news on sci-hub at Contexts. Alexandra Elbakyan says it took her about three days to develop the system that now gives anyone in the world access to almost 50 million paywalled articles. Of course, a lot of people help a little, by providing her with access information from their subscribing universities, but it still seems like a very low ratio of criminal energy to face-value payoff.

Example

Now that sci-hub is in place, how hard is it for an untrained individual to steal a $40 article while risking almost nothing? As hard as it is to insert 11 characters into a paywall URL and wait a few seconds (plus your share of the one hour I expended on this post).

Here’s an example. In the journal Society, published by Springer, an article in the current issue is currently available for $39.95 to non-subscribers. But Society is a “hybrid open access” journal, which means authors or their institutions can pay to have their paper unlocked for the public (I don’t know how much it costs to unlock the article, but let’s just assume it’s a rollicking awesome deal for Springer).

For this example I use one of the unlocked articles, so you can try it without stealing anything if that feels more ethical to you; it works exactly the same way for the locked ones.

The article is “Saving the World, or Saving One Life at a Time? Lessons my Career with Médecins Sans Frontières/Doctors Without Borders/MSF has Taught Me,” by Sophie Delaunay. This is the launch page for the article:

http://link.springer.com/article/10.1007/s12115-015-9965-4

From there, you can download the PDF (it would say “Buy Now” if the article weren’t unlocked) at this link:

http://link.springer.com/content/pdf/10.1007/s12115-015-9965-4.pdf

Or you can steal it for free by inserting

.sci-hub.io

into the URL, after the corporate domain thing, like this:

http://link.springer.com.sci-hub.io/content/pdf/10.1007/s12115-015-9965-4.pdf
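That 11-character edit is easy to express in code. Here is a minimal Python sketch of the URL rewrite described above (the helper name is mine, and the mirror domain is whatever Sci-Hub happens to be using — "sci-hub.io" was current when this was written and may no longer resolve):

```python
from urllib.parse import urlsplit, urlunsplit

def scihub_url(url, mirror="sci-hub.io"):
    """Append a Sci-Hub mirror suffix to the host part of an article URL.

    Illustrative only: the mirror domain changes over time.
    """
    parts = urlsplit(url)
    # ".sci-hub.io" is the 11 characters inserted after the corporate domain
    return urlunsplit(parts._replace(netloc=parts.netloc + "." + mirror))

# The Springer example from this post:
print(scihub_url("http://link.springer.com/content/pdf/10.1007/s12115-015-9965-4.pdf"))
# http://link.springer.com.sci-hub.io/content/pdf/10.1007/s12115-015-9965-4.pdf
```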

Don’t ask me how it really works, but basically it checks if the article has been requested before — in which case it’s cached somewhere — and if it hasn’t been requested before it uses fake login information to go get it, and then it stores the copy somewhere for faster retrieval for the next person. That’s why your stolen PDF may have a little tag at the bottom that says something like “Downloaded from journal.publisher.com at Unfamous University on Recent Date.” If the article comes up instantly, you didn’t really steal it, you’re just looking at a stolen copy; if you have to watch the little thing spin first then it’s being stolen for you. With this incredibly smart design the system grows by itself, according to demand from the criminal reading public.

What’s the punishment?

I have no idea what risk Alexandra Elbakyan or her compatriots face for their work. I don’t imagine the penalty for any given user is greater than the penalty for shoplifting a $39.95 bottle of Awesome Wasteproduct. And for me sharing this, I would expect the worst thing that would happen would be a stern letter on legal letterhead. But maybe I’m naive.

Anyway, the point is, it says something about the soundness of the academic publishing edifice that doing this much damage to it is this easy.

What are the ethics?

I am aware that some reasonable people think sci-hub is very wrong, while others think the current system is very wrong. I know that many people’s current paychecks depend on this system continuing to malfunction as it does, while others never earn the higher incomes they otherwise could because they can’t get paywalled articles. I understand corporate journals add some value through their investments. And I know that the current system denies many people access to a lot of information, with social costs that are unquantifiable. And there is some inherent value to not breaking the law just in general, while there is also value to breaking bad laws symbolically. How you balance all those factors is up to you.

Some people think it’s even wrong to discuss this. What does that tell you?

5 Comments

Filed under In the news

Basic self promotion

your work

If you don’t care enough to promote your research, how can you expect others to?

These are some basic thoughts for academics promoting their research. You don’t have to be a full-time self-promoter to improve your reach and impact, but the options are daunting and I often hear people say they don’t have time to do things like run a Twitter account or write blogs. Even a relatively small effort, if well directed, can help a lot. Don’t let the perfect be the enemy of the good. It’s fine to do some things pretty well even if you can’t do everything to your ideal standard.

It’s all about making your research better — better quality, better impact. You want more people to read and appreciate your work, not just because you want fame and fortune, but because that’s what the work is for. I welcome your comments and suggestions below. 

Present yourself

Make a decent personal website and keep it up to date with information about your research, including links to accessible copies of your publications (see below). It doesn't have to be fancy (I have a vested interest in keeping standards low in that department). I'm often surprised at how many people are sitting behind years-old websites.

Very often people who come across your research somewhere else will want to know more about you before they share, report on, or even cite it. Your website gives your work more credibility. Has this person published other work in this area? Taught related courses? Gotten grants? These are things people look for. It’s not vain or obnoxious to present this information, it’s your job. I recommend a good quality photo (others disagree).

Make your work available

Let people read the actual research. Publishing in open-access journals is ideal, because it’s the right thing to do and more people can read it. (My recent article in Sociological Science was downloaded several hundred times within 10 days, which is much more than I would expect from a paywalled journal.)

Whether or not you do that, share your working paper or preprint versions. This is best done in your university repository (ask your library) or a public disciplinary archive. (For prominent examples, check out the University of California's eScholarship or Harvard's DASH; I use the working paper site of the Maryland Population Research Center.) If you put papers only on your own university website, they will show up in web searches (including Google Scholar), but they won't be properly tagged and indexed for things like citation or grant analysis, or archived — so it's better to deposit them in a repository and just put links on your website. But don't just link to the paywalled version; that's the click of death for someone just browsing around.

Don’t be intimidated by copyright. You can almost always put up a preprint without violating any agreement (ideally you wouldn’t publish anywhere that makes you take it down afterwards), and even if you have to take it down eventually you get months or years to share it first. No one will sue you or fire you — the worst outcome is being asked to take it down, which is very rare. Don’t prioritize protecting the journal’s proprietary right to promotion over serving the public (and your career) by getting the research out there, as soon as it’s ready. To see the policies of different journals regarding self-archiving, check out the simple database at SHERPA/RoMEO.

I oppose private sites like Academia.edu and ResearchGate. These are just private companies doing what your university and its library are already doing for the public. Your paper will not be discovered more if it is on one of these sites. It will show up in a Google search if you put it on your website or, better, in a public repository.

I’m not an open access purist, at least for sociology. If you got public money to develop a cure for cancer, that’s different. For us, not everything has to be open access (books, for example), but the more it is the better, especially original research. Anyway, it would be great if sociology got more into open science (for example, with the Open Science Framework). People for whom code is big already use sites like GitHub for sharing, which is beyond me; in your neck of the woods that can be great for getting your work out, too.

Share your work

In the old days we used to order paper reprints of papers we published and literally mail them to the famous and important people we hoped would read and cite them. Nowadays you can email them a PDF. Sending a short note that says, “I thought you might be interested in this paper I wrote” is normal, reasonable, and may be considered flattering. (As long as you don’t follow up with repeated emails asking if they’ve read it yet.)

Social media

I recommend at least basic social media, Twitter and Facebook. This does not require a massive time commitment — you can always ignore them. Setting up a public profile on Twitter or a page on Facebook gives people who do use them all the time a way to link to you and share your profile. If someone wants to show their friends one of my papers on Twitter, this doesn’t require any effort on my part. They tweet, “Look at this awesome new paper @familyunequal wrote!” When people click on the link they go to my profile, which tells them who I am and links to my website. I do not have to spend time on Twitter for this to work. (I chose @familyunequal because familyinequality was too long and I didn’t want to use my name because I was determined not to use Twitter for personal stuff. I think something closest to your name is ideal, but don’t not do this because you can’t think of the perfect handle.)

Of course, an active social media presence does help draw people into your work. But even low-level attention will help: posting or tweeting links to new papers, conference presentations, other writing, etc. No need to get into snarky chitchat and following hundreds of people if you don’t want to.

To see how others are using Twitter, you can visit the list I maintain, which has more than 600 sociologists. This is useful for comparing profile and feed styles.

Other writing

People who write popular books go on book tours to promote them. People who write minor articles in sociology might send out some tweets, or share them with their friends on Facebook. In between are lots of other places you can write something to help people find and learn about your work. I recommend blogging, but that can be done in different ways.

As with publications themselves, there are public and private options, and I’m not a purist. (Some of my blog posts at the Atlantic, for which I used to get paid a little, were literally sponsored by Exxon, which I didn’t notice at first because I only looked at the site with my ad-blocker on.) But again public usually works better in addition to feeling better.

There are some good organizations now that help people get their work out. In my area, for example, the Council on Contemporary Families is great (I'm on their board), producing research briefs related to new publications, and helping to bring them to the attention of journalists and editors. Others work with the Scholars Strategy Network, which helps people place op-eds and other public writing. The great non-profit site The Society Pages includes lots of avenues for writing about your research. In addition, there are blogs run by sections of the American Sociological Association (like Work in Progress, from the Organizations, Occupations, and Work section) or other professional associations, and various group blogs.

And there is Contexts (of which I’m co-editor), the general interest magazine of ASA, where we would love to hear proposals for how you can bring your research out into the open (for the magazine or our blog).

5 Comments

Filed under Me @ work

Journal self-citation practices revealed

I have written a few times about problems with peer review and publishing.* My own experience subsequently led me to the problem of coercive self-citation, defined in one study as “a request from an editor to add more citations from the editor’s journal for reasons that were not based on content.” I asked readers to send me documentation of their experiences so we could air them out. This is the result.

Introduction

First let me mention a new editorial in the journal Research Policy about the practices editors use to inflate the Journal Impact Factor, a citation measure that many people use to compare journal quality or prestige. One of those practices is coercive self-citation. The author of that editorial, Ben Martin, cites approvingly a statement signed by a group of management and organizational studies editors:

I will refrain from encouraging authors to cite my journal, or those of my colleagues, unless the papers suggested are pertinent to specific issues raised within the context of the review. In other words, it should never be a requirement to cite papers from a particular journal unless the work is directly relevant and germane to the scientific conversation of the paper itself. I acknowledge that any blanket request to cite a particular journal, as well as the suggestion of citations without a clear explanation of how the additions address a specific gap in the paper, is coercive and unethical.

So that’s the gist of the issue. However, it’s not that easy to define coercive self-citation. In fact, we’re not doing a very good job of policing journal ethics in general, basically relying on weak enforcement of informal community standards. I’m not an expert on norms, but it seems to me that when you have strong material interests — big corporations using journals to print money at will, people desperate for academic promotions and job security, etc. — and little public scrutiny, it’s hard to regulate unethical behavior informally through norms.

The clearest cases involve asking for self-citations (a) before final acceptance, for citations (b) within the last two years and (c) without substantive reason. But there is a lot short of that to object to as well. Martin suggests that, to answer whether a practice is ethical, we need to ask: “Would I, as editor, feel embarrassed if my activities came to light and would I therefore object if I was publicly named?” (Or, as my friend Matt Huffman used to say when the used-textbook buyers came around offering us cash for books we hadn’t paid for: how would it look in grainy hidden-camera footage?) I think that journal practices, which are generally very opaque, should be exposed to public view so that unethical or questionable practices can be held up to community standards.

Reports and responses

I received reports from about a dozen journals, but a few could not be verified or were too vague. These 10 were included under very broad criteria — I know that not everyone will agree that these practices are unethical, and I’m unsure where to draw the line myself. In each case below I asked the current editor if they would care to respond to the complaint, doing my best to give the editor enough information without exposing the identity of the informant.

Here in no particular order are the excerpts of correspondence from editors, with responses from the editors to me, if any. Some details, including dates, may have been changed to protect informants. I am grateful to the informants who wrote, and I urge anyone who knows, or thinks they know, who the informants are not to punish them for speaking up.

Journal of Social and Personal Relationships (2014-2015 period)

Congratulations on your manuscript “X” having been accepted for publication in Journal of Social and Personal Relationships. … your manuscript is now “in press” … The purpose of this message is to inform you of the production process and to clarify your role in the process …

IMPORTANT NOTICE:

As you update your manuscript:

1. CITATIONS – Remember to look for relevant and recent JSPR articles to cite. As you are probably aware, the ‘quality’ of a journal is increasingly defined by the “impact factor” reported in the Journal Citation Reports (from the Web of Science). The impact factor represents a ratio of the number of times that JSPR articles are cited divided by the number of JSPR articles published. Therefore, the 20XX ratings will focus (in part) on the number of times that JSPR articles published in 20XX and 20XX are cited during the 20XX publication year. So citing recent JSPR articles from 20XX and 20XX will improve our ranking on this particular ‘measure’ of quality (and, consequently, influence how others view the journal). Of course only cite those articles relevant to the point. You can find tables of contents for the past two years at…
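The arithmetic the letter describes is the standard two-year impact factor, which makes the incentive easy to see: every coerced self-citation adds one to the numerator. A minimal sketch, with made-up numbers:

```python
def impact_factor(citations_to_recent_articles, recent_articles_published):
    """Two-year Journal Impact Factor for year Y: citations in Y to the
    journal's articles from Y-1 and Y-2, divided by the number of articles
    it published in Y-1 and Y-2. Illustrative only."""
    return citations_to_recent_articles / recent_articles_published

# Made-up numbers: 150 citations this year to 100 recent articles
print(impact_factor(150, 100))  # 1.5
# Ten coerced self-citations move only the numerator:
print(impact_factor(160, 100))  # 1.6
```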

Response from editor Geoff MacDonald:

Thanks for your email, and for bringing that to my attention. I agree that encouraging self-citation is inappropriate and I have just taken steps to make sure it won’t happen at JSPR again.

Sex Roles (2011-2013 period)

In addition to my own report, already posted, I received an identical report from another informant. The editor, Irene Frieze, wrote: “If possible, either in this section or later in the Introduction, note how your work builds on other studies published in our journal.”

Response from incoming editor Janice D. Yoder:

As outgoing editor of Psychology of Women Quarterly and as incoming editor of Sex Roles, I have not, and would not, as policy require that authors cite papers published in the journal to which they are submitting.

I have recommended, and likely will continue to recommend, papers to authors that I think may be relevant to their work, but without any requirement to cite those papers. I try to be clear that it is in this spirit of building on existing scholarship that I make these recommendations and to make the decision of whether or not to cite them up to the author. As an editor who has decision-making power, I know that my recommendations can be interpreted as requirements (or a wise path to follow for authors eager to publish) but I can say that I have not further pressured an author whose revision fails to cite a paper I recommended.

I also have referred to authors’ reference lists as a further indication that a paper’s content is not appropriate for the journal I edit. Although never the sole indicator and never based only on citations to the specific journal I edit, if a paper is framed without any reference to the existing literature across journals in the field then it is a sign to me that the authors should seek a different venue.

I value the concerns that have been raised here, and I certainly would be open to ideas to better guide my own practices.

European Sociological Review (2013)

In a decision letter notifying the author of a minor revise-and-resubmit, the editor wrote that the author had left out of the references some recent, unspecified, publications in ESR and elsewhere (also unspecified) and suggested the author update the references.

Response from editor Melinda Mills:

I welcome the debate about academic publishing in general, scrutiny of impact factors and specifically of editorial practices.  Given the importance of publishing in our profession, I find it surprising how little is actually known about the ‘black box’ processes within academic journals and I applaud the push for more transparency and scrutiny in general about the review and publication process.  Norms and practices in academic journals appear to be rapidly changing at the moment, with journals at the forefront of innovation taking radically different positions on editorial practices. The European Sociological Review (ESR) engages in rigorous peer review and most authors agree that it strengthens their work. But there are also new emerging models such as Sociological Science that give greater discretion to editors and focus on rapid publication. I agree with Cohen that this debate is necessary and would be beneficial to the field as a whole.

It is not a secret that the review and revision process can be a long (and winding) road, both at ESR and most sociology journals. If we go through the average timeline, it generally takes around 90 days for the first decision, followed by authors often taking up to six months to resubmit the revision. This is then often followed by a second (and sometimes third) round of reviews and revision, which in the end leaves us at ten to twelve months from original submission to acceptance. My own experience as an academic publishing on other journals is that it can regularly exceed one year. During the year under peer review and revisions, relevant articles have often been published.  Surprisingly, few authors actually update their references or take into account new literature that was published after the initial submission. Perhaps this is understandable, since authors have no incentive to implement any changes that are not directly requested by reviewers.

When there has been a particularly protracted peer review process, I sometimes remind authors to update their literature review and take into account more recent publications, not only in ESR but also elsewhere.  I believe that this benefits both authors, by giving them greater flexibility in revising their manuscripts, and readers, by providing them with more up-to-date articles.  To be clear, it is certainly not the policy of the journal to coerce authors to self-cite ESR or any other outlets.  It is vital to note that we have never rejected an article where the authors have not taken the advice or opportunity to update their references and this is not a formal policy of ESR or its Editors.  If authors feel that nothing has happened in their field of research in the last year that is their own prerogative.  As authors will note, with a good justification they can – and often do – refuse to make certain substantive revisions, which is a core fundament of academic freedom.

Perhaps a more crucial part of this debate is the use and prominence of journal impact factors themselves both within our discipline and how we compare to other disciplines. In many countries there is a move to use these metrics to distribute financing to Universities, increasing the stakes of these metrics. It is important to have some sort of metric gauge of the quality and impact of our publications and discipline. But we also know that different bibliometric tools have the tendency to produce different answers and that sociology fares relatively worse in comparison to other disciplines. Conversely, leaving evaluation of research largely weighted by peer review can produce even more skewed interpretations if the peer evaluators do not represent an international view of the discipline. Metrics and internationally recognized peer reviewers would seem the most sensible mix.

Work and Occupations (2010-2011 period)

“I would like to accept your paper for publication on the condition that you address successfully reviewer X’s comments and the following:

2. The bibliography needs to be updated somewhat … . Consider citing, however critically, the following Work and Occupations articles on the italicized themes:

[concept: four W&O papers, three from the previous two years]

[concept: two W&O papers from the previous two years]

The current editor, Dan Cornfield, thanked me and chose not to respond for publication.

Sociological Forum (2014-2015 period)

I am pleased to inform you that your article … is going to press. …

In recent years, we published an article that is relevant to this essay and I would like to cite it here. I have worked it in as follows: [excerpt]

Most authors find this a helpful step as it links their work into an ongoing discourse, and thus, raises the visibility of their article.

Response from editor Karen Cerulo:

I have been editing Sociological Forum since 2007. I have processed close to 2500 submissions and have published close to 400 articles. During that time, I have never insisted that an author cite articles from our journal. However, during the production process–when an article has been accepted and I am preparing the manuscript for the publisher–I do sometimes point out to authors Sociological Forum pieces directly relevant to their article. I send authors the full citation along with a suggestion as to where the citation be discussed or noted. I also suggest changes to key words and article abstracts. My editorial board is fully aware of this strategy. We have discussed it at many of our editorial board meetings and I have received full support for this approach. I can say, unequivocally, that I do not insist that citations be added. And since the manuscripts are already accepted, there is no coercion involved. I think it is important that you note that in any blog post related to Sociological Forum.

I cannot tell you how often an author sends me a cover letter with their submission telling me that Sociological Forum is the perfect journal for their research because of related ongoing dialogues in our pages. Yet, in many of these cases, the authors fail to reference the relevant dialogues via citations. Perhaps editors are most familiar with the debates and streams of thought currently unfolding in a journal. Thus, I believe it is my job as editor and my duty to both authors and the journal to suggest that authors consider making appropriate connections.

Unnamed journal (2014)

An article was desk-rejected (that is, rejected without being sent out for peer review) with only this explanation: "In light of the appropriateness of your manuscript for our journal, your manuscript has been denied publication in X." When the author asked for more information, a journal staff member responded with possible reasons, including that the paper did not include any references to articles in that journal. In my view the article was clearly within the journal's subject area. I have not named the journal here because this was not an official editor's decision letter, and the correspondence only suggested that this might have been the reason for the rejection.

Sociological Quarterly (2014-2015 period)

In a revise and resubmit decision letter:

Finally, as a favor to us, please take a few moments to review back issues of TSQ to make sure that you have cited any relevant previously published work from our journal. Since our ISI Impact Factor is determined by citations, we would like to make sure papers under consideration by the journal are referring to scholarship we have previously supported.

The current editors, Lisa Waldner and Betty Dobratz, have not yet responded.

Canadian Review of Sociology (2014-2015 period)

In a letter communicating acceptance conditional on minor changes, the editor asked the author to consider citing “additional Canadian Review of Sociology articles” to “help with the journal’s visibility.”

Response from current editor Rima Wilkes:

In the case you cite, the author got a fair review and received editorial comments at the final stages of correction. The request to add a few citations to the journal was not “coercive” because in no instance was it a condition of the paper either being reviewed or published.

Many authors are aware of, and make some attempt to cite, the journal to which they are submitting, and specifically target those journals to contribute to academic debates in them.

Major publications in the discipline, such as ASR, or academia more generally, such as Science, almost never publish articles that have no reference to debates in them.

Bigger journals are in the fortunate position of having authors submit articles that engage with debates in their own journal. Interestingly, the auto-citation patterns in those journals are seen as “natural” rather than “coerced”. Smaller journals are more likely to get submissions with no citations to that journal and this is the case for a large share of the articles that we receive.

Journals exist within a larger institutional structure that has certain demands. Perhaps the author who complained to you might want to reflect on what it says about their article and its potential future if they and other authors like them do not engage with their own work.

Social Science Research (2015)

At the end of a revise-and-resubmit memo, under “Comment from the Editor,” the author was asked to include “relevant citations from Social Science Research,” with none specified.

The current editor, Stephanie Moller, has not yet responded.

City & Community (2013)

In an acceptance letter, the author was asked to approve several changes made to the manuscript. One of the changes, made to make the paper more conversant with the “relevant literature,” added a sentence with several references, one or more of which were to City & Community papers not previously included.

One of the current co-editors, Sudhir Venkatesh, declined to comment because the correspondence occurred before the current editorial team's tenure began.

Discussion

The Journal Impact Factor (JIF) is an especially dysfunctional part of our status-obsessed scholarly communication system. Self-citation is only one issue, but it's a substantial one. I looked at 116 journals classified as sociology in 2014 by Web of Science (which produces the JIF), excluding some misplaced and non-English journals. WoS helpfully also offers a list excluding self-citations, but normal JIF rankings do not make this exclusion. (I put the list here.) On average, removing self-citations reduces the JIF by 14%, but there is a lot of variation. One would expect specialty journals to have high self-citation rates because the work they publish is closely related. Thus Armed Forces & Society has a 31% self-citation rate, and Work & Occupations has 25%. But others, like Gender & Society (13%) and Journal of Marriage and Family (15%), are not high. On the other hand, you would expect high-visibility journals to have high self-citation rates if they publish better, more important work; but on this list the correlation between JIF and self-citation rate is -.25. Here is that relationship for the top 50 journals by JIF, with the top four by self-citation labeled (the three top-JIF journals at bottom-right are American Journal of Sociology, Annual Review of Sociology, and American Sociological Review).

[Scatterplot: JIF versus self-citation rate for the top 50 sociology journals by JIF, with the top four journals by self-citation rate labeled.]

The top four self-citers are low-JIF journals. Two of them are mentioned above, but I have no idea what role self-citation encouragement plays in that. There are other weird distortions in JIFs that may or may not be intentional. Consider the June 2015 issue of Sociological Forum, which includes a special section, "Commemorating the Fiftieth Anniversary of the Civil Rights Laws." That issue, though just a few months old, as of yesterday includes the 9 most-cited articles that the journal published in the last two years. In fact, these 9 pieces have all been cited 9 times, all by each other — and each article currently carries the designation of "Highly Cited Paper" from Web of Science (with a little trophy icon). The December 2014 issue of the same journal also gave itself an immediate 24 self-citations for a special "forum" feature. I am not suggesting the journal runs these forum discussion features to pump up its JIF, and I have nothing bad to say about their content — what's wrong with a symposium-style feature in which the authors respond to each other's work? But these cases illustrate what's wrong with using citation counts to rank journals. As Martin's piece explains, the JIF is highly susceptible to manipulation beyond self-citation promotion, for example by tinkering with the pre-publication queue of online articles, publishing editorial review essays, and of course outright fraud.

Anyway, my opinion is that journal editors should never add or request additional citations without clearly stated substantive reasons related to the content of the research and unrelated to the journal in which they are published. I realize that reasonable people disagree about this — and I encourage readers to respond in the comments below. I also hope that any editor would be willing to publicly stand by their practices, and I urge editors and journal management to let authors and readers see what they’re doing as much as possible.

However, I also think our whole journal system is pretty irreparably broken, so I put limited stock in the idea of improving its operation. My preference is to (1) fire the commercial publishers; (2) make research publication open-access, with a very low bar for publication; (3) create an organized system of post-publication review to evaluate research quality; and (4) have professional associations republish or label work to promote what's most important.

* Some relevant posts cover long review delays for little benefit; the problem of very similar publications; the harm to science done by arbitrary print-page limits; gender segregation in journal hierarchies; and how easy it is to fake data.
