Tag Archives: academia

Advice for and about ASA

Last summer the incoming American Sociological Association President, Michèle Lamont, asked me to offer some advice to ASA about open access publishing issues. It was an open-ended request, and I didn’t know how to go about it. My understanding of ASA is that it is not well outfitted as a change agent; it’s much more likely to respond to external developments in its ecosystem than to take the lead, especially when its revenue stream is at stake. Nevertheless, lots of good people work in and around the association, and it has great capacity. (I am involved myself, as co-editor of the ASA magazine Contexts, as chair-elect of the Family Section, and as secretary-treasurer of the Population Section.) So I wrote a short essay on what ASA might do, or what its members might do or demand of it.

It’s not coincidental that this is posted on the SocArXiv blog, SocOpen, which is part of that changing external environment that I hope will lead to ASA adapting for the better. I believe that devoting my energy to this project is producing something tangible for research and scholarly communication, while also pressuring ASA (and maybe other associations) to move in the right direction.

I hope you’ll read it on SocOpen.

Leave a comment

Filed under Me @ work

No paper, no news (#NoPaperNoNews)


In the abstract, the missions of science and science reporting align. But in the market arena both have incentives to cheat, stretch, and rush. Members of the two groups sometimes have joint interests in pumping up research findings. Reporters feel pressure to get scoops on cutting-edge research, research they want to appear important as well as true — so they may want to avoid a pack of whining, jealous tweed-wearers seen as more hindrance than help. And researchers (and their press offices) want splashy, positive coverage of their discoveries that isn’t bogged down by the objections of all those whining, jealous tweed-wearers either.

Despite some bad incentives, the alliance between good researchers and good reporters may be growing stronger these days, with the potential to help stem the daily tide of ridiculous stories. Partly due to social media interaction, it’s become easier for researchers to ping reporters directly about their research, or about a problem with a story; and it’s become easier for reporters to find and contact researchers, whether to cover their work or to get comment and analysis on research they’re covering. The result is an increase in research reporting that is skeptical and exploratory rather than just exuberant or exaggerated. Some of this rapid interaction between expert researchers and expert reporters, in fact, operates as a layer of improved peer review, subjecting potentially important research to more intense vetting at just the right moment.

Those of us in these relationships who want to do the right thing really do need each other. And one way to help is to encourage the development of prosocial norms and best practices. To that end, I think we should agree on a No Paper No News pact. Let’s pledge:

  • If you are a researcher, or a university press office, and you want your research covered, free up the paper — and insist that news coverage link to it. Get the journal to open a copy, or post a preprint somewhere like SocArXiv.
  • If you are a reporter or editor, and you want to cover new research, insist that the researcher, university, or journal provide open access to the paper — then link to it.
  • If you are a consumer of science or research reporting, and you want to evaluate news coverage, look for a clear link to an open access copy of the paper. If you don’t see one, flag it with the #NoPaperNoNews tag, and pressure the news/research collaborators to comply with this basic best practice.

This is not an extremist approach. I’m not saying we must require complete open access to all research (something I would like to see, of course). And this is not dissing the peer review process, which, although seriously flawed in its application, is basically a good idea. But peer review is nothing like a guarantee that research is good, and it’s even less a guarantee that research as translated through a news release and then a reporter and an editor is reliable and responsible. #NoPaperNoNews recognizes that when research enters the public arena through the news media, it may become important in unanticipated ways, and it may be subject to more irresponsible uses, misunderstandings, and exploitation. Providing direct access to the research product itself makes it possible for concerned people to get involved and speak up if something is going wrong. It also enhances the positive impact of the research reporting, which is great when the research is good.

Plenty of reporters, editors, researchers, and universities practice some version of this, but it’s inconsistent. For example, the American Sociological Association currently has a news release up about a paper in the American Sociological Review, by Paula England, Jonathan Bearak, Michelle Budig, and Melissa Hodges. And, as is now usually the case, that paper was selected by the ASR editors to be the freebie of the month, so it’s freely available. But the news release (which lists only England as an author) doesn’t link to the paper. Some news reports link to the free copy, but some don’t. ASA could easily add boilerplate language to its news releases, firmly suggesting that coverage link to the original paper, which is freely available.

Some publishers support this kind of approach, laying out free copies of breaking-news research. But some don’t. In those cases, reporters and researchers can work together to make preprint versions available. In the social sciences, you can easily and immediately put a preprint on SocArXiv and add the link to the news report (to see which version you are free to post — pre-review, post-review, pre-edit, post-edit, etc. — consult your author agreement or look up the journal in the Sherpa/Romeo database).

This practice is easy to enforce because it’s simple and technologically easy. When a New York Times reporter says, “I’d love to cover this research. Just tell me where I can link to the paper,” most researchers, universities, and publishers will jump to accommodate them. The only people who will want to block it are bad actors: people who don’t want their research scrutinized, reporters who don’t want to be double-checked, publishers who prioritize income over the public good.

#NoPaperNoNews

2 Comments

Filed under In the news

Do we get tenure for this?

My photo of Utah; for the occasion I titled it “Openness.” https://flic.kr/p/FShb6d

Colleen Flaherty at Inside Higher Ed has written up the American Sociological Association’s committee report, “What Counts? Evaluating Public Communication in Tenure and Promotion.”

I was a member of the ASA Subcommittee on the Evaluation of Social Media and Public Communication in Sociology, which was chaired by Leslie McCall when it produced the report. (It is a subcommittee of the Task Force on Engaging Sociology, convened by then-President Annette Lareau.)

It’s worth reading the whole article, which also includes comments from Sara Ovink, McCall, and me, in addition to the report. Having thought about this issue a little, I was happy to respond to Flaherty’s request for comment. These are the full comments I sent her, from which she quoted in the article:

1. We don’t need credit toward promotion for everything we do. Scholars who take a public-facing stance in their work often find that it enhances the quality and quantity of their work in the traditional fields of assessment (research, teaching, service), so that separately rewarding the public work is not always necessary. I don’t need credit for having a popular blog – that work has led to new research ideas, better feedback on my research, better grad students, teaching ideas, invitations to contribute to policy, and book contracts.

2. We’d all love to be promoted for authoring a great tweet, but no one wants to be fired for a bad one. Assessment of public engagement needs to be holistic and qualitative, taking into account the quality, quantity, and impact of the work. Simplistic quantitative metrics will not be useful.

3. It is also important to value and reward openness in our routine work, such as posting working papers, publishing in open access journals, sharing replication files, and disseminating open teaching materials. Public engagement does not need to mean separate activities and products, but can mean taking a public-facing stance in our existing work.

The SocArXiv project is one outcome of these conversations (links: latest info, submit a paper), especially relating to point #3 above. Academics who open up their work should be recognized for that contribution to the public good and for promoting the future of academia. In that spirit, I also proposed a rule change for the ASA Dissertation Award, which now includes this:

To be eligible for the ASA Dissertation Award, candidates’ dissertations must be publicly available in Dissertation Abstracts International or a comparable outlet. Dissertations that are not available in this fashion will not be considered for the award.

It’s hard to change everything, but it’s not that hard to make some important changes. Rewarding engagement and openness is a step in the right direction.

1 Comment

Filed under Uncategorized

SocArXiv in development


Readers of the blog have become familiar with my complaints about our publishing system (scan the academia tag for examples): it’s needlessly slow, inefficient, hierarchical, profit-driven, exploitative, and also doesn’t work well.

Simple example: a junior scholar sends a perfectly reasonable sociology paper to a high-status journal. The editor commissions three anonymous reviews, and four months later the paper is rejected on the basis of a few hours of their volunteer labor. This increases the value — and subscription price — of the for-profit journal, because its high rejection rate is a key selling point. The author will now revise the paper (some of the advice was good, but nothing suggested the analysis or conclusions were actually wrong) and send it to another journal, where three more anonymous reviewers — having no access to the previous round of review and exchange — will donate a few hours’ labor to a different for-profit publisher. In a few months we’ll find out what happens. Repeat. The outcome will be a good paper, improved by the process, published 1-3 years after it was written — during which time the paper, the code, and the data were not available to anyone else. It will be available for $39.95 to non-academics, but most of the people who are aware of it will be able to read it because their institutions buy it as part of a giant bundle of journals from the publisher. The writer may get a job and, later, tenure. Thus, the process produces a good paper, inaccessible to most of the world, as well as a person dependent on the process, one with the institutional position and incentive to perpetuate it for another generation. There’s more wrong than this, but that’s the basic idea. The system is not completely non-functional; it’s just very bad.

With current technology, replacing our outdated journal system is not difficult. We could save vast amounts of money while providing free, faster access to research for everyone. Like our healthcare system, academic publishing is laboring under the weight of supporting its usurious middlemen. Getting them out of the way is a problem of politics and organization, not technology or cost. We academics do all the work already – research, writing, reviewing, editing – contributing our labor without compensation to giant companies that claim to be helping us get and keep our incredibly privileged jobs. But most of us are supported directly or indirectly by the state and our students (or their banks), not the journal publishers. We don’t need most of what the journal publishers do any more, and working for them is degrading our research, making it less innovative and transformative, less engaging and engaged, less open and accountable.

SocArXiv

The people in math and physics developed a workaround for this system in arXiv.org, where they share papers before peer review. Other paper servers have arisen as well, including some run by universities, some run privately for profit, and some specific to particular disciplines. But there is a need for a new general, open-access, open-source paper server for the social sciences, one that encourages linking and sharing data and code, that serves its research to an open metadata system, and that provides the foundation for a post-publication review system. I hope that SocArXiv will enable us to save research from the journal system. Once it’s built, anyone will be able to use it to organize their own peer-review community, to select and publish papers (though not exclusively), to review and comment on each other’s work — and to discover, cite, value, and share research unimpeded. We will be able to do this because of the brilliant efforts of the Center for Open Science (which is already developing a new preprint server) and SHARE (“a free, open, data set about research and scholarly activities across their life cycle”).

And we hope you’ll get involved: sharing research, reviewing, moderating, editing, mobilizing. Lots to do, but the good news is we’re doing most of this work already.

SocArXiv won’t take over this blog, though. You can read more about the project, and see the steering committee, in the announcement of our partnership. For updates, you can follow us on Twitter or Facebook, or email us to add your name to the mailing list. You can also make a tax-deductible contribution to SocArXiv through the University of Maryland here.

When your paper is ready, check SocArXiv.org.

5 Comments

Filed under Me @ work

Perspective on sociology’s academic hierarchy and debate


Keep that gate. (Photo by Rob Nunn, https://flic.kr/p/4DbzCG)

It’s hard to describe the day I got my first acceptance to American Sociological Review. There was no social media back then so I have no record of my reaction, but I remember it as the day — actually, the moment, as the conditional acceptance slipped out of the fax machine — that I learned I was getting tenure, that I would have my dream job for the rest of my life, with a personal income in the top 10 percent of the country for a 9-month annual commitment. At that moment I was not inclined to dwell on the flaws in our publishing system, its arbitrary qualities, or the extreme status hierarchy it helps to construct.

In a recent year ASR considered more than 700 submitted articles and rejected 90% or more of them (depending on how you count). Although many people dispute the rationality of this distinction, publishing in our association’s flagship journal remains the most universally agreed-upon indicator of scholarship quality. And it is rare. I randomly sampled 50 full-time sociology faculty listed in the 2016 ASA Guide to Graduate Departments of Sociology (working in the U.S. and Canada), and found that 9, or 18%, had ever published a research article in ASR.
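For perspective on how precise that 18% figure is, here is a quick back-of-the-envelope check (a simple normal-approximation confidence interval for a sample proportion, offered only as an illustration of the sampling arithmetic):

```python
import math

# From the sample described above: 9 of 50 randomly sampled
# full-time sociology faculty had ever published a research
# article in ASR.
successes, n = 9, 50
p_hat = successes / n  # sample proportion: 0.18

# Normal-approximation 95% confidence interval for the proportion,
# a rough gauge of the uncertainty in an n = 50 sample.
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"point estimate: {p_hat:.0%}, 95% CI: ({lo:.0%}, {hi:.0%})")
```

The interval is wide (roughly 7% to 29%), so the 18% figure is only a rough estimate; but even at the upper bound, publishing in ASR remains a minority experience among sociology faculty.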

Not only is it rare, but publication in ASR is highly concentrated in high-status departments (and individuals). While many departments have no faculty who have published in ASR (I didn’t count these, but there are a lot), some departments are brimming with them. In my own, second-tier department, I count 16 out of 27 faculty with publications in ASR (59%), while at a top-tier, article-oriented department such as the University of North Carolina at Chapel Hill (where I used to work), 19 of the 25 regular faculty, or 76%, have published in ASR (many of them multiple times).

Without diminishing my own accomplishment (or that of my co-authors), or the privilege that got me here, I should be clear that I don’t think publication in high-status journals is a good way to identify and reward scholarly accomplishment and productivity. The reviews and publication decisions are too uneven (although obviously not completely uncorrelated with quality), and the limit on articles published is completely arbitrary in an era in which the print journal and its cost-determined page limit are simply ridiculous.

We have a system that is hierarchical, exclusive, and often arbitrary — and the rewards it doles out are both large and highly concentrated.

I say all this to put in perspective the grief I have gotten for publicly criticizing an article published in ASR. In that post, I specifically did not invoke ethical violations or speculate on the motivations or non-public behavior of the authors, about whom I know nothing. I commented on the flaws in the product, not the process. And yet a number of academic critics responded vociferously to what they perceive as the threats this commentary posed to the academic careers and integrity of the authors whose work I discussed. Anonymous critics called my post “obnoxious, childish, time wasting, self promoting,” and urged sociologists to “shun” me. I have been accused of embarking on a “vigilante mission.” In private, a Jewish correspondent referred me to the injunction in Leviticus against malicious gossip in an implicit critique of my Jewish ethics.*

In the 2,500-word response I published on my site — immediately and unedited — I was accused of lacking “basic decency” for not giving the authors a chance to prepare a response before I posted the criticism on my blog. The “commonly accepted way” when “one scholar wishes to criticize the published work of another,” I was told, is to go through a process of submitting a “comment” to the journal that published the original work, which “solicits a response from the authors who are being criticized,” and it’s all published together, generally years later. (Never mind that journals have no obligation or particular inclination to publish such debates, as I have reported on previously, when ASR declined for reasons of “space” to publish a comment pointing out errors that were not disputed by the editors.)

This desire to maintain gatekeepers to police and moderate our discussion of public work is not only quaint, it is corrosive. Despite pointing out uncomfortable facts (which my rabbinical correspondent referred to as the “sin of true speech for wrongful purpose”), my criticism was polite, reasoned, with documentation — and within the bounds of what would be considered highly civil discourse in any arena other than academia, apparently. Why are the people whose intellectual work is most protected most afraid of intellectual criticism?

In his terrible book, The Sacred Project of American Sociology (reviewed here), Christian Smith complains explicitly about the decline of academic civilization’s gatekeepers:

The Internet has created a whole new means by which the traditional double-blind peer-review system may be and already is in some ways, I believe, being undermined. I am referring here to the spate of new sociology blogs that have sprung up in recent years in which handfuls of sociologists publicly comment upon and often criticize published works in the discipline. The commentary published on these blogs operates outside of the gatekeeping systems of traditional peer review. All it takes to make that happen is for one or more scholars who want to amplify their opinions into the blogosphere to set up their own blogs and start writing.

Note he is complaining about people criticizing published work, yet believes such criticism undermines the blind peer-review system. This fear is not rational. The terror over public discussion and debate — perhaps especially among the high-status sociologists who happen to also be the current gatekeepers — probably goes a long way toward explaining our discipline’s pitiful response to the crisis of academic publishing. According to my (paywalled) edition of the Oxford English Dictionary, the definition of “publish” is “to make public.” And yet to hear these protests you would think the whisper of a public comment poses an existential threat to the very people who have built their entire profession around publishing (though, to be consistent, it’s mostly hidden from the public behind paywalls).

This same fear leads many academics to insist on anonymity even in normal civil debates over research and our profession. Of course there are risks, as there tend to be when people make important decisions about things that matter. But at some point, the fear of repression for expressing our views (which is legitimate in some rare circumstances) starts looking more like avoidance of the inconvenience or discomfort of having to stand behind our words. If academics are really going to lose their jobs for getting caught saying, “Hey, I think you were too harsh on that paper,” then we are definitely having the wrong argument.

“After all,” wrote Eran Shor, “this is not merely a matter of academic disagreements; people’s careers and reputations are at stake.” Of course, everyone wants to protect their reputation — and everyone’s reputation is always at stake. But let’s keep this in perspective. For those of us at or near the top of this prestige hierarchy — tenured faculty at research universities — damage to our reputations generally poses a threat only within a very narrow bound of extreme privilege. If my reputation were seriously damaged, I would certainly lose some of the perks of my job. But the penalty would also include a decline in students to advise, committees to serve on, and journals to edit — and no change in that lifetime job security with a top-10% salary for a 9-month commitment. Of course, for those of us whose research really is that important, anything that harms our ability to work in exactly the way that we want to has costs that simply cannot be measured. I wouldn’t know about that.

But if we want the high privilege of an academic career — and if we want a discipline that can survive under scrutiny from an increasingly impatient public and deepening market penetration — we’re going to have to be willing to defend it.

* I think if random Muslims have to denounce ISIS then Jews who cite Leviticus on morals should have to explain whether — despite the obvious ethical merit to some of those commands — they also support the killing of animals just because they have been raped by humans.

5 Comments

Filed under Uncategorized

Eran Shor responds

On May 8 I wrote about three articles by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena. (My post is here; the articles, in chronological order, are available in full here, here, and here.)

Eran Shor, associate professor of sociology at McGill University and first author of the papers in question, has sent me the following response, which I agreed to post unedited. I have not heard from the other authors, and Shor does not claim to speak for them here. I’m not responding to this now, except to say that I stand by the original post. Feel free to make (moderated) comments below.

Eran Shor’s response

We would like to thank Philip N. Cohen for posting this response to his blog, unedited.

Philip N. Cohen wrote a post in which he targets three of our recently published articles and claims that these are overlapping and misleading to readers. On the one hand, Cohen clarifies that he is “not judging Shor et al. for any particular violation of specific rules or norms” and “not judging the quality of the work overall.” However, in his conclusions he speaks about “overlapping papers”, “selling duplicative information as new” and “misleading readers”. We feel this terminology more than just hints at intentional wrongdoing. The first response to the blog outright accuses us of self-plagiarism, deceitfulness, and having questionable ethics, which we believe is directly the result of Cohen’s suggestive language.

Below, we explain why we feel that these accusations are unfair and mostly unsubstantiated. We also reflect on the debate over open science and on the practice of writing blogs that make this kind of accusation without first giving authors the chance to respond.

As for the three articles in question, we invite readers to read these for themselves and judge whether each makes a unique and original contribution. To quickly summarize these contributions as we see them:

  • The first article, published in Journalism Studies (2014), focuses on the historical development of women’s media representation, presenting new data that goes back to the 19th Century and discussing the historical shifts and differences between various sections of the newspaper.
  • The second article, published in Social Science Quarterly (2014) begins to tackle possible explanations for the persistent gap in representation and specifically focuses on the question of political partisanship in the media and its relationship with gendered coverage patterns. We use two separate measures of newspapers’ political slant and conduct bivariate analyses that examine the association between partisanships and representation.
  • The third article was published in the American Sociological Review (2015). In it, we conducted a wide-scope examination of a large variety of possible explanations for the persistent gender gap in newspapers. We presented a large gamut of new and original data and analyses (both bivariate and multivariate), which examined explanations such as “real-world” inequalities, newsroom composition, and various other factors related to both the newspapers themselves and the cities and states in which they are located.

Of note, these three articles are the result of more than seven years of intensive data collection from a wide variety of sources and multiple analyses, leading to novel contributions to the literature. We felt (and still do) that these various contributions could not have been clearly fleshed out in one article, not even a longer article, such as the ones published by the American Sociological Review.

Now for the blog: Did SSQ really “scoop” ASR?

First, where we agree with Cohen’s critique: the need to indicate clearly when one is presenting a piece of data or a figure that already appeared in another paper. Here, we must concede that we have failed, although certainly not intentionally. The reason we dropped the ball on this one is the well-established need to try to conceal one’s identity as long as a paper is under review, in order to maintain the standard of double-blind review. Clearly, we should have been more careful in checking the final copies of the SSQ and ASR articles and added a clarification stating that the figure had already appeared in an earlier paper. Alternatively, we could also have dropped the figure from the paper and simply referred readers to the JS paper, as this figure was not an essential component of either of the latter two papers. As for the issue of the missing year in the ASR paper, this was simply a matter of re-examining the data and noting that the data for 1982 was not strong enough, as it relied on too few data points and a smaller sample of newspapers, and therefore was not equivalent to data from the following years. We agree, however, that we should have clarified this in the paper.

That said, as Cohen also notes in his blog, in each of the two latter papers (SSQ and ASR), the figure in question is really just a minor descriptive element, the starting point for a much larger—and different in each article—set of data and analyses. In both cases, we present the figure at the beginning of the paper in order to motivate the research questions and subsequent analyses and we do not claim that this is one of our novel findings or contributions. The reason we reproduced this figure is that reviewers of previous versions of the paper asked us to demonstrate the persistent gap between women and men in the news. The figure serves as a parsimonious and relatively elegant visual way to do that, but we also presented data in the ASR paper from a much larger set of newspapers that establishes this point. Blind-review norms prevented us from referring readers to our own work in a clear way (that is, going beyond simply including a citation to the work). Still, as noted above, we take full responsibility for not making sure to add a clearer reference to the final version. But we would like to emphasize that there was no intentional deceit here, but rather a simple act of omission. This should be clear from the fact that we had nothing to gain from not referring to our previous work and to the fact that these previous findings motivated the new analyses.

Duplicated analyses?

As for the other major charge against us in the blog, it is summarized in the following paragraph:

It looks to me like in the SSQ and ASR papers they used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they are using the same data, how they got from a “weak to moderate relationship” to “no significant relationship” seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which “some say this, some say that”? What kind of way is this to figure out what’s going on?

Here we feel that Cohen is presenting a false picture of what we did in the two papers. First, the SSQ paper actually used two measures of political slant. For both measures, we presented bivariate analyses of the relationship between slant and coverage (which indeed shows a weak to moderate relationship). It is important to stress here that this somewhat rudimentary analysis was simply a matter of data availability: The bivariate cross-sectional analysis was the best analysis we could perform at the time when the paper was accepted for publication (end of 2013), given the data that we had available. However, during the following two years, leading to the publication of the article in ASR at the end of 2015, we engaged in an extensive and time-consuming process of collecting and coding additional data. This effort allowed us to code longitudinal data on important characteristics of newspapers (e.g., identity of editors and publishers and various city and state-related characteristics) for a subset of the newspapers in our sample and for six consecutive years.

And so, as is often the case when moving from a cross-sectional bivariate analysis (SSQ) to a longitudinal multivariate one (ASR), the previous weak relationship that we found for slant basically disappeared, or rather, it became non-significant. In the ASR paper, we did refer readers to the results of our previous study, although without providing details about the analysis itself, because we did not wish to single out our own work in a paragraph that also briefly cites the results of other similar studies (Potter (1985); Adkins Covert and Wasburn (2007)). Perhaps we should have clarified better in the final draft that we previously examined this relationship in a cross-sectional bivariate analysis. But this is a far cry from the allegation that we were reproducing the same analysis, or that we intentionally concealed evidence, both of which are simply false.

To be clear: While the SSQ paper presented a cross-sectional bivariate analysis, in the ASR paper we used a somewhat different sample of papers to perform a longitudinal multivariate analysis, in which newspaper slant was but one of many variables. These are important differences (leading to different results), which we believe any careful reader of the two papers can easily detect. Readers of the ASR article will also notice that the testing of the political slant question is not a major point in the paper, nor is it presented as such. In fact, this variable was originally included as merely a control variable, but reviewers asked that we flesh out the theoretical logic behind including it, which we did. We therefore feel that Cohen’s above comment (in parentheses)—“In addition to whatever else is new in the third paper”—is unfair to the point of being disingenuous. It ignores the true intent of the paper and its (many) unique contributions, for the purpose of more effectively scoring his main point.

As for the paragraph that Cohen cites in the blog, in which we use very similar language to theoretically justify the inclusion of the newspaper slant variable in the ASR analysis, we would like to clarify that this was simply the most straightforward way of conveying this theoretical outline. We make no pretense or implication whatsoever that this passage adds anything new or very important in its second iteration. And again, we do refer the readers to the previous work (including our own), which found conflicting evidence regarding this question in cross-sectional bivariate analyses. Knowledge is advanced by building on previous work (one's own and others') and adding to it, and this is exactly what we do here.

Misleading readers and selling duplicated information as new?

Given our clarifications above, we feel that the main charges against us are unjust. The three papers in question are by no means overlapping duplications (although the one particular descriptive figure is). In fact, none of the analyses in SSQ and ASR are overlapping, and each paper made a unique contribution at the time it was published. Furthermore, the charges that we are “selling duplicate information as new” and “misleading readers” clearly imply that we have been duplicitous and dishonest in this research effort. It is not surprising that such inflammatory language ended up inciting respondents to the blog to accuse us flatly of self-plagiarism, deceitfulness, and questionable ethics.

In response to such accusations, we once again wish to state very clearly that at no point did we intend to deceive readers or intentionally omit information about previous publications. While we admit to erring in not clearly mentioning in the caption of the figure that it was already reported in a previous study, this was an honest mistake, and one which did not and could not be to our benefit in any way whatsoever. An error of omission it was, but not a violation of any ethical norms.

Is open access the solution?

We would also like to comment briefly on the blog's more general claim that the system is broken and that the solution lies in open access. We would actually like to express our firm support for Cohen's general efforts to promote open science. We also agree with the need to both carefully monitor and rethink our publication system, as well as with the call for open access to journals and a more transparent reviewing process. All of these would bring important benefits, and the conversation over them should continue.

However, we question the assumption that in this particular case an open access system would have solved the problem of not mentioning that our figure appeared in previous articles. As we note above, this omission was actually triggered by the blind review system and our attempts to avoid revealing our identity during the reviewing process (and later on by our failure to remember to add more direct references to our previous work in the final version of the article). But surely, most reviewers who work at academic institutions have access through their local libraries to a broad range of journals, including ones that are behind paywalls (and certainly to most mainstream journals). Our ASR paper was reviewed by nine different anonymous reviewers, as well as by the editorial board. It seems reasonable to assume that virtually all of them would have been able to access the previous papers published in mainstream journals. So the fact that our previous articles were published in journals with paywalls seems neither here nor there for the issues Cohen raises about our work.

A final word: On the practice of making personal accusations in a blog without first soliciting a response from the authors

The commonly accepted way to proceed in our field is that when one scholar wishes to criticize the published work of another, they write a comment to the journal that published the article. The journal then solicits a response from the authors who are being criticized, and a third party then decides whether the exchange is worthy of publication. Readers then get the chance to read both the critique and the response at the same time and decide which point of view they find more convincing. It seems to us that this is the decent way of proceeding in cases where there are different points of view or disagreement over scholarly findings and interpretations. This is all the more true when the critique involves charges (or hints) of unethical behavior and academic dishonesty; in such cases, this norm of basic decency seems to us even more important. In our case, Professor Cohen did not approach us and did not ask us to respond to the accusations against us. In fact, we only learned about the post by happenstance, when a friend directed our attention to the blog. When we responded and asked Cohen to make some clarifications to the original posting, we were turned down, although Cohen kindly agreed to publish this comment to his blog, unedited, for which we are thankful.

Of course, online blogs are not expected to honor the norms of civilized scholarly debate to the letter. They are a different kind of forum. Clearly, they have their advantages in terms of both speed and accessibility, and they form an important part of the current academic discourse. But it seems to us that, especially in such cases where allegations of ethically questionable conduct are being made, the authors of blogs should adopt a more careful approach. After all, this is not merely a matter of academic disagreements; people’s careers and reputations are at stake. We would like to suggest that in such cases the authors of blogs should err on the side of caution and allow authors to defend themselves against accusations in advance and not after the fact, when much of the damage has already been done.


27 Comments

Filed under Me @ work

How broken is our system (hit me with that figure again edition)

Why do sociologists publish in academic journals? Sometimes it seems improbable that the main goal is sharing information and advancing scientific knowledge. Today's example of our broken system, brought to my attention by Neal Caren, concerns three papers by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena (Shor et al.).

May 13, 2016 update: Eran Shor has sent me a response, which I posted here.

In a paywalled 2013 paper in Journalism Studies, the team used an analysis of names appearing in newspapers to report the gender composition of people mentioned. They analyzed the New York Times back to 1880, and then a larger sample of 13 newspapers from 1982 through 2005. Here’s one of their figures:

[Figure: shor1 — gender composition of people mentioned in newspapers over time]

The 2013 paper was a descriptive analysis, establishing that men are mentioned more than women over time.

In a paywalled 2014 article in Social Science Quarterly (SSQ) the team followed up. Except for a string-cite mention in the methods section, the second paper makes no reference to the first, giving no indication that the two are part of a developing project. They use this figure to motivate the analysis in the second paper, with no acknowledgment that it also appeared in the first:

[Figure: shor2 — the same figure, as it appears in the SSQ paper]

Shor et al. 2014 asked,

How can we account for the consistency of these disparities? One possible factor that may explain at least some of these consistent gaps may be the political agendas and choices of specific newspapers.

Their hypothesis was:

H1: Newspapers that are typically classified as more liberal will exhibit a higher rate of female-subjects’ coverage than newspapers typically classified as conservative.

After analyzing the data, they concluded:

The proposition that liberal newspapers will be more likely to cover female subjects was not supported by our findings. In fact, we found a weak to moderate relationship between the two variables, but this relationship is in the opposite direction: Newspapers recognized (or ranked) as more “conservative” were more likely to cover female subjects than their more “liberal” counterparts, especially in articles reporting on sports.

They offered several caveats about this finding, including that the measure of political slant used is “somewhat crude.”

Clearly, much more work to be done. The next piece of the project was a 2015 article in American Sociological Review (which, as the featured article of the issue, was not paywalled by Sage). Again, without mentioning that the figure has been previously published, and with one passing reference to each of the previous papers, they motivated the analysis with the figure:

[Figure: shor3 — the same figure again, as it appears in the ASR paper]

Besides not getting the figure in color, ASR readers for some reason also don't get 1982 in the data. (The paper makes no mention of the difference in period covered, which makes sense because it never mentions any connection to the analysis in the previous paper.) The ASR paper asks of this figure, "How can we account for the persistence of this disparity?"

By now I bet you’re thinking, “One way to account for this disparity is to consider the effects of political slant.” Good idea. In fact, in the depiction of the ASR paper, the rationale for this question has hardly changed at all since the SSQ paper. Here are the two passages justifying the question.

From SSQ:

Former anecdotal evidence on the relationship between newspapers’ political slant and their rate of female-subjects coverage has been inconclusive. … [describing studies by Potter (1985) and Adkins Covert and Wasburn (2007)]…

Notwithstanding these anecdotal findings, there are a number of reasons to believe that more conservative outlets would be less likely to cover female subjects and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s rights issues in a relatively negative light (Baker Beck, 1998; Brescoll and LaFrance, 2004). Therefore, they may be less likely to devote coverage to these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors […]. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally (that is, conservatively) considered to be more important or interesting, such as politics, business, and sports, and less likely to report on issues such as social welfare, education, or fashion, where according to research women have a stronger presence (Holland, 1998; Ross, 2007, 2009; Ross and Carter, 2011).

From ASR:

Some work suggests that conservative newspapers may cover women less (Potter 1985), but other studies report the opposite tendency (Adkins Covert and Wasburn 2007; Shor et al. 2014a).

Notwithstanding these inconclusive findings, there are several reasons to believe that more conservative outlets will be less likely to cover women and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s issues in a relatively negative light (Baker Beck 1998; Brescoll and LaFrance 2004), making them potentially less likely to cover these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally considered more important or interesting, such as politics, business, and sports, rather than reporting on issues such as social welfare, education, or fashion, where women have a stronger presence.

Except for a passing mention among the “other studies,” there is no connection to the previous analysis. The ASR hypothesis is:

Conservative newspapers will dedicate a smaller portion of their coverage to females.

On this question in the ASR paper, they conclude:

our analysis shows no significant relationship between newspaper coverage patterns and … a newspaper’s political tendencies.

It looks to me like in the SSQ and ASR papers they used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they are using the same data, how they got from a "weak to moderate relationship" to "no significant relationship" seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which "some say this, some say that"? What kind of way is this to figure out what's going on?

Still love your system?

It’s fine to report the same findings in different venues and formats. It’s fine, that is, as long as it’s clear they’re not original in the subsequent tellings. (I personally have been known to regale my students, and family members, with the same stories over and over, but I try to remember to say, “Stop me if I already told you this one” first.)

I’m not judging Shor et al. for any particular violation of specific rules or norms. And I’m not judging the quality of the work overall. But I will just make the obvious observation that this way of presenting ongoing research is wasteful of resources, misleading to readers, and a hindrance to the development of research.

  • Wasteful because reviewers, editors, and publishers are essentially duplicating their efforts to try to figure out what is actually to be learned from these overlapping papers — and then to repackage and sell the duplicative information as new.
  • Misleading to readers because we now have “many studies” that show the same thing (or different things), without the clear acknowledgment that they use the same data.
  • And hindering research because of the wasteful delays and duplicative expenses involved in publishing research that should be clearly presented in cumulative, transparent fashion, in a timely way — which is what we need to move science forward.

Open science

When making (or hearing) arguments against open science as impractical or unreasonable, just weigh the wastefulness, misleadingness, and obstacles to science so prevalent in the current system against whatever advantages you think it holds. We can’t have a reasonable conversation about our publishing system based on the presumption that it’s working well now.

In an open science system researchers publish their work openly (and free) with open links between different parts of the project. For example, researchers might publish one good justification for a hypothesis, with several separate analyses testing it, making clear what’s different in each test. Reviewers and readers could see the whole series. Other researchers would have access to the materials necessary for replication and extension of the work. People would be judged for hiring and promotion according to the actual quality and quantity of their work and the contribution it makes to advancing knowledge, rather than through arbitrary counts of “publications” in private, paywalled journals. (The non-profit Center for Open Science is building a system like this now, and offers a free Open Science Framework, “A scholarly commons to connect the entire research cycle.”)

There are challenges to building this new system, of course, but any assessment of those challenges needs to be clear-eyed about the ridiculousness of the system we’re working under now.

Previous related posts have covered very similar publications, the opposition to open access, journal self-citation practices, and one publication’s saga.

12 Comments

Filed under Research reports