Tag Archives: open science

Update on SocArXiv and social science without walls


Meanwhile, over at SocArXiv, we’re working on revolutionizing the research process and how we communicate about it in the social sciences. You can follow the exploits of the SocArXiv project on our blog, SocOpen, where you can read our most recent posts.

That’s the update!


Filed under Me @ work

Do we get tenure for this?

My photo of Utah, which for the occasion I titled “Openness.” https://flic.kr/p/FShb6d

Colleen Flaherty at Inside Higher Ed has written up the American Sociological Association’s committee report, “What Counts? Evaluating Public Communication in Tenure and Promotion.”

I was a member of the ASA Subcommittee on the Evaluation of Social Media and Public Communication in Sociology, which was chaired by Leslie McCall when it produced the report. (It is a subcommittee of the task force on engaging sociology, convened by then-President Annette Lareau.)

It’s worth reading the whole article, which also includes comments from Sara Ovink, McCall and me, in addition to the report. Having thought about this issue a little, I was happy to respond to Flaherty’s request for comment. These are the full comments I sent her, from which she quoted in the article:

1. We don’t need credit toward promotion for every thing we do. Scholars who take a public-facing stance in their work often find that it enhances the quality and quantity of their work in the traditional fields of assessment (research, teaching, service), so that separately rewarding the public work is not always necessary. I don’t need credit for having a popular blog – that work has led to new research ideas, better feedback on my research, better grad students, teaching ideas, invitations to contribute to policy, and book contracts.

2. We’d all love to be promoted for authoring a great tweet, but no one wants to be fired for a bad one. Assessment of public engagement needs to be holistic and qualitative, taking into account the quality, quantity, and impact of the work. Simplistic quantitative metrics will not be useful.

3. It is also important to value and reward openness in our routine work, such as posting working papers, publishing in open access journals, sharing replication files, and disseminating open teaching materials. Public engagement does not need to mean separate activities and products, but can mean taking a public-facing stance in our existing work.

The SocArXiv project is one outcome of these conversations (see the latest info, or submit a paper), especially relating to point #3 above. Academics who open up their work should be recognized for that contribution to the public good and for promoting the future of academia. In that spirit, I also proposed a rule change for the ASA Dissertation Award, which now includes this:

To be eligible for the ASA Dissertation Award, candidates’ dissertations must be publicly available in Dissertation Abstracts International or a comparable outlet. Dissertations that are not available in this fashion will not be considered for the award.

It’s hard to change everything, but it’s not that hard to make some important changes. Rewarding engagement and openness is an important step in the right direction.

1 Comment

Filed under Uncategorized

SocArXiv in development


Readers of the blog have become familiar with my complaints about our publishing system (scan the academia tag for examples): it’s needlessly slow, inefficient, hierarchical, profit-driven, exploitative, and also doesn’t work well.

Simple example: a junior scholar sends a perfectly reasonable sociology paper to a high-status journal. The editor commissions three anonymous reviews, and four months later the paper is rejected on the basis of a few hours of their volunteer labor. This increases the value — and subscription price — of the for-profit journal, because its high rejection rate is a key selling point. The author will now revise the paper (some of the advice was good, but nothing to suggest the analysis or conclusions were actually wrong) and send it to another journal, where three more anonymous reviewers — having no access to the previous round of review and exchange — will donate a few hours’ labor to a different for-profit publisher. In a few months we’ll find out what happens. Repeat. The outcome will be a good paper, improved by the process, published 1-3 years after it was written — during which time the paper, the code, and the data were not available to anyone else. It will be available for $39.95 to non-academics, but most of the people who are aware of it will be able to read it because their institutions buy it as part of a giant bundle of journals from the publisher. The writer may get a job and, later, tenure. Thus, the process produces a good paper, inaccessible to most of the world, as well as a person dependent on the process, one with the institutional position and incentive to perpetuate it for another generation. There’s more wrong than this, but that’s the basic idea. The system is not completely non-functional; it’s just very bad.

With current technology, replacing our outdated journal system is not difficult. We could save vast amounts of money while providing free, faster access to research for everyone. Like our healthcare system, academic publishing is laboring under the weight of supporting its usurious middlemen. Getting them out of the way is a problem of politics and organization, not technology or cost. We academics do all the work already – research, writing, reviewing, editing – contributing our labor without compensation to giant companies that claim to be helping us get and keep our incredibly privileged jobs. But most of us are supported directly or indirectly by the state and our students (or their banks), not the journal publishers. We don’t need most of what the journal publishers do any more, and working for them is degrading our research, making it less innovative and transformative, less engaging and engaged, less open and accountable.

SocArXiv

The people in math and physics developed a workaround for this system in arXiv.org, where people share papers before they are peer-reviewed. Other paper servers have arisen as well, including some run by universities and some run privately for profit, some in specific disciplines. But there is a need for a new general, open-access, open-source paper server for the social sciences, one that encourages linking and sharing data and code, that serves its research to an open metadata system, and that provides the foundation for a post-publication review system. I hope that SocArXiv will enable us to save research from the journal system. Once it’s built, anyone will be able to use it to organize their own peer-review community, to select and publish papers (though not exclusively), to review and comment on each other’s work — and to discover, cite, value, and share research unimpeded. We will be able to do this because of the brilliant efforts of the Center for Open Science (which is already developing a new preprint server) and SHARE (“a free, open, data set about research and scholarly activities across their life cycle”).

And we hope you’ll get involved: sharing research, reviewing, moderating, editing, mobilizing. Lots to do, but the good news is we’re doing most of this work already.

SocArXiv won’t take over this blog, though. You can read more about the project, and see the steering committee, in the announcement of our partnership. For updates, you can follow us on Twitter or Facebook, or email to add your name to the mailing list. In fact, you can also make a tax-deductible contribution to SocArXiv through the University of Maryland here.

When your paper is ready, check SocArXiv.org.

5 Comments

Filed under Me @ work

Life table says divorce rate is 52.7%

After the eternal bliss, there are two ways out of marriage: divorce or death.

I have posted my code and calculations for divorce rates using the 2010-2012 American Community Survey as an Open Science Framework project. The files there should be enough to get you started if you want to make multiple-decrement life tables for divorce or other things.

Because the American Community Survey records year of marriage, as well as divorce and widowhood, it’s perfectly set up for a multiple-decrement life table approach. A multiple-decrement life table uses the rate of each of two exits for each year of the original state (in this case marriage) to project the probability of either exit happening at or after a given year of marriage. It’s a projection of current rates, not a prediction of what will happen. So, if you write a headline that says, “your chance of divorce if you marry today is 52.7%,” that would be too strong, because it doesn’t take into account that the world might change. Also, people are different.
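To make the mechanics concrete, here is a minimal sketch of the multiple-decrement projection in Python. The divorce and widowhood rates below are made up for illustration; the real estimates and code are in the OSF project files.

```python
# Minimal multiple-decrement life table sketch.
# The per-year divorce (d) and widowhood (w) rates here are
# hypothetical, not the ACS-based estimates from the OSF project.

def eventual_divorce_prob(divorce_rates, widow_rates):
    """For each year of marriage y, the probability that a marriage
    surviving to year y eventually ends in divorce rather than
    widowhood, holding current rates fixed (within the window)."""
    l = 1.0          # proportion of marriages still intact at year x
    survivors = []   # l_x at the start of each year
    divorces = []    # l_x * d_x: divorces occurring during year x
    for d, w in zip(divorce_rates, widow_rates):
        survivors.append(l)
        divorces.append(l * d)
        l *= (1.0 - d - w)   # both decrements remove marriages
    # Divorce at or after year y, conditional on surviving to year y
    return [sum(divorces[y:]) / survivors[y] for y in range(len(divorces))]

# Toy rates: divorce risk declines with duration, widowhood rises.
d = [0.03, 0.025, 0.02, 0.015, 0.01]
w = [0.001, 0.002, 0.004, 0.008, 0.016]
probs = eventual_divorce_prob(d, w)
print(probs[0])  # share of new marriages projected to end in divorce
```

The year-0 value plays the role of the 52.7% headline number, and the later values correspond to the “marriages that make it to year 15” reading, though a real table would run over many more years of marriage duration.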

The divorce rate of 52.7% can accurately be described like this: “If current divorce and widowhood rates remain unchanged, 52.7% of today’s marriages would end in divorce before widowhood.” Here is a figure showing the probability of divorce at or after each year of the model:

[Figure: projected probability of divorce at or after each year of marriage]

So there’s 52.7% up at year 0. Marriages that make it to year 15 have a 30% chance of eventually divorcing, and so on.

Because the ACS doesn’t record anything about the spouses of divorced or widowed people, I don’t know anything about who was married to whom: their age, education, race-ethnicity, or even the sex of the spouse. So the estimates differ by the respondent’s sex as well as other characteristics. I estimated a bunch of them in the spreadsheet file on the OSF site, but here are the bottom lines, showing, for example, that second or higher-order marriages have a 58.5% projected divorce rate, and Blacks have a 64.2% divorce rate, compared with 52.9% for Whites.

[Table: projected divorce rates by sex, marriage order, education, and race-ethnicity]

(The education estimates should be taken with a grain of salt, because education levels can change but the projection assumes they’re static.)

Check the divorce tag for other posts and papers on divorce.

The ASA-style citation to the OSF project would be like this:  Cohen, Philip N. 2016. “Multiple-Decrement Life Table Estimates of Divorce Rates.” Retrieved (osf.io/zber3).

18 Comments

Filed under Me @ work

Eran Shor responds

On May 8 I wrote about three articles by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena. (My post is here; the articles, in chronological order, are available in full here, here, and here.)

Eran Shor, associate professor of sociology at McGill University and first author of the papers in question, has sent me the following response, which I agreed to post unedited. I have not heard from the other authors, and Shor does not claim to speak for them here. I’m not responding to this now, except to say that I stand by the original post. Feel free to make (moderated) comments below.

Eran Shor’s response

We would like to thank Philip N. Cohen for posting this response to his blog, unedited.

Philip N. Cohen wrote a post in which he targets three of our recently published articles and claims that these are overlapping and misleading to readers. On the one hand, Cohen clarifies that he is “not judging Shor et al. for any particular violation of specific rules or norms” and “not judging the quality of the work overall.” However, in his conclusions he speaks about “overlapping papers”, “selling duplicative information as new” and “misleading readers”. We feel this terminology more than just hints at intentional wrongdoing. The first response to the blog outright accuses us of self-plagiarism, deceitfulness, and having questionable ethics, which we believe is directly the result of Cohen’s suggestive language.

Below, we explain why we feel that these accusations are unfair and mostly unsubstantiated. We also reflect on the debate over open science and on the practice of writing blogs that make accusations of this kind without first giving authors the chance to respond to them.

As for the three articles in question, we invite readers to read these for themselves and judge whether each makes a unique and original contribution. To quickly summarize these contributions as we see them:

  • The first article, published in Journalism Studies (2014), focuses on the historical development of women’s media representation, presenting new data that goes back to the 19th Century and discussing the historical shifts and differences between various sections of the newspaper.
  • The second article, published in Social Science Quarterly (2014) begins to tackle possible explanations for the persistent gap in representation and specifically focuses on the question of political partisanship in the media and its relationship with gendered coverage patterns. We use two separate measures of newspapers’ political slant and conduct bivariate analyses that examine the association between partisanships and representation.
  • The third article was published in the American Sociological Review (2015). In it, we conducted a wide-scope examination of a large variety of possible explanations for the persistent gender gap in newspapers. We presented a large gamut of new and original data and analyses (both bivariate and multivariate), which examined explanations such as “real-world” inequalities, newsroom composition, and various other factors related to both the newspapers themselves and the cities and states in which they are located.

Of note, these three articles are the result of more than seven years of intensive data collection from a wide variety of sources and multiple analyses, leading to novel contributions to the literature. We felt (and still do) that these various contributions could not have been clearly fleshed out in one article, not even a longer article, such as the ones published by the American Sociological Review.

Now for the blog: Did SSQ really “scoop” ASR?

First, where we agree with Cohen’s critique: the need to indicate clearly when one is presenting a piece of data or a figure that already appeared in another paper. Here, we must concede that we have failed, although certainly not intentionally. The reason we dropped the ball on this one is the well-established need to try to conceal one’s identity as long as a paper is under review, in order to maintain the standard of double-blind review. Clearly, we should have been more careful in checking the final copies of the SSQ and ASR articles and adding a clarification stating that the figure had already appeared in an earlier paper. Alternatively, we could have also dropped the figure from the paper and simply referred readers to the JS paper, as this figure was not an essential component of either of the latter two papers. As for the issue of the missing year in the ASR paper, this was simply a matter of re-examining the data and noting that the data for 1982 was not strong enough, as it relied on too few data points and a smaller sample of newspapers, and therefore was not equivalent to data from the following years. We agree, however, that we should have clarified this in the paper.

That said, as Cohen also notes in his blog, in each of the two latter papers (SSQ and ASR), the figure in question is really just a minor descriptive element, the starting point for a much larger—and different in each article—set of data and analyses. In both cases, we present the figure at the beginning of the paper in order to motivate the research questions and subsequent analyses, and we do not claim that this is one of our novel findings or contributions. The reason we reproduced this figure is that reviewers of previous versions of the paper asked us to demonstrate the persistent gap between women and men in the news. The figure serves as a parsimonious and relatively elegant visual way to do that, but we also presented data in the ASR paper from a much larger set of newspapers that establishes this point. Blind-review norms prevented us from referring readers to our own work in a clear way (that is, going beyond simply including a citation to the work). Still, as noted above, we take full responsibility for not making sure to add a clearer reference to the final version. But we would like to emphasize that there was no intentional deceit here, but rather a simple act of omission. This should be clear from the fact that we had nothing to gain from not referring to our previous work, and from the fact that these previous findings motivated the new analyses.

Duplicated analyses?

As for the other major charge against us in the blog, it is summarized in the following paragraph:

It looks to me like in the SSQ and ASR papers they used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they are using the same data, how they got from a “weak to moderate relationship” to “no significant relationship” seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which “some say this, some say that”? What kind of way is this to figure out what’s going on?

Here we feel that Cohen is presenting a false picture of what we did in the two papers. First, the SSQ paper actually used two measures of political slant. For both measures, we presented bivariate analyses of the relationship between slant and coverage (which indeed shows a weak to moderate relationship). It is important to stress here that this somewhat rudimentary analysis was simply a matter of data availability: The bivariate cross-sectional analysis was the best analysis we could perform at the time when the paper was accepted for publication (end of 2013), given the data that we had available. However, during the following two years, leading to the publication of the article in ASR at the end of 2015, we engaged in an extensive and time-consuming process of collecting and coding additional data. This effort allowed us to code longitudinal data on important characteristics of newspapers (e.g., identity of editors and publishers and various city and state-related characteristics) for a subset of the newspapers in our sample and for six consecutive years.

And so, as is often the case when moving from a cross-sectional bivariate analysis (SSQ) to a longitudinal multivariate one (ASR), the previous weak relationship that we found for slant basically disappeared, or rather, it became non-significant. In the ASR paper, we did refer readers to the results of our previous study, although without providing details about the analysis itself, because we did not wish to single out our own work in a paragraph that also briefly cites the results of other similar studies (Potter (1985); Adkins Covert and Wasburn (2007)). Perhaps we should have clarified better in the final draft that we previously examined this relationship in a cross-sectional bivariate analysis. But this is a far cry from the allegation that we were reproducing the same analysis, or that we intentionally concealed evidence, both of which are simply false.

To be clear: While the SSQ paper presented a cross-sectional bivariate analysis, in the ASR paper we used a somewhat different sample of papers to perform a longitudinal multivariate analysis, in which newspaper slant was but one of many variables. These are important differences (leading to different results), which we believe any careful reader of the two papers can easily detect. Readers of the ASR article will also notice that the testing of the political slant question is not a major point in the paper, nor is it presented as such. In fact, this variable was originally included as merely a control variable, but reviewers asked that we flesh out the theoretical logic behind including it, which we did. We therefore feel that Cohen’s above comment (in parentheses)—“In addition to whatever else is new in the third paper”—is unfair to the point of being disingenuous. It ignores the true intent of the paper and its (many) unique contributions, for the purpose of more effectively scoring his main point.

As for the paragraph that Cohen cites in the blog, in which we use very similar language to theoretically justify the inclusion of the newspaper slant variable in the ASR analysis, we would like to clarify that this was simply the most straightforward way of conveying this theoretical outline. We make no pretense or implication whatsoever that this passage adds anything new or very important in its second iteration. And again, we do refer the readers to the previous work (including our own), which found conflicting evidence regarding this question in cross sectional bivariate analyses. Knowledge is advanced by building on previous work (your own and others) and adding to it, and this is exactly what we do here.

Misleading readers and selling duplicated information as new?

Given our clarifications above, we feel that the main charges against us are unjust. The three papers in question are by no means overlapping duplications (although the one particular descriptive figure is). In fact, none of the analyses in SSQ and ASR are overlapping, and each paper made a unique contribution at the time it was published. Furthermore, the charges that we are “selling duplicate information as new” and “misleading readers” clearly imply that we have been duplicitous and dishonest in this research effort. It is not surprising that such inflammatory language ended up inciting respondents to the blog to accuse us flatly of self-plagiarism, deceitfulness, and questionable ethics.

In response to such accusations, we once again wish to state very clearly that at no point did we intend to deceive readers or intentionally omit information about previous publications. While we admit to erring in not clearly mentioning in the caption of the figure that it was already reported in a previous study, this was an honest mistake, and one which did not and could not be to our benefit in any way whatsoever. An error of omission it was, but not a violation of any ethical norms.

Is open access the solution?

We would also like to comment briefly on the blog’s more general claim that the system is broken and that the solution lies in open access. We would actually like to express our firm support for Cohen’s general efforts to promote open science. We also agree with the need to both carefully monitor and rethink our publication system, as well as with the call for open access to journals and a more transparent reviewing process. All of these would bring important benefits, and the conversation over them should continue.

However, we question the assumption that in this particular case an open access system would have solved the problem of not mentioning that our figure appeared in previous articles. As we note above, this omission was actually triggered by the blind review system and our attempts to avoid revealing our identity during the reviewing process (and later on to our failure to remember to add more direct references to our previous work in the final version of the article). But surely, most reviewers who work at academic institutions have access through their local libraries to a broad range of journals, including ones that are behind paywalls (and certainly to most mainstream journals). Our ASR paper was reviewed by nine different anonymous reviewers, as well as by the editorial board. It seems reasonable to assume that virtually all of them would have been able to access the previous papers published in mainstream journals. So the fact that our previous articles were published in journals with paywalls seems neither here nor there for the issues Cohen raises about our work.

A final word: On the practice of making personal accusations in a blog without first soliciting a response from the authors

The commonly accepted way to proceed in our field is that when one scholar wishes to criticize the published work of another, they write a comment to the journal that published the article. The journal then solicits a response from the authors who are being criticized, and a third party then decides whether this is worthy of publication. Readers then get the chance to read both the critique and the response at the same time and decide which point of view they find more convincing. It seems to us that this is the decent way of proceeding in cases where there are different points of view or disagreement over scholarly findings and interpretations. This is especially true when the critique involves charges (or hints) of unethical behavior and academic dishonesty; in such cases, this norm of basic decency seems to us to be even more important. In our case, Professor Cohen did not bother to approach us, and did not ask us to respond to the accusations against us. In fact, we only learned about the post by happenstance, when a friend directed our attention to the blog. When we responded and asked Cohen to make some clarifications to the original posting, we were turned down, although Cohen kindly agreed to publish this comment to his blog, unedited, for which we are thankful.

Of course, online blogs are not expected to honor the norms of civilized scholarly debate to the letter. They are a different kind of forum. Clearly, they have their advantages in terms of both speed and accessibility, and they form an important part of the current academic discourse. But it seems to us that, especially in such cases where allegations of ethically questionable conduct are being made, the authors of blogs should adopt a more careful approach. After all, this is not merely a matter of academic disagreements; people’s careers and reputations are at stake. We would like to suggest that in such cases the authors of blogs should err on the side of caution and allow authors to defend themselves against accusations in advance and not after the fact, when much of the damage has already been done.

 

27 Comments

Filed under Me @ work

How broken is our system (hit me with that figure again edition)

Why do sociologists publish in academic journals? Sometimes it seems improbable that the main goal is sharing information and advancing scientific knowledge. Today’s example of our broken system, brought to my attention by Neal Caren, concerns three papers by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena (Shor et al.).

May 13, 2016 update: Eran Shor has sent me a response, which I posted here.

In a paywalled 2013 paper in Journalism Studies, the team used an analysis of names appearing in newspapers to report the gender composition of people mentioned. They analyzed the New York Times back to 1880, and then a larger sample of 13 newspapers from 1982 through 2005. Here’s one of their figures:

[Figure: gender composition of people mentioned in newspapers, from the 2013 Journalism Studies paper]

The 2013 paper was a descriptive analysis, establishing that men are mentioned more than women over time.

In a paywalled 2014 article in Social Science Quarterly (SSQ), the team followed up. Except for a string-cite mention in the methods section, the second paper makes no reference to the first, giving no indication that the two are part of a developing project. They use this figure to motivate the analysis in the second paper, with no acknowledgment that it also appeared in the first:

[Figure: the same figure, as reproduced in the 2014 SSQ paper]

Shor et al. 2014 asked,

How can we account for the consistency of these disparities? One possible factor that may explain at least some of these consistent gaps may be the political agendas and choices of specific newspapers.

Their hypothesis was:

H1: Newspapers that are typically classified as more liberal will exhibit a higher rate of female-subjects’ coverage than newspapers typically classified as conservative.

After analyzing the data, they concluded:

The proposition that liberal newspapers will be more likely to cover female subjects was not supported by our findings. In fact, we found a weak to moderate relationship between the two variables, but this relationship is in the opposite direction: Newspapers recognized (or ranked) as more “conservative” were more likely to cover female subjects than their more “liberal” counterparts, especially in articles reporting on sports.

They offered several caveats about this finding, including that the measure of political slant used is “somewhat crude.”

Clearly, much more work to be done. The next piece of the project was a 2015 article in American Sociological Review (which, as the featured article of the issue, was not paywalled by Sage). Again, without mentioning that the figure had been previously published, and with one passing reference to each of the previous papers, they motivated the analysis with the figure:

[Figure: the same figure again, in black and white and without 1982, as printed in the 2015 ASR paper]

Besides not getting the figure in color, ASR readers for some reason also don’t get 1982 in the data. (The paper makes no mention of the difference in period covered, which makes sense because it never mentions any connection to the analysis in the previous paper). The ASR paper asks of this figure, “How can we account for the persistence of this disparity?”

By now I bet you’re thinking, “One way to account for this disparity is to consider the effects of political slant.” Good idea. In fact, in the ASR paper, the rationale for this question has hardly changed at all since the SSQ paper. Here are the two passages justifying the question.

From SSQ:

Former anecdotal evidence on the relationship between newspapers’ political slant and their rate of female-subjects coverage has been inconclusive. … [describing studies by Potter (1985) and Adkins Covert and Wasburn (2007)]…

Notwithstanding these anecdotal findings, there are a number of reasons to believe that more conservative outlets would be less likely to cover female subjects and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s rights issues in a relatively negative light (Baker Beck, 1998; Brescoll and LaFrance, 2004). Therefore, they may be less likely to devote coverage to these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors […]. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally (that is, conservatively) considered to be more important or interesting, such as politics, business, and sports, and less likely to report on issues such as social welfare, education, or fashion, where according to research women have a stronger presence (Holland, 1998; Ross, 2007, 2009; Ross and Carter, 2011).

From ASR:

Some work suggests that conservative newspapers may cover women less (Potter 1985), but other studies report the opposite tendency (Adkins Covert and Wasburn 2007; Shor et al. 2014a).

Notwithstanding these inconclusive findings, there are several reasons to believe that more conservative outlets will be less likely to cover women and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s issues in a relatively negative light (Baker Beck 1998; Brescoll and LaFrance 2004), making them potentially less likely to cover these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally considered more important or interesting, such as politics, business, and sports, rather than reporting on issues such as social welfare, education, or fashion, where women have a stronger presence.

Except for a passing mention among the “other studies,” there is no connection to the previous analysis. The ASR hypothesis is:

Conservative newspapers will dedicate a smaller portion of their coverage to females.

On this question in the ASR paper, they conclude:

our analysis shows no significant relationship between newspaper coverage patterns and … a newspaper’s political tendencies.

It looks to me like the SSQ and ASR papers used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they use the same data, how they got from a “weak to moderate relationship” to “no significant relationship” seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which “some say this, some say that”? What kind of way is this to figure out what’s going on?

Still love your system?

It’s fine to report the same findings in different venues and formats. It’s fine, that is, as long as it’s clear they’re not original in the subsequent tellings. (I personally have been known to regale my students, and family members, with the same stories over and over, but I try to remember to say, “Stop me if I already told you this one” first.)

I’m not judging Shor et al. for any particular violation of specific rules or norms. And I’m not judging the quality of the work overall. But I will just make the obvious observation that this way of presenting ongoing research is wasteful of resources, misleading to readers, and hinders the development of research.

  • Wasteful because reviewers, editors, and publishers are essentially duplicating their efforts to figure out what is actually to be learned from these overlapping papers — and then to repackage and sell the duplicative information as new.
  • Misleading to readers because we now have “many studies” that show the same thing (or different things), without the clear acknowledgment that they use the same data.
  • And hindering research because of the wasteful delays and duplicative expenses involved in publishing research that should be clearly presented in cumulative, transparent fashion, in a timely way — which is what we need to move science forward.

Open science

When making (or hearing) arguments against open science as impractical or unreasonable, just weigh the wastefulness, misleadingness, and obstacles to science so prevalent in the current system against whatever advantages you think it holds. We can’t have a reasonable conversation about our publishing system based on the presumption that it’s working well now.

In an open science system, researchers publish their work openly (and free) with open links between the different parts of a project. For example, researchers might publish one good justification for a hypothesis, with several separate analyses testing it, making clear what’s different in each test. Reviewers and readers could see the whole series. Other researchers would have access to the materials necessary for replication and extension of the work. Researchers would be judged in hiring and promotion according to the actual quality and quantity of their work and the contribution it makes to advancing knowledge, rather than through arbitrary counts of “publications” in private, paywalled journals. (The non-profit Center for Open Science is building a system like this now, and offers a free Open Science Framework, “A scholarly commons to connect the entire research cycle.”)
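As a purely illustrative sketch, the linked-project idea might look like the little data model below. The names, structure, and URLs here are my own invention for illustration, not the Center for Open Science’s actual design: one project object ties together a hypothesis and each separate test of it, so readers can see the whole series at once.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str   # e.g., "hypothesis", "data", "analysis", "preprint"
    title: str
    url: str    # an open, stable link to this piece of the project

@dataclass
class OpenProject:
    title: str
    components: list = field(default_factory=list)

    def add(self, kind: str, title: str, url: str) -> None:
        self.components.append(Component(kind, title, url))

    def of_kind(self, kind: str) -> list:
        # All components of one kind -- e.g., every analysis testing the hypothesis
        return [c for c in self.components if c.kind == kind]

# A hypothetical project with one hypothesis and two linked, clearly
# distinguished tests of it (URLs are placeholders, not real records)
project = OpenProject("Newspaper coverage of women")
project.add("hypothesis", "Conservative papers cover women less", "https://example.org/h1")
project.add("analysis", "Test 1: first newspaper sample", "https://example.org/a1")
project.add("analysis", "Test 2: second sample, noting what differs from Test 1", "https://example.org/a2")

print(len(project.of_kind("analysis")))  # → 2
```

The point of the structure is that a second test is added as a new linked component of the same project, rather than packaged as a free-standing “new” publication.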

There are challenges to building this new system, of course, but any assessment of those challenges needs to be clear-eyed about the ridiculousness of the system we’re working under now.

Previous related posts have covered very similar publications, the opposition to open access, journal self-citation practices, and one publication’s saga.

12 Comments

Filed under Research reports

For (not against) a better publishing model

I was unhappy to see this piece on the American Sociological Association (ASA) blog by Karen Edwards, the director of publications and membership.

The post is about Sci-Hub, the international knowledge-stealing ring that allows anyone to download virtually any paywalled academic paper for free. (I wrote about it, with description of how it’s used, here.) Without naming me or linking to the post, Edwards takes issue with pieces like mine. She writes:

ASA, other scholarly societies, and our publishing partners have been dismayed by some of the published comments about Sci-Hub that present its theft as a kind of “Robin Hood” fairy tale by characterizing the “victims” as greedy publishers feasting on the profits of expensive individual article downloads by needy researchers.

My first objection is to “ASA … have been dismayed.” There have been many debates about who speaks for ASA, especially when the association took positions on legal issues (their amicus briefs are here). And I’m sure the ASA executives send out letters all the time saying ASA thinks this or that. But when it comes to policy issues like this post (and when I don’t agree), I think it’s wrong to speak for the association without some actual process involving the membership. The more extreme case, on this same issue, was when the executive officer, Sally Hillsman, sent this letter to the White House Office of Science and Technology Policy objecting to the federal government’s move toward open access — which most of us only found out about because Fabio Rojas posted it on OrgTheory.

My second objection is to the position taken. In Edwards’ view, the existence of Sci-Hub, “threatens the well-being of ASA and our sister associations as well as the peer assessment of scholarship in sociology and other academic disciplines.”

Because, in her opinion, without paywalls — which Sci-Hub presumably threatens to end — the system of peer-reviewed scholarly output would literally die. As I pointed out in my original piece, if your entire enterprise can be brought down by the insertion of 11 characters into a URL, your system may in fact not be sustainable. Rather than attack Sci-Hub and its users, “ASA” might ask why its vendor is so unable to prevent the complete demolition of its business model by a few keystrokes. But they don’t. Which leads me to the next point.

The Edwards post goes way beyond the untrue claim that there is no other way to support a peer review system, and argues that ASA needs all that paywall money to pay for all the other stuff it does. That is, not only do we need to sell papers to pay for our journal operations (and Sage profits), we also need paywalls because:

ASA is a nonprofit, so whatever revenue we receive from our journals, beyond what it costs us to do the editorial and publications work, goes directly into providing professional and educational services to our members and other scholars in our discipline (whether they are members or not). … The revenue allows ASA to provide sociologists in the field competitive research grants, pre-doctoral scholarships, specialized career development, and new digital teaching resources among many other services. It is what allows us to work effectively with other social science associations to sustain and, hopefully, grow the flow of federal research dollars to the social sciences through NSF, NIH, and many others and to defend against elimination and cuts to federal support (e.g., statistical systems and ongoing surveys) so scholars can conduct research and then publish outstanding scholarship.

In other words, as David Mamet’s character Mickey Bergman once put it, “Everybody needs money. That’s why they call it money.”

This means that finding the best model for getting sociological research to the most people with the least barriers is not as important as all the other stuff ASA does — even if the research is publicly funded. I don’t agree.

Better models

There are better ways. Contrary to popular misconceptions, we do not need to move to a system where individual researchers pay to publish their work, widening status inequalities among researchers. The basic design of the system to come is this: we cut out the for-profit publishers, and ask the universities and federal agencies that currently pay for research twice — once for the researchers, and once again for their published output — to agree to pay less in exchange for all of it being open access. Instead, they pay into a central organization that administers publication funds to scholarly associations, which produce open-access research output. For a detailed proposal, read this white paper from K|N Consultants, “A Scalable and Sustainable Approach to Open Access Publishing and Archiving for Humanities and Social Sciences.”

This should be easy — more access, accountability, and efficiency, for less — but it’s a difficult political problem, made all the more difficult by the dedicated efforts of those whose interests are threatened by the possibility of slicing out the profit (and other surplus) portions of the current paywall system. The math is there, but the will and the organizational efforts are lagging badly, especially in the leadership of ASA.

8 Comments

Filed under In the news