One case of very similar publications, with some implications and suggestions

This post deals with problems in academic research publishing. It’s off the usual topic of the blog, although the publications in question do concern families and inequality. I decided to publish it here rather than try to place it somewhere else because I thought it might be controversial, and I want to take personal responsibility for it. I welcome discussion of these questions here in the comments, or in other forums where these issues are pertinent — you are welcome to repost this, with attribution.

The case here is a pair of articles by John R. Hipp, an associate professor of Criminology, Law and Society at U.C. Irvine. The two articles are:

Little of substance is learned from one that could not be learned from the other; they contain many nearly identical passages, and both claim to make the same major original contributions. This isn’t the most extreme such case ever published, but it’s the most obvious one I’ve noticed involving a major sociology journal. Without attributing any cause or motivation, we can call these two “very similar publications” (VSPs).

The practice of publishing VSPs:

  • Wastes the time of editors, reviewers, and future researchers.
  • Takes up valuable space in journals, space for which other researchers are competing to publish their work and advance their careers.
  • Misleads reviewers and administrators who are evaluating and comparing publication records.
  • Misleads the research community by creating a false impression of the weight of original research on the topic (e.g., “many studies show”).
It is also just one symptom of a pretty broken publication and promotion system in academia, which I will return to later. I don’t know anything about the history of these papers, or the author’s situation or motives, so I limit myself to discussing the content of the papers. After discussing the case, I have a few suggestions. I’m sorry this is so long.

The case

Here are the two abstracts. I don’t know an elegant way to show these side by side except a screen grab (click for higher resolution); I numbered the sentences in the abstracts.

I highlighted the major difference between the two articles: the Criminology article analyzes perceptions of crime in “microneighborhoods,” while the Social Problems piece analyzes reported violent crime rates in Census tracts (and adds a measure of short-term crime rate change). This is a substantive difference, and it involves using different versions of the American Housing Survey. But the abstracts are not written as a sequence of incremental discoveries; they claim to make a number of the same innovations and discoveries. Substantively, the difference could have been handled with one additional table, or even a hefty footnote. (The first paper reports that the violent crime rate and residents’ perceptions are correlated at about .70). In any event, the difference is not a significant part of the motivation for either article, as the abstracts make clear.

Here are the outlines of the articles, with Hipp’s headings. As you can see, the Social Problems article includes a measure of short-term crime rate change, and the Criminology article includes a section on sensitivity analyses. They are not identical (the first paper also includes a methodological appendix).

The second paper, from Social Problems, does acknowledge the existence of the first, but not in a way that would truly communicate the relationship between them. It notes (p. 411):

Recent scholarship has suggested that … residents’ perceptions of crime in the microneighborhood can differentially affect in-mobility and out-mobility for different racial/ethnic groups (Hipp 2010b). We extend this literature by using official violent crime rates within the broader neighborhood as measured by the census tract.

Later (p. 414), the Social Problems paper minimizes the findings of the Criminology paper:

One study provided suggestive evidence of disproportionate out-mobility using information on the perceptions of crime among residents living within a micro-neighborhood of the nearest 11 housing units (Hipp 2010b). Whereas white households perceiving more crime were more likely to move within four years, black and Latino households showed no such tendency (Hipp 2010b). Furthermore, whites living in microneighborhoods with a general perception of more crime were also more likely to leave the unit, whereas Latino and black households again showed no such tendency. This evidence of the importance of perceptions of crime within a small micro-environment is important, but it cannot assess whether such perceptions accurately capture the crime environment of the micro-area, nor whether the crime environment of the broader neighborhood is also important. The present study addresses these limitations.

The Criminology paper was published online in August 2010, and the Social Problems paper is dated August 2011, so I can’t tell if the reviewers for the Social Problems paper had access to a published copy of the first at the time of their review.

If the second paper constitutes a genuine additional contribution, it would be reasonable to publish it separately, making clear that it represents a methodological variation on findings already known. But instead the second paper announces the same contributions as the first. In fact, as the text from the sections titled “summary” in the introductions of both articles shows, two out of three of the “important contributions to the literature” are identical:

The third contribution differs. However, by the time the second paper was published, the first two “important contributions” were no longer original (in the generally understood meaning of “contribution”).

Later, in the conclusions, the assertions of originality (the “key/crucial implication” and “important takeaway”) are nearly identical. Here are some excerpts from the conclusions:

The genuine relationship between the two papers is not revealed.

There is no substitute for reading the text with human eyes. However, there also are tools for displaying and analyzing similarity in documents. The U.S. Dept. of Health and Human Services Office of Research Integrity provides a reference to the Plagiarism Resource Center, once at the University of Virginia, which distributes a program called WCopyfind, an open-source Windows-based document comparison tool.

After converting these two articles to text documents, and removing the tables, references, methodological appendix, and extraneous page numbers and other fragments, I subjected them to the WCopyfind comparison. These are the parameters I used, which were recommended for cases in which minor editing is presumed.

Shortest Phrase to Match: 6
Ignore Punctuation: No
Ignore Outer Punctuation: Yes
Ignore Letter Case: Yes
Skip Long Words: No
Most Imperfections to Allow: 0

The analysis found 2,235 words that were in duplicate 6-word strings, accounting for 20% of each article’s text. For example, here are two passages, with the strings that the algorithm flagged in red:

The present study provides an important corrective to the large volume of prior research finding a positive relationship between the size of racial/ethnic minority groups in a neighborhood and the rate of crime at one point in time and assuming that the causal direction runs from the presence of such minorities to higher rates of crime.

Prior research frequently has found a relationship between the presence of racial/ethnic minorities in a neighborhood and the rate of crime at one point in time. Although they sometimes posit different mechanisms, these studies almost always conclude that the causal direction runs from the presence of such minorities to higher rates of crime.

On the one hand, we can see that the passages are more similar than the red text implies, but on the other hand there are times when the algorithm appears to just catch phrases that occur in this line of research, such as “are more likely to move into.” If you set the shortest phrase to 5 words, the program flags 23% of the text; at 10 words it flags 12%.
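For readers curious about the mechanics, the core idea behind this kind of comparison (case-insensitive matching of exact n-word strings, with no imperfections allowed) can be sketched in a few lines of Python. This is a simplified illustration of the approach, not WCopyfind’s actual code, and the function names are my own:

```python
import re

def shared_phrase_fractions(text_a, text_b, n=6):
    """Fraction of each text's words that fall inside an exact n-word
    phrase appearing in both texts (case-insensitive, punctuation ignored)."""
    def words(text):
        # Lowercase and keep only word-like tokens, mimicking
        # "Ignore Letter Case: Yes" and ignored punctuation
        return re.findall(r"[a-z0-9']+", text.lower())

    def ngrams(tokens):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    a, b = words(text_a), words(text_b)
    common = ngrams(a) & ngrams(b)  # n-word strings shared by both texts

    def flagged_count(tokens):
        hits = set()  # word positions covered by at least one shared phrase
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) in common:
                hits.update(range(i, i + n))
        return len(hits)

    return flagged_count(a) / len(a), flagged_count(b) / len(b)
```

Run over the two full articles, the returned fractions correspond to the kind of percentage reported above; raising n shrinks the flagged share, just as moving the “shortest phrase to match” from 5 to 10 words drops the flagged text from 23% to 12%.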

The nice thing about the program is that it creates a side-by-side comparison of the entire documents, with the common text strings linked, so you can click on the text in one article and see where it appears in the other, in context. I have put this side-by-side file here for your perusal. The full articles (behind paywalls) are linked above.

(Odd aside: the Criminology paper is in the first person singular, while the Social Problems paper is written in the first person plural, although both have only one author.)

What is this?

This is not a duplicate paper, although it includes a large amount of copied text, which would normally be called “self-plagiarism,” defined by Miguel Roig like this:

Whereas plagiarism involves the presentation of others’ ideas, text, data, images, etc., as the products of our own creation, self-plagiarism, occurs when we decide to reuse in whole or in part our own previously disseminated ideas, text, data, etc without any indication of their prior dissemination. Perhaps the most commonly-known form of self-plagiarism is duplicate publication, but other forms exist and include redundant publication, augmented publication, also known as meat extender, and segmented publication, also known as salami, piecemeal, or fragmented publication. The key feature in all forms of self-plagiarism is the presence of significant overlap between publications and, most importantly, the absence of a clear indication as to the relationship between the various duplicates or related papers.

Thinly-sliced “salami” articles do not necessarily include any duplication, but rather just tiny increments of scientific progress. There is some of that here, as well as elements of a “meat extender” case, in which small amounts of additional data or analysis are added — without a transparent disclaimer and rationale. However, regardless of the differences in analysis, the texts — especially the framing and concluding bread around the salami — are too similar to be justified.

That last distinction in Roig’s definition, the “key feature,” is what’s important here. Using a little boilerplate theoretical language in related work, or a very similar equation or description of variables when using the same datasets, doesn’t undermine the value of the work, as long as the original source is attributed. This distinction appears in a Nature news story:

…although the repetition of the methods section of a paper is not necessarily considered inappropriate by the scientific community, “we would expect that results, discussion and the abstract present novel results”, says Harold Garner, a bioinformatician at Virginia Polytechnic Institute and State University in Blacksburg.

There are in fact some “novel results” in the second paper, but the novelty is not as important as the basic findings, which are described nearly identically — as original in both cases.

In the normal course of pursuing a research agenda across multiple article-length publications, some repetition is justifiable and even helpful. But without “a clear indication as to the relationship between the various duplicates or related papers,” in Roig’s words, the sandwich is not kosher.

Consider an alternative example, in an economics paper, “Booms, Busts, and Divorce,” by Judith Hellerstein and Melinda Morrill, which I wrote about recently. Their main contribution is the finding that divorce was pro-cyclical from 1976 to 2009 – that is, there was more divorce when the economy grew. The “main analysis” is this:

we combine data on state-by-year unemployment rates with state-by-year vital statistics data across the United States over the period 1976 to 2009. We assess the impact of local macroeconomic conditions on state-level divorce rates, controlling for year fixed effects, state-specific time trends, and state-specific time-invariant determinants of divorce rates.

But they then add this:

Our basic finding of pro-cyclical divorce is robust to alternative empirical specifications and is found when we allow the effect of unemployment rates on divorce to vary by the fraction of the population that is Catholic … the census region, and by time period. We also show that this finding is robust to extending our unemployment series back to 1970 … Finally, we replace the unemployment rate at the state-by-year level as our measure of macroeconomic conditions with two alternative measures: state-by-year per capita gross domestic product (GDP) and state-by-year per capita income.

Each of those different steps is a way of extending and corroborating the “main analysis,” and each required adding data from a different source and conducting a new statistical analysis. But they did not make the main finding a new discovery warranting an additional publication with the same motivation and conclusions as the first.

Look at me

To head off an inevitable question, you are welcome to look at my own publication record. There are two specific cases in which my co-author Matt Huffman and I wrote pairs of articles addressing similar questions. They were: whether variation in gender segregation across labor markets affected patterns of gender wage devaluation (here and here); and whether the presence of female managers is associated with lower levels of gender inequality (here and here).

In each case the later paper acknowledged the earlier (or, they acknowledged each other, in the first pair), and explained the differences in approach — they involved different kinds of data and statistical methods in each case as well. The results were consistent in each pair, and thus the conclusions were strengthened, the pattern corroborated. Maybe we could have combined each pair into one very dense article, but that might very well have been rejected as too long or complicated for the journals we used, and they weren’t ready at the same time. (For what it’s worth, I also checked the WCopyfind comparison between the latter pair of our articles, and found 37 words in matched strings of 6 words or more.)

What to do

There is considerable research on the problem of duplicate publication in the medical literature; Nature reports that about 0.4% of articles in MedLine are probable duplicates. I don’t know how widespread the various kinds of duplication are in the social sciences. I can’t say this is a rampant problem, or that this case is an extreme one — I don’t know.

However, based on what I have read and shown above, I’m satisfied this case is a problem the likes of which we should try to avoid.

First, however, we should consider this kind of problem in the context of the “cycle of publication overproduction” in which the Academy finds itself. That’s the phrase from a report by Diane Harley and Sophia Krzys Acord, “Peer review in academic promotion and publishing: its meaning, locus, and future,” published by the Center for Studies in Higher Education at Berkeley. They write:

…the problems we face in scholarly communication are not about publishing, per se, or the process of peer review in that system. Instead, the problems lie with the current advancement system in a multitude of higher education sectors globally that increasingly demand unrealistic publication requirements of their members in lieu of conducting thorough and context-appropriate institutional peer review, at the center of which should be a close reading and evaluation of a scholar’s overall contributions to a field in the context of each institution’s primary mission.

And one of their first recommendations is: “Encourage scholars to publish peer-reviewed work less frequently and more meaningfully.”

While we’re working on that, I have more immediate suggestions for four stages of the publication process.
  • Journal peer review. A large survey of peer reviewers found that, in the social sciences, about 88% believe ensuring acknowledgement of previous work should be one of the purposes of peer review, but only 59% believe the system is currently able to do so. The editors and associations in this case — Criminology (the American Society of Criminology) and Social Problems (the Society for the Study of Social Problems) — or others have an incentive to prevent this waste of their resources. Editors and reviewers obviously must be told when highly similar work exists, and the current ethics statement for American Sociological Review states that “Significant findings or contributions that have already appeared (or will appear) elsewhere must be clearly identified.” As one concrete step to strengthen that, we could require that referenced work that has not yet been published — in press, under review, forthcoming, etc. — be accompanied by access to the papers, so the editors can look for redundancy. As it is, these papers are not available to reviewers or editors for verification.
  • Promotion committees and administrators. Counting up articles, and weighting them according to journal prestige or impact factor, is widely practiced in promotion decisions. Really reading the work is much better, but it requires more time, and more specialized skill. The Harley and Acord report has some great systemic recommendations. But in the meantime one thing I might start doing when reviewing cases is to ask myself, “Could any of these articles have been shaved into multiple salami slices?” If they could, but weren’t, let’s give credit for that decision (even if the work wasn’t in a top journal).
  • Ethics guidelines. The American Sociological Association’s Code of Ethics prohibits publishing “data or findings that they have previously published elsewhere.” However, the plagiarism provision only mentions “another person’s” work. The “self-plagiarism” (or duplication) aspect of this should be beefed up.
  • Shorter articles. Thin slices of salami are not so distasteful when eaten without a full load of bread and condiments. Sociology journals tend to have long articles compared with some other social sciences (psychology, economics, public health). If one of the two articles in this case could have been published as a brief report, with a few references and a clear emphasis on what was new, that might be reasonable. In many cases, the copying is in the theory, literature review, and methods. I know I just said we publish too much as it is, but sometimes shorter articles would help with this problem.
Addendum: Why?

Why would I write this and make it public? I certainly have nothing personal against John Hipp, whom I hardly know. But part of the privilege of having tenure protection is that the fear of offending people shouldn’t prevent me from speaking up on issues I think are important, and I am worked up about this issue. Academia has the unrivaled privilege of policing itself, and we have built a system that runs on trust, believe it or not.

I have done a few promotion reviews since I’ve been tenured, and reviewed many journal articles (more than 70 in the past 7 years). I have evaluated hundreds of job candidates and thousands of graduate student applications. Maybe it’s too much to hope that academia will abandon its addiction to bean-counting in the evaluation of productivity and merit any time soon. But we can increase our sensitivity to the flaws of that approach, and promote the honesty and transparency that are prerequisites for its functioning.

These are some other interesting sites and pages on these issues:

31 thoughts on “One case of very similar publications, with some implications and suggestions”

  1. I was involved in a tenure case for a person with a small number of dense publications, each one combining several distinct analyses from different data sources. The divisional review committee initially voted to deny tenure on the grounds of the small number of publications. When it was pointed out that each of the publications was a major piece of work, several committee members said: well, why didn’t you spin them off into more publications so you’d have more lines on your cv? They actually thought it was unprofessional and wrong NOT to pad your cv. No kidding. As long as that kind of thinking is abroad, you will have pressures toward MPU (minimum publishable unit) practices. (In this case, tenure was finally granted after appeal, but it was a bruising experience for the victim.)

    My theory about one way you get two similar articles like this is someone who is trying to get SOMETHING published and sends two closely-related but different papers from the same analysis to different journals in the hopes of getting at least one published. There are a LOT of AJS/ASR pairs that are not as similar as these two that were obviously produced under this kind of production model.


    1. I think we can do well by making strong cases out of those who have small numbers of excellent publications, and using them as examples. Of course, that is risky and it takes time.

      Ironically, if you get away from counting articles, you may find yourself falling back on other metrics that are also problematic, such as journal prestige and impact factor.


      1. Philip, I think that’s a good point, too. I’ve participated in, or managed, literally hundreds of faculty evaluations through service on a college-level T&P committee, and as a department chair for 7 years, and a department member for nearly 30. The fundamental issue is one of standards. Sociology lacks consensual standards for evaluating publications. Some scholars love ambiguity, some abhor it. Some recognize that logical integrity is essential, others could care less. Some want to see clear evidence of progress, however incremental, others are happy to see wheels reinvented so long as the prose is evocative. Some enjoy armchair theorizing, others prefer to see empirical support for theoretical assertions before giving them credence through publication. General scientific standards of originality, unambiguity, logical integrity, empirical veracity, etc., would suffice, but this is a minority view in sociology.

        Without any shared, reasonably objective standards for content, I suppose the default is to count publications, use journal rankings to weigh the “importance” of particular articles, gather information on citation frequencies, etc. For books? Look at the status of the publisher, citations, reviews. Sadly, none of these measures has any necessary correlation with clarity, logic, veracity, originality or other important criteria.

        Note that *if* such standards were in place, nobody would have to publish similar ideas in multiple outlets just to get their work seen. The value inherent in a given publication would be obvious, and we’d have much smaller haystacks to sort through in order to find it.


  2. Nice work! It would be good if you solicited a response from Hipp. There really is no valid defense for this, but it would be very interesting to hear what he has to say.


  3. I agree that we’ve created a monster in our evaluation system, where more is better, and the need for quantity has led to re-marketing articles. But the answer is to systematically change our competitive system. Every department, college, and university cares about rankings, which require numbers of publications (and grant money). Faculty feel this pressure. And it leads to many less-than-totally-innovative articles. Focusing the attention on faculty, as opposed to the structural constraints, isn’t really that useful.

    On the individual level, all sorts of more reasonable pressures also exist. Well, first, faculty believe, usually accurately, that they need vitas with long lists of publications to get tenure and/or promotion. But also, if you have a good idea, you want to have as many audiences read it as possible, and so writing for different audiences (by discipline, by methodological preference, by substantive area, by geographic region of the world) is useful. It’s surely the way to make a career, to become known. And the more people read and cite one’s work, the bigger impact it has. If you believe you have something worth saying, a bigger impact is not a bad thing.

    I’m not arguing that people should publish the same articles in two places, only that there are strong pressures, and also some actual benefits from finding more than one venue for one’s research findings or new theoretical arguments.

    As I’ve mentioned to Philip privately, I often encourage former students to find different venues and different audiences for the same good idea. It’s spreading one’s work widely … and there is nothing wrong with that. And one has to get tenure to survive in the academy…


    1. Thank you, Barbara.

      I agree about the structural constraints. That’s the public issue aspect of this, which is where there’s a chance for systemic improvement. In terms of awareness and discussion, however, I think we need to look at cases. And I decided it was OK to use this case because I also think that, despite the institutional pressures we face, we each must behave ethically in navigating them.

      On the question of running with the same good idea in different venues, I agree that can be worthwhile. Sometimes it takes four articles on a topic to give it the momentum of a big idea. However, it is not ethical to describe it as an original idea over and over. You can say, “I have written repeatedly on the importance of X–>Y (cite cite cite), and here I extend that idea into this new arena…”


  4. Interesting: in terms of the number of views, this is the most popular post I have ever had. That is despite the fact that I see no evidence anyone has linked to it anywhere on the web. And there are very few comments. This is a difficult issue.


    1. It’s not surprising that it has generated so much interest, is it?

      I saw it posted in Facebook threads and emailed it to a few folks myself, so I think it’s reaching people through non-blog sharing.


  5. I’ve been thinking about this. I don’t think your moral outrage is fair. I agree that I disparage cvs that are “constructed with mirrors” and MPUs, and I prefer denser, meatier articles. But the fact is that at least half of all jobs in the academy have strong pressures toward lines on a cv, and many journals have such short page limits that complex analyses are hard to publish. Many young scholars have actually been trained to publish from a project in sets like this.

    In addition, the rules of blind review prohibit you from identifying a paper as part of your own stream of work. Often at the time a paper is being reviewed, the parallel paper is not published and exists only in your own files. Paper #2 in a series may well have been written and submitted before paper #1’s initial writing and submission.


  6. From the standpoint of a journal editor, I find this really interesting. Years ago another editor told me a paper needs to be at least 50% different from another to stand alone. I believe Sage (our publisher) suggests it should be more like 75%. With slightly different dependent variables … I’m not sure whether this would meet either threshold.

    As an editor, when we get a paper that looks very similar to another one already published, we do scrutinize it. But we don’t always know unless the authors refer to their work or tell us about the pieces that they’ve blinded. We have a better chance of uncovering papers that are “too” similar than reviewers would, since they don’t know the authors’ names (we hope).

    This is an interesting case because Criminology and Social Problems definitely constitute different audiences. And I can imagine the author receiving lots of encouraging advice to publish related work in multiple journals. I certainly publish papers that are closely related to one another myself, though I try not to be repetitive.

    Perhaps this is most interesting because it gets us all to think about how we would define what is too similar and what is not — and apply that standard to our own work.


  7. I am currently writing a piece on colorism, and have found almost exactly the same book chapters (not articles) in more than one edited collection. There are two authors in particular who seem to have written almost the same book chapters and published them in 2 or 3 separate edited collections.

    This is a small field where there is relatively little written, probably making the repetition more obvious. I did check the books to see and there is no indication that the chapters are reprints.

    All this to point out that this also happens in edited collections (and perhaps is even more common) but the issues are the same in terms of time and resources.

    I agree that Criminology and Social Problems do address different audiences, but also wonder, with the prevalence of online searching, how much it matters where you publish in terms of people finding your work.

    I also think this is good for us to consider, as I often find myself re-using certain sections of articles – especially literature reviews and background information. (Why rewrite an explanation of IIRIRA when I already have one?). But, it is also important to be conscientious and not contribute to over-production.

    It also occurs to me that this would be obvious to people doing tenure and promotion reviews, and thus it is in one’s own interest to avoid repeats.


  8. Excellent blog! The numbers game in academia has led to some disturbing practices in journal publications. Thanks for putting this out there so clearly. I will share this with others –


  9. While I understand all the points made about the university-level tenure and review system, this simply looks like plagiarism and sloppy writing to me. It’s just what we all say our students shouldn’t do–use one paper for credit in two courses. I think you’re all being too easy on Hipp (no offense, Mr. Hipp, should you read this!). As Philip points out, there are numerous, simple ways to acknowledge oneself as a source. I can’t think of any excuse for having so many identical, word-for-word passages. I think all that red should be in quotation marks and cited!


  10. I want to make two points: First, I think we need to confine this conversation to articles in peer-reviewed journals. Edited books are another can of worms entirely. I know that for my last edited book, I went to top scholars and ASKED them to write about research findings they’d published as articles in a different way, for undergraduates. Edited books are for different audiences, and I think it’s fine if the work is repetitive of what was published elsewhere.

    For journal articles, I think Philip Cohen has hit the nail on the head. We have so little agreement on what IS quality that we’ve created a system based on quantity, citation rates (another way, sometimes, of measuring the quality of social networks), and publisher/journal prestige. That encourages individuals to publish the same idea over and over, with some innovation. It wastes trees if the journals are hard copy. But blaming the individuals who have found a way to succeed in a system that has been set up to encourage this kind of publication doesn’t get us very far. Citing oneself is, of course, a good start, and the way to be ethical. It has to be done after the article is accepted, though, because articles are sent out blind, so it doesn’t really solve the acceptance problem itself.

    I don’t have an answer for what would….


  11. First, I’m in a department that goes for quality, not quantity, and my own cv is short, so I’m not saying I prefer the MPU strategy. I’ve built my career and helped build my department in the opposite direction.

    Second, I don’t think Barry’s gripe about standards in sociology is on point to what is going on with pressures to chop articles up into smaller bites. It is rather the non-sociology evaluators that create these pressures, in my experience. The supposedly-objective-standards fields of natural science and medicine, not to mention psychology, typically feature much longer cvs than sociology’s. The only field I know that prefers shorter cvs is economics. We are a field that (at least in some of our departments) recognizes books as real publications, the exact opposite of the MPU strategy.

    Apart from cv-padding, as I and others have mentioned, the positive reasons to put out multiple similar papers are to reach different audiences and to get all one’s results out. There are only a small handful of journals with long enough page limits to accommodate a complex analysis. You simply have to chop your articles up if you want to publish everything you know. Otherwise, a lot of your results that might interest someone else sit in your office instead of the literature.

    And as I’ve said elsewhere, I still find “plagiarism” to be an excessive description of the act of copying a methods section or a big chunk of a lit review from one piece to another. How many times are people supposed to re-write the text describing their sampling procedures and measures, for heaven’s sake? And why put such a premium on re-writing your summaries of other people’s work?

    Not opposed to self-citing when you can, but as Barbara notes in her comments, you can’t do that during review.

    1. In my opinion, this case goes beyond copying “methods section or a big chunk of a lit review,” because the articles claim the same original contributions (two out of three, anyway). Maybe it confuses the issue to have this case in front of us, but I think we have to differentiate between the ethical-but-unfortunate (slicing narrowly in response to the demand for peer-reviewed article production) on the one hand, and the unethical (duplication or misleading claims to originality) on the other.

    2. Olderwoman asks, “How many times are people supposed to re-write the text describing their sampling procedures and measures, for heaven’s sake? And why put such a premium on re-writing your summaries of other people’s work?”

      **Every single time,** is my opinion, unless you fess up and cite your source: yourself and the journal that published your previous work.

      Here’s a link to an interesting article on the ethics of self-plagiarism: Bretag_selfPlagiarism_JAE08.pdf

  12. First, thank you for using your tenure status to bring up this topic. Second, I am a graduate student, so if you like you can discount my opinions and just stop reading. But I think one way to step back from the publishing race is to give more weight to the other components of an academic’s job. In my personal experience, publishing has consumed everything else, including:

    Teaching
    Mentoring (of students or junior faculty)
    Community Outreach (working with non-profits or policy groups)
    Practical Applications of Research (in conjunction with the aforementioned)

    All of these are given short shrift when deciding whom to hire and promote. And it is disgraceful. Every academic, at some point, was touched or inspired by a teacher. And yet that is not nearly as important as producing publications, which is really what this post is about. Not writing articles or conducting research, but producing publications the way a factory produces products along a conveyor belt.

    My life would be fundamentally different if I had never met Drs. Baker and Warr. The first made me interested in sociology; the second told me about graduate school and gave me the boost of confidence I needed to apply and pursue the degree. Now, that is certainly a difficult metric to create and track (how many undergraduates did you inspire?), but measuring effective teaching, mentoring, and the application of research is easier.

    That is how I believe you can create systemic change: by balancing the scales away from publishing. And it may turn out that by implementing such a system you will get better and more innovative research. Collaboration and bringing in different perspectives can be just as good as assembling a bunch of experts. Wired wrote a great article about this in their Failure issue (http://www.wired.com/magazine/2009/12/fail_accept_defeat/all/).

  13. I really hope Hipp responds. I saw him on a job-talk panel at ASC in DC. He struck me as the only honest guy on a stage full of Crim big wigs. The latest ASC newsletter, The Criminologist, includes a piece by the Criminology editors that seems to hint at this issue.

  14. Well, it worked: as of June 2015 he is now a full professor at UCI. I am not familiar with his work, and from the above comments it sounds like he is a nice, honorable person, but a close look at his publications reveals some fairly cynical choices. First is the salami slicing described in this post (coupled with *very* high rates of publication). Second is a tendency to cite himself repeatedly in a complex web of pseudo-influence that seems to boost his citation counts without actually influencing anyone.
    https://scholar.google.com/scholar?q=john+r.+hipp+Violent+Crime%2C+Mobility+Decisions%2C+and+Neighborhood+Racial%2FEthnic+Transition&btnG=&hl=en&as_sdt=0%2C33
    The two papers under discussion were relative flops for him, with 16 and 14 citations apiece. I’m guessing he was under pressure to crank out publications that year because he was up for a promotion. There are certainly worse sins. And it should be noted that the first version, published in Criminology, drew citations from top criminology journals (many of them him citing himself, but oh well), whereas the second version drew citations from journals in other social science areas. So there is some justification for his choice, though it would be more defensible if he had been more transparent.
