Category Archives: Research reports

Sixteen minutes on The Tumbleweed Society

At the American Sociological Association conference, just concluded, I was on an author-meets-critics panel for Allison Pugh’s book, The Tumbleweed Society: Working and Caring in an Age of Insecurity. The other day I put up a short paper inspired by my reading on SocArXiv (data and code here).

Here is my talk itself, in an audio file, complete with 6 seconds of music at the beginning and the end, and a lot of the ums and tangents taken out, running 16 minutes. Download it here, or listen below. And below that are the figures I reference in the talk, but you won’t really need them.


[Table 1]

[Figure 2: job changing effect, 2015 ACS-CPS]

Figure 2. Average predicted probability of divorce within jobs (from logistic model in Table 2), by turnover rate. Markers are scaled according to sample size, and the linear regression line shown is weighted by sample size.


Filed under Research reports

Job turnover and divorce (preconference preprint)

As I prepared to discuss Allison Pugh’s interesting and insightful 2015 book, The Tumbleweed Society: Working and Caring in an Age of Insecurity, on an author-meets-critics panel at the American Sociological Association meetings in Montreal next week (Monday at 4:30), I talked myself into doing a quick analysis inspired by the book. (And no, I won’t hijack the panel to talk about this; I will talk about her book.)

From the publisher’s description:

In The Tumbleweed Society, Allison Pugh offers a moving exploration of sacrifice, betrayal, defiance, and resignation, as people adapt to insecurity with their own negotiations of commitment on the job and in intimate life. When people no longer expect commitment from their employers, how do they think about their own obligations? How do we raise children, put down roots in our communities, and live up to our promises at a time when flexibility and job insecurity reign?

Since to a little kid with a hammer everything looks like a nail, I asked myself yesterday, what could I do with my divorce models that might shed light on this connection between job insecurity and family commitments? The result is a very short paper, which I have posted on SocArXiv here (with supporting data and code in the associated OSF project shared here). But here it is in blog form; someday maybe I’ll elaborate it into a full paper.


Job Turnover and Divorce

Introduction

In The Tumbleweed Society, Pugh (2015) explores the relationship between commitments at work – between employers and employees – and those at home, between partners. She finds no simple relationship such that, for example, people who feel their employers owe them nothing also have low commitment to their spouses. Rather, there is a complex web of commitments, and views of what constitutes an honorable level of commitment in different arenas. This paper is inspired by that discussion, and explores one possible connection between work and couple stability, using a new combination of data from the Current Population Survey (CPS) and the American Community Survey (ACS).

In a previous paper I analyzed predictors of divorce using data from the ACS, to see whether economic indicators associated with the Great Recession predicted the odds of divorce (Cohen 2014). Because of data limitations, I used state-level indicators of unemployment and foreclosure rates to test for economic associations. Because the ACS is cross-sectional, and divorce is often associated with job instability, I could not use individual-level unemployment to predict individual divorce, as others have done (see review in Cohen 2014). Further, the ACS does not include any information about former spouses who are no longer living with divorced individuals, so spousal unemployment was not available either.

Rather than examine the association between individual job change and divorce, this paper tests the association between turnover at the job level and divorce at the individual level. It asks: are people who work in jobs that others are likely to leave themselves more likely to divorce? The answer – which is yes – suggests possible avenues for further study of the relationship between commitments and stressors in the arenas of paid work and family stability. Job turnover here is a contextual variable. Working in a job people are likely to leave may simply mean being exposed to involuntary job changes, which are a source of stress. However, it may also mean working in an environment with low levels of commitment between employers and employees. This analysis can’t differentiate stressor effects from commitment effects, or identify the nature (and direction) of commitments expressed or deployed at work or within the family. But it may provide motivation for future research.

Do job turnover and divorce run together?

Because individual (or spousal) job turnover and employment history are not available in the ACS, I use the March CPS, obtained from IPUMS (Flood et al. 2015), to calculate job turnover rates for simulated jobs, identified as detailed occupation-by-industry cells (Cohen and Huffman 2003). Although these are not jobs in the sense of specific workplaces, they provide much greater detail in work context than either occupation or industry alone, allowing differentiation, for example, between janitors in manufacturing establishments versus those in government offices, which are often substantially different contexts.

Turnover is identified for individuals whose current occupation and industry combination (as of March) does not match their primary occupation and industry for the previous calendar year, which is identified by a separate question (but using the same occupation and industry coding schemes). To reduce the influence of short-term transience, this calculation is limited to people who worked at least 20 weeks in the previous year, at more than 20 hours per week. Using the combined samples from the 2014-2016 CPS files, and restricting the sample to previous-year job cells with at least 25 respondents, I end up with 927 job cells. Note that, because the cells are national rather than workplace-specific, the size cutoff does not restrict the analysis to people working in large workplaces, but rather to common occupation-industry combinations. The job cells in the analysis include 68 percent of the eligible workers in the three years of CPS data.
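The turnover measure described above amounts to a simple tabulation over occupation-by-industry cells. Here is an illustrative pure-Python sketch of that logic, not the actual code from the OSF project; the records, field layout, and toy data are all hypothetical, and the cell-size cutoff is lowered so the toy data yield results:

```python
from collections import defaultdict

# Toy worker records: (last-year occ, last-year ind, current occ, current ind,
# weeks worked last year, usual hours per week). All values are hypothetical.
workers = [
    ("teacher", "schools", "teacher", "schools", 52, 40),
    ("teacher", "schools", "teacher", "schools", 50, 38),
    ("cashier", "retail",  "server",  "food",    40, 30),
    ("cashier", "retail",  "cashier", "retail",  45, 35),
    ("cashier", "retail",  "clerk",   "offices", 30, 25),
]

MIN_CELL_N = 2  # the paper uses 25; lowered here for the toy data

counts = defaultdict(lambda: [0, 0])  # cell -> [n workers, n changers]
for occ0, ind0, occ1, ind1, weeks, hours in workers:
    if weeks < 20 or hours <= 20:     # restrict to attached workers
        continue
    cell = (occ0, ind0)               # "job" = occupation x industry cell
    counts[cell][0] += 1
    if (occ1, ind1) != cell:          # current job differs from last year's
        counts[cell][1] += 1

# Turnover rate per last-year job cell, keeping only cells above the cutoff
turnover = {cell: chg / n for cell, (n, chg) in counts.items() if n >= MIN_CELL_N}
print(turnover)  # teacher/schools: 0.0; cashier/retail: 2/3
```

The real analysis does the same thing with detailed CPS occupation and industry codes and survey weights, which this sketch omits.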

For descriptive purposes, Table 1 shows the occupation and industry cells with the lowest and highest rates of job turnover from among those with sample sizes of 100 or more. Jobs with low turnover are disproportionately in the public sector and construction, and male-dominated (except schoolteachers); they are middle class and working class jobs. The high-turnover jobs, on the other hand, are in service industries (except light truck drivers) and are more female-dominated (Cohen 2013). By this simple definition, high-turnover jobs appear similar to precarious jobs as described by Kalleberg (2013) and others.

[Table 1]

Although the analysis that follows is limited to the CPS years 2014-2016 and the 2015 ACS, for context Figure 1 shows the percentage of workers who changed jobs each year, as defined above, from 1990 through 2016. Note that job changing, which is only identified for employed people, fell during the previous two recessions – especially the Great Recession that began in 2008 – perhaps because people who lost jobs would in better times have cycled into a different job instead of being unemployed. In the last two years job changing has been at relatively high levels (although note that CPS instituted a new industry coding scheme in 2014, with unknown effects on this measure). In any event, this phenomenon has not shown dramatic changes in prevalence for the past several decades.

[Figure 1]

Figure 1. Percentage of workers (20+ weeks, >20 hours per week) whose jobs (occupation-by-industry cells) in March differed from their primary job in the previous calendar year.

Using the occupation and industry codes from the CPS and ACS, which match for the years under study, I attach the job turnover rates from the 2014-2016 CPS data to individuals in the 2015 ACS (Ruggles et al. 2015). The analysis then uses the same modeling strategy as that used in Cohen (2014). Using the marital events variables in the ACS (Cohen 2015), I combine people ages 18-64 who are currently married (excluding those who got married in the previous year) with those who divorced in the previous year, and model the odds that individuals are in the divorced group. In this paper I essentially add the job turnover measure to the basic analysis in Cohen (2014, Table 3) (the covariates used here are the same, except that I added one category to the education variable).

One advantage of the ACS data structure is that the occupation and industry questions refer to the “current or most recent job,” so that people who are not employed at the time of the survey still have job characteristics recorded. Although that has the downside of introducing information from jobs in the distant past for some respondents, it has the benefit of including relevant job information for people who may have just quit (or lost) jobs as part of the constellation of events involved in their divorce (for example, someone who divorces, moves to a new area, and commences a job search). If job characteristics have an effect on the odds of divorce, this information clearly is important. The ACS sample size is 581,891, 1.7 percent of whom reported having divorced in the previous year.

Results from two multivariate regression analyses are presented in Table 2. The first model predicts the turnover rate in the ACS respondents’ jobs, using OLS regression. It shows that, ceteris paribus, turnover rates are higher in the jobs held by women, younger people (the inflection point is at age 42), people married more recently, those married fewer times, those with less than a BA degree, Blacks, Asians, Hispanics, and immigrants. Thus, job turnover is stratified along familiar lines of labor market advantage and disadvantage.

Most importantly for this paper, divorce is more likely for those whose most recent job had a higher turnover rate, as defined here. In a reduced model (not shown), with just age and sex, the logistic coefficient on job turnover was 1.39; the addition of the covariates in Table 2 reduced that effect by 39 percent, to .84, as shown in the second model. Beyond that, job turnover is predicted by some of the same characteristics as those associated with increased odds of divorce. Divorce odds are lower after age 25, with additional years of marriage, with a BA degree, and for Whites. However, divorce is also less common for Hispanics and immigrants, despite their higher job turnover. (The higher divorce rates for women in the ACS are not well understood; this is a self-reported measure, not a count of administrative events.)
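To put the .84 logistic coefficient on a more interpretable scale, it can be converted to an odds ratio for a given difference in turnover. A minimal sketch, assuming turnover enters the model as a proportion (0-1) – an assumption, since the variable coding is not shown here:

```python
import math

b = 0.84  # logistic coefficient on job turnover (Table 2, second model)

# Implied odds ratio for a 20-point difference in turnover
# (e.g., a job with 40% turnover versus one with 20%):
odds_ratio = math.exp(b * (0.40 - 0.20))
print(round(odds_ratio, 2))  # about 1.18, i.e., roughly 18% higher odds of divorce
```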

[Table 2]

To illustrate the relationship between job turnover and the probability of divorce, Figure 2 shows the average predicted probability of divorce (from the second model in Table 2) for each of the jobs represented, with markers scaled according to sample size and a regression line similarly weighted. Below 20 percent job turnover, people are generally predicted to have divorce rates less than 2 percent per year, with predicted rates rising to 2.5 percent at high turnover rates (40 percent).

[Figure 2: job changing effect, 2015 ACS-CPS]

Figure 2. Average predicted probability of divorce within jobs (from logistic model in Table 2), by turnover rate. Markers are scaled according to sample size, and the linear regression line shown is weighted by sample size.
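The predicted rates read off Figure 2 are consistent with the logistic coefficient. A back-of-the-envelope check, treating the 2 percent rate at 20 percent turnover as given and holding all covariates fixed at whatever values produce it (both assumptions for illustration):

```python
import math

b = 0.84    # job-turnover coefficient (Table 2, logistic model)
p20 = 0.02  # predicted divorce probability at 20% turnover (from Figure 2)

# Implied intercept (linear predictor at zero turnover, covariates held fixed):
b0 = math.log(p20 / (1 - p20)) - b * 0.20

# Predicted probability at 40% turnover:
p40 = 1 / (1 + math.exp(-(b0 + b * 0.40)))
print(round(p40, 3))  # roughly 0.024, near the 2.5% at the high end of Figure 2
```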

Conclusion

People who work in jobs with high turnover rates – that is, jobs that many people are no longer working in one year later – are also more likely to divorce. A reading of this inspired by Pugh’s (2015) analysis might be that people exposed to lower levels of commitment from employers and employees exhibit lower levels of commitment to their own marriages. Another, noncompeting explanation would be that the stress or hardship associated with high rates of job turnover contributes to difficulties within marriage. Alternatively, the turnover variable may simply be capturing other aspects of job quality that affect the risk of divorce, or individual qualities by which people select into both high-turnover jobs and marriages likely to end in divorce. This is a preliminary analysis, intended to raise questions and suggest avenues for studying them in the future.

References

Cohen, Philip N. 2013. “The Persistence of Workplace Gender Segregation in the US.” Sociology Compass 7 (11): 889–99. http://doi.org/10.1111/soc4.12083.

Cohen, Philip N. 2014. “Recession and Divorce in the United States, 2008–2011.” Population Research and Policy Review 33 (5): 615–28. http://doi.org/10.1007/s11113-014-9323-z.

Cohen, Philip N. 2015. “How We Really Can Study Divorce Using Just Five Questions and a Giant Sample.” Family Inequality. July 22. https://familyinequality.wordpress.com/2015/07/22/how-we-really-can-study-divorce/.

Cohen, Philip N., and Matt L. Huffman. 2003. “Individuals, Jobs, and Labor Markets: The Devaluation of Women’s Work.” American Sociological Review 68 (3): 443–63. http://doi.org/10.2307/1519732.

Kalleberg, Arne L. 2013. Good Jobs, Bad Jobs: The Rise of Polarized and Precarious Employment Systems in the United States 1970s to 2000s. New York, NY: Russell Sage Foundation.

Pugh, Allison J. 2015. The Tumbleweed Society: Working and Caring in an Age of Insecurity. New York, NY: Oxford University Press.

Ruggles, Steven, Katie Genadek, Ronald Goeken, Josiah Grover, and Matthew Sobek. 2015. Integrated Public Use Microdata Series: Version 6.0 [dataset]. Minneapolis: University of Minnesota. http://doi.org/10.18128/D010.V6.0.

Flood, Sarah, Miriam King, Steven Ruggles, and J. Robert Warren. 2015. Integrated Public Use Microdata Series, Current Population Survey: Version 4.0 [dataset]. Minneapolis: University of Minnesota. http://doi.org/10.18128/D030.V4.0.


Filed under Me @ work, Research reports

Two examples of why “Millennials” is wrong

When you make up “generation” labels for arbitrary groups based on year of birth, and start attributing personality traits, behaviors, and experiences to them as if they are an actual group, you add more noise than light to our understanding of social trends.

According to generation-guru Pew Research, “millennials” are born during the years 1981-1997. A Pew essay explaining the generations carefully notes that the divisions are arbitrary, and then proceeds to analyze data according to these divisions as if they are already real. (In fact, in the one place the essay talks about differences within generations, with regard to political attitudes, it’s clear that there is no political consistency within them, as they have to differentiate between “early” and “late” members of each “generation.”)

Amazingly, despite countless media reports on these “generations,” especially millennials, in a 2015 Pew survey only 40% of people who are supposed to be millennials could pick the name out of a lineup. That is, asked, “These are some commonly used names for generations. Which of these, if any, do you consider yourself to be?” and given the generation names (Silent, Baby Boom, X, Millennial), only 40% of people born after 1980 picked “millennial.”

“What do they know?” you’re saying. “Millennials.”

Two examples

The generational labels we’re currently saddled with create false divisions between groups that aren’t really groups, and then obscure important variation within the groups that are arbitrarily lumped together. Here is the first example: the employment experience of young men around the 2009 recession.

In this figure, I’ve taken three birth cohorts: men born four years apart in 1983, 1987, and 1991 — all “millennials” by the Pew definition. Using data from the 2001-2015 American Community Surveys via IPUMS.org, the figure shows their employment rates by age, with 2009 marked for each, coming at age 26, 22, and 18 respectively.

[Figure: employment rates by age for men born in 1983, 1987, and 1991]

Each group took a big hit, but their recoveries look quite different: the oldest (1983) cohort had not recovered as of 2015, while the youngest (1991) group bounced up to surpass the employment rates of the 1987 cohort by age 24. Timing matters. I reckon the year they hit the Great Recession matters more in their lives than the arbitrary lumping of them all together as against some other, older “generations.”
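The cohort comparison above is just employment rates grouped by birth cohort and age. A minimal sketch of that tabulation, with hypothetical toy records standing in for the ACS microdata (and omitting survey weights):

```python
from collections import defaultdict

# Toy person-year records: (survey year, birth year, employed 0/1).
# All values are hypothetical stand-ins for ACS microdata.
records = [
    (2009, 1983, 1), (2009, 1983, 0),
    (2015, 1983, 1), (2015, 1983, 1),
    (2009, 1991, 0), (2015, 1991, 1),
]

cells = defaultdict(lambda: [0, 0])  # (birth cohort, age) -> [n, n employed]
for year, born, employed in records:
    if born in (1983, 1987, 1991):   # the three "millennial" cohorts above
        key = (born, year - born)    # age at survey
        cells[key][0] += 1
        cells[key][1] += employed

# Employment rate for each cohort-age cell
emp_rate = {key: emp / n for key, (n, emp) in cells.items()}
print(emp_rate[(1983, 26)])  # 1983 cohort at age 26 (i.e., in 2009): 0.5 here
```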

Next, marriage rates. Here I use the Current Population Survey and analyze the percentage of young adults married by year of birth for people ages 18-29. This is from a regression that controls for year of age and sex, so it can be interpreted as marriage rates for young adults (click to enlarge).

[Figure: percent married at ages 18-29, by year of birth]

From the beginning of the Baby Boom generation to those born through 1987 (who turned 29 in 2016, the last year of CPS data), the marriage rate fell from 57% to 21%, or 36 percentage points. Most of that change, 22 points, occurred within the Baby Boom. The marriage experience of the “early” and “late” Baby Boomers is not comparable at all. The subsequent “generations” are also marked by continuously falling marriage rates, with no clear demarcation between the groups. (There is probably some fancy math someone could do to confirm that, with regard to marriage experience, group membership by these arbitrary criteria doesn’t tell you more than any other arbitrary grouping would.)

Anyway, there are lots of fascinating and important ways that birth cohort — or other cohort identifiers — matter in people’s lives. And we could learn more about them if we looked at the data before imposing the categories.


Filed under Research reports

Couple fact patterns about sexuality and attitudes

Working on the second edition of my book, The Family, involves updating facts as well as rethinking their presentation, and the choice of what to include. The only way I can do that is by making figures to look at myself. Here are some things I’ve worked up recently; they might not end up in the book, but I think they’re useful anyway.

1. Attitudes on sexuality and related family matters continue to grow more accepting or tolerant, but acceptance of homosexuality is growing faster than the others – at least those measured in the repeated Gallup surveys:

[Figure: Gallup trends in acceptance of homosexuality and related family matters]

2. Not surprisingly, there is wide divergence in the acceptance of homosexuality across religious groups. This uses the Pew Religious Landscape Study, which includes breakouts for atheists, agnostics, and two kinds of “nones,” or unaffiliated people — those for whom religion is important and those for whom it’s not:

[Figure: acceptance of homosexuality by religious group, Pew Religious Landscape Study]

3. Updated same-sex behavior and attraction figures from the National Survey of Family Growth. For some reason the NSFG reports don’t include the rates of same-sex partner behavior in the previous 12 months for women anymore, so I analyzed the data myself, and found a much lower rate of last-year behavior among women than they reported before (which, when I think about it, was unreasonably high – almost as high as the ever-had-same-sex-partner rates for women). Anyway, here it is:

[Figure: same-sex behavior and attraction, National Survey of Family Growth]

FYI, people who follow me on Twitter get some of this stuff quicker; people who follow on Instagram get it later or not at all.


Filed under Research reports

’16 and Pregnant’ and less so

[Photo from Flickr/CC: https://flic.kr/p/6dcJgA]

Regular readers know I have objections to the framing of teen pregnancy, as a thing generally and as a problem specifically, separate from the rising age at childbearing generally (see also, or follow the teen births tag).

In this debate, one economic analysis of the effect of the popular MTV show 16 and Pregnant has played an outsized role. Melissa Kearney and Phillip Levine showed that there was more decline in teen births in places where the show was popular, and attempted to establish that the relationship was causal — that the show made people under age 20 want to have babies less. As Kearney put it in a video promoting the study: “the portrayal of teen pregnancy, and teen childbearing, is something they took as a cautionary tale.” (The paper also showed spikes in Twitter and Google activity related to birth control after the show aired.)

This was very big news for the marriage promotion people, because it was taken as evidence that cultural intervention “works” to affect family behavior — which really matters because so far they’ve spent $1 billion+ in welfare money on promoting marriage, with no effect (none), and they want more money.

The 16 and Pregnant paper has been cited to support statements such as:

  • Brad Wilcox: “Campaigns against smoking and teenage and unintended pregnancy have demonstrated that sustained efforts to change behavior can work.”
  • Washington Post: “By working with Hollywood to develop smart story lines on popular shows such as MTV’s ’16 and Pregnant’ and using innovative videos and social media to change norms, the [National Campaign to Prevent Teen and Unplanned Pregnancy] has helped teen pregnancy rates drop by nearly 60 percent since 1991.”
  • Boston Globe: “As evidence of his optimism, [Brad] Wilcox points to teen pregnancy, which has dropped by more than 50 percent since the early 1990s. ‘Most people assumed you couldn’t do much around something related to sex and pregnancy and parenthood,’ he said. ‘Then a consensus emerged across right and left, and that consensus was supported by public policy and social norms. . . . We were able to move the dial.’ A 2014 paper found that the popular MTV reality show ’16 and Pregnant’ alone was responsible for a 5.7 percent decline in teen pregnancy in the 18 months after its debut.”

I think a higher age at first birth is better for women overall, health permitting, but I don’t support that as a policy goal in the U.S. now, although I expect it would be an outcome of things I do support, like better health, education, and job opportunities for people of color and people who are poor.

Anyway, this is all just preamble to a new debate arising from a reanalysis and critique of the 16 and Pregnant paper. I haven’t worked through it enough to reach my own conclusions, and I’d like to hear from others who have. So I’m just sharing the links in sequence.

The initial paper, posted as a (non-peer reviewed) NBER Working Paper in 2014:

Media Influences on Social Outcomes: The Impact of MTV’s 16 and Pregnant on Teen Childbearing, by Melissa S. Kearney, Phillip B. Levine

This paper explores how specific media images affect adolescent attitudes and outcomes. The specific context examined is the widely viewed MTV franchise, 16 and Pregnant, a series of reality TV shows including the Teen Mom sequels, which follow the lives of pregnant teenagers during the end of their pregnancy and early days of motherhood. We investigate whether the show influenced teens’ interest in contraceptive use or abortion, and whether it ultimately altered teen childbearing outcomes. We use data from Google Trends and Twitter to document changes in searches and tweets resulting from the show, Nielsen ratings data to capture geographic variation in viewership, and Vital Statistics birth data to measure changes in teen birth rates. We find that 16 and Pregnant led to more searches and tweets regarding birth control and abortion, and ultimately led to a 5.7 percent reduction in teen births in the 18 months following its introduction. This accounts for around one-third of the overall decline in teen births in the United States during that period.

A revised version, with the same title but slightly different results, was then published in the top-ranked American Economic Review, which is peer-reviewed:

This paper explores the impact of the introduction of the widely viewed MTV reality show 16 and Pregnant on teen childbearing. Our main analysis relates geographic variation in changes in teen childbearing rates to viewership of the show. We implement an instrumental variables (IV) strategy using local area MTV ratings data from a pre-period to predict local area 16 and Pregnant ratings. The results imply that this show led to a 4.3 percent reduction in teen births. An examination of Google Trends and Twitter data suggest that the show led to increased interest in contraceptive use and abortion.

Then last month David A. Jaeger, Theodore J. Joyce, and Robert Kaestner posted a critique on the Institute for the Study of Labor working paper series, which is not peer-reviewed:

Does Reality TV Induce Real Effects? On the Questionable Association Between 16 and Pregnant and Teenage Childbearing

We reassess recent and widely reported evidence that the MTV program 16 and Pregnant played a major role in reducing teen birth rates in the U.S. since it began broadcasting in 2009 (Kearney and Levine, American Economic Review 2015). We find Kearney and Levine’s identification strategy to be problematic. Through a series of placebo and other tests, we show that the exclusion restriction of their instrumental variables approach is not valid and find that the assumption of common trends in birth rates between low and high MTV-watching areas is not met. We also reassess Kearney and Levine’s evidence from social media and show that it is fragile and highly sensitive to the choice of included periods and to the use of weights. We conclude that Kearney and Levine’s results are uninformative about the effect of 16 and Pregnant on teen birth rates.

And now Kearney and Levine have posted their response on the same site:

Does Reality TV Induce Real Effects? A Response to Jaeger, Joyce, and Kaestner (2016)

This paper presents a response to Jaeger, Joyce, and Kaestner’s (JJK) recent critique (IZA Discussion Paper No. 10317) of our 2015 paper “Media Influences on Social Outcomes: The Impact of MTV’s 16 and Pregnant on Teen Childbearing.” In terms of replication, those authors are able to confirm every result in our paper. In terms of reassessment, the substance of their critique rests on the claim that the parallel trends assumption, necessary to attribute causation to our findings, is not satisfied. We present three main responses: (1) there is no evidence of a parallel trends assumption violation during our sample window of 2005 through 2010; (2) the finding of a false placebo test result during one particular earlier window of time does not invalidate the finding of a discrete break in trend at the time of the show’s introduction; (3) the results of our analysis are robust to virtually all alternative econometric specifications and sample windows that JJK consider. We conclude that this critique does not pose a serious threat to the interpretation of our 2015 findings. We maintain the position that our earlier paper is informative about the causal effect of 16 and Pregnant on teen birth rates.

So?

There are interesting methodological questions here. It’s hard to identify the effects of interventions that are swimming with the tide of change. In fact, the creation of the show, the show’s popularity, the campaign to end teen pregnancy, and the rising age at first birth may all be outcomes of the same general historical trend. So I’m not that invested in the answer to this question, though I am very interested.

There are also questions about the publication process, which I am very invested in. That’s why I work to promote a working paper culture among sociologists (through the SocArXiv project). The original paper was posted on a working paper site without peer review, but NBER is for economists who already are somebody, so that’s a kind of indirect screening. Then it was accepted in a top peer-reviewed journal (somewhat revised), but that was after it had received major attention and accolades, including a New York Times feature before the working paper was even released and a column devoted to it by Nicholas Kristof.

So is this a success story of working paper culture gone right — driving attention to good work faster, and then also drawing the benefits of peer review through the traditional publication process? (And now continuing with open debate on non-gated sites.) Or is it a case of political hype driving attention inside and outside of the academy — the kind of thing that scares researchers and makes them want to retreat behind the slower, more process-laden research flow, which they hope will protect them from exposure to embarrassment and protect the public from manipulation by the credulous news media? I think the process was okay even if we do conclude the paper wasn’t all it was made out to be. There were other reputational systems at work — faculty status, NBER membership, New York Times editors and sources — that may be as reliable as traditional peer review, which itself produces plenty of errors.

So, it’s an interesting situation — research methods, research implications, and research process.


Filed under Research reports

How broken is our system (hit me with that figure again edition)

Why do sociologists publish in academic journals? Sometimes it seems improbable that the main goal is sharing information and advancing scientific knowledge. Today’s example of our broken system, brought to my attention by Neal Caren, concerns three papers by Eran Shor, Arnout van de Rijt, Charles Ward, Aharon Blank-Gomel, and Steven Skiena (Shor et al.).

May 13, 2016 update: Eran Shor has sent me a response, which I posted here.

In a paywalled 2013 paper in Journalism Studies, the team used an analysis of names appearing in newspapers to report the gender composition of people mentioned. They analyzed the New York Times back to 1880, and then a larger sample of 13 newspapers from 1982 through 2005. Here’s one of their figures:

[Figure from the 2013 Journalism Studies paper]

The 2013 paper was a descriptive analysis, establishing that men are mentioned more than women over time.

In a paywalled 2014 article in Social Science Quarterly (SSQ) the team followed up. Except for a string-cite mention in the methods section, the second paper makes no reference to the first, giving no indication that the two are part of a developing project. They use this figure to motivate the analysis in the second paper, with no acknowledgment that it also appeared in the first:

[Figure from the 2014 SSQ paper]

Shor et al. 2014 asked,

How can we account for the consistency of these disparities? One possible factor that may explain at least some of these consistent gaps may be the political agendas and choices of specific newspapers.

Their hypothesis was:

H1: Newspapers that are typically classified as more liberal will exhibit a higher rate of female-subjects’ coverage than newspapers typically classified as conservative.

After analyzing the data, they concluded:

The proposition that liberal newspapers will be more likely to cover female subjects was not supported by our findings. In fact, we found a weak to moderate relationship between the two variables, but this relationship is in the opposite direction: Newspapers recognized (or ranked) as more “conservative” were more likely to cover female subjects than their more “liberal” counterparts, especially in articles reporting on sports.

They offered several caveats about this finding, including that the measure of political slant used is “somewhat crude.”

Clearly, much more work to be done. The next piece of the project was a 2015 article in American Sociological Review (which, as the featured article of the issue, was not paywalled by Sage). Again, without mentioning that the figure has been previously published, and with one passing reference to each of the previous papers, they motivated the analysis with the figure:

[Figure from the 2015 ASR paper]

Besides not getting the figure in color, ASR readers for some reason also don’t get 1982 in the data. (The paper makes no mention of the difference in period covered, which makes sense because it never mentions any connection to the analysis in the previous paper). The ASR paper asks of this figure, “How can we account for the persistence of this disparity?”

By now I bet you're thinking, "One way to account for this disparity is to consider the effects of political slant." Good idea. In fact, as presented in the ASR paper, the rationale for this question has hardly changed at all since the SSQ paper. Here are the two passages justifying the question.

From SSQ:

Former anecdotal evidence on the relationship between newspapers’ political slant and their rate of female-subjects coverage has been inconclusive. … [describing studies by Potter (1985) and Adkins Covert and Wasburn (2007)]…

Notwithstanding these anecdotal findings, there are a number of reasons to believe that more conservative outlets would be less likely to cover female subjects and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s rights issues in a relatively negative light (Baker Beck, 1998; Brescoll and LaFrance, 2004). Therefore, they may be less likely to devote coverage to these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors […]. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally (that is, conservatively) considered to be more important or interesting, such as politics, business, and sports, and less likely to report on issues such as social welfare, education, or fashion, where according to research women have a stronger presence (Holland, 1998; Ross, 2007, 2009; Ross and Carter, 2011).

From ASR:

Some work suggests that conservative newspapers may cover women less (Potter 1985), but other studies report the opposite tendency (Adkins Covert and Wasburn 2007; Shor et al. 2014a).

Notwithstanding these inconclusive findings, there are several reasons to believe that more conservative outlets will be less likely to cover women and women’s issues compared with their more liberal counterparts. First, conservative media often view feminism and women’s issues in a relatively negative light (Baker Beck 1998; Brescoll and LaFrance 2004), making them potentially less likely to cover these issues. Second, and related to the first point, conservative media may also be less likely to employ female reporters and female editors. Finally, conservative papers may be more likely to cover “hard” topics that are traditionally considered more important or interesting, such as politics, business, and sports, rather than reporting on issues such as social welfare, education, or fashion, where women have a stronger presence.

Except for a passing mention among the “other studies,” there is no connection to the previous analysis. The ASR hypothesis is:

Conservative newspapers will dedicate a smaller portion of their coverage to females.

On this question in the ASR paper, they conclude:

our analysis shows no significant relationship between newspaper coverage patterns and … a newspaper’s political tendencies.

It looks to me like the SSQ and ASR papers used the same data to test the same hypothesis (in addition to whatever else is new in the third paper). Given that they used the same data, how they got from a "weak to moderate relationship" to "no significant relationship" seems important. Should we no longer rely on the previous analysis? Or do these two papers just go into the giant heap of studies in which "some say this, some say that"? What kind of way is this to figure out what's going on?

Still love your system?

It’s fine to report the same findings in different venues and formats. It’s fine, that is, as long as it’s clear they’re not original in the subsequent tellings. (I personally have been known to regale my students, and family members, with the same stories over and over, but I try to remember to say, “Stop me if I already told you this one” first.)

I’m not judging Shor et al. for any particular violation of specific rules or norms. And I’m not judging the quality of the work overall. But I will just make the obvious observation that this way of presenting ongoing research is wasteful of resources, misleading to readers, and a hindrance to the development of research.

  • Wasteful because reviewers, editors, and publishers are essentially duplicating their efforts to try to figure out what is actually to be learned from these overlapping papers — and then to repackage and sell the duplicative information as new.
  • Misleading to readers because we now have “many studies” that show the same thing (or different things), without the clear acknowledgment that they use the same data.
  • And hindering research because of the wasteful delays and duplicative expenses involved in publishing research that should be clearly presented in cumulative, transparent fashion, in a timely way — which is what we need to move science forward.

Open science

When making (or hearing) arguments against open science as impractical or unreasonable, just weigh the wastefulness, misleadingness, and obstacles to science so prevalent in the current system against whatever advantages you think it holds. We can’t have a reasonable conversation about our publishing system based on the presumption that it’s working well now.

In an open science system, researchers publish their work openly (and for free), with open links between the different parts of a project. For example, researchers might publish one good justification for a hypothesis, with several separate analyses testing it, making clear what’s different in each test. Reviewers and readers could see the whole series. Other researchers would have access to the materials necessary for replication and extension of the work. And people would be judged for hiring and promotion according to the actual quality and quantity of their work and the contribution it makes to advancing knowledge, rather than through arbitrary counts of “publications” in private, paywalled journals. (The non-profit Center for Open Science is building a system like this now, and offers a free Open Science Framework, “A scholarly commons to connect the entire research cycle.”)

There are challenges to building this new system, of course, but any assessment of those challenges needs to be clear-eyed about the ridiculousness of the system we’re working under now.

Previous related posts have covered very similar publications, the opposition to open access, journal self-citation practices, and one publication’s saga.
