Category Archives: Research reports

Who’s happy in marriage? (Not just rich, White, religious men, but kind of)

I previously said there was a “bonafide trend back toward happiness” within marriage for the years 2006 to 2012. This was based on the General Social Survey trend going back to 1973, with married people responding to the question, “Taking all things together, how would you describe your marriage?”

Since then, the bonafide trend has lost its pop. Here’s my updated figure:

hapmar16

I repeated this analysis controlling for age, race/ethnicity, and education, with year specified in quadratic form. This shows happiness falling to a trough around 2004 and then starting to trend back up. But given the last two data points, confidence in that rebound is weak. Still, a solid majority are happy with their marriages.

Who’s happy?

But who are those happy-in-marriage people? Combining the last three surveys, 2012, 2014, and 2016, this is what we get (the effect of age and the non-effect of education are not shown). Note the y-axis starts at 50%.

hapmar16c

So to be happy in marriage, my expert opinion is you should become male and White, see yourself as upper class, go to church all the time, and have extreme political views. And if you’re not all those things, don’t let the marriage promoters tell you what your marriage is going to be like.

Note: I previously analyzed the political views angle, so this is an update to that. (On trends and determinants of social class identification, see this post.)


Here’s my Stata code, written to run on the full GSS data file through 2016. Play along at home!

set maxvar 10000
use "GSS7216_R1a.dta", clear
gen since73 = year-1973                       // years since the start of the series
gen rwgt = round(wtssall)                     // rounded weight, used as a frequency weight
keep if year > 1972 & inrange(hapmar, 1, 3)   // married respondents with a valid answer on marital happiness
gen verhap = 0
replace verhap = 1 if hapmar==1               // 1 = "very happy" marriage
* trend model: quadratic in year, controlling for sex, age, education, and race
logit verhap i.sex c.age##c.age i.degree i.race c.since73##c.since73 [weight=rwgt]
margins, at(since73=(0(1)43))                 // predicted probabilities, 1973-2016
* group differences, pooling the 2012, 2014, and 2016 surveys
recode attend (1/3=1) (4/6=2) (7/8=3), gen(attendcat)
logit verhap i.sex c.age##c.age i.degree i.race i.class i.attendcat i.polviews if year>2010 [weight=rwgt]
margins sex race class attendcat polviews if year>2010

 


On artificially intelligent gaydar

A paper by Yilun Wang and Michal Kosinski reports being able to identify gay and lesbian people from photographs using “deep neural networks,” which means computer software.

I’m not going to describe it in detail here, but the gist of it is they picked a large sample of people from a dating website who said they were looking for same-sex partners, and an equal number who said they were looking for different-sex partners, and trained their computers to learn the facial features that could distinguish the two groups (including facial structure measurements as well as grooming things like hairline and facial hair). For a deep dive on the context of this kind of research and its implications, and more on the researchers and the controversy, please read this post by Greggor Mattson first. These notes will be most useful after you’ve read that.

I also reviewed a gaydar paper five years ago, and some of the same critiques apply.

This figure from the paper gives you an idea:

gd4

These notes are how I would start my peer review, if I was peer reviewing this paper (which is already accepted and forthcoming in the Journal of Personality and Social Psychology — so much for peer review [just kidding it’s just a very flawed system]).

The gay samples here are “very” gay, in the sense of being out and looking for same-sex partners. This does not mean that they are “very” gay in any biological, or born-this-way sense. If you could quantitatively score people on the amount of their gayness (say on some kind of scale…), outness and same-sex attraction might be correlated, but they are different things. The correlation here is assumed, and assumed to be strong, but this is not demonstrated. (It’s funny that they think they address the problem of the sample by comparing the results with a sample from Facebook of people who like pages such as “I love being gay” and “Manhunt.”)

Another way of saying this is that the dependent variable is poorly defined, and conclusions from studying it are then generalized beyond the bounds of the research. So I don’t agree that the results:

provide strong support for the PHT [prenatal hormone theory], which argues that same-gender sexual orientation stems from the underexposure of male fetuses and overexposure of female fetuses to prenatal androgens responsible for the sexual differentiation of faces, preferences, and behavior.

If it were my study I might say the results are “consistent” with PHT theory, but it would be better to say, “not inconsistent” with the theory. (There is no data about hormones in the paper, obviously.)

The authors give too much weight to things their results can’t say anything about. For example, gay men in the sample are less likely to have beards. They write:

nature and nurture are likely to be as intertwined as in many other contexts. For example, it is unclear whether gay men were less likely to wear a beard because of nature (sparser facial hair) or nurture (fashion). If it is, in fact, fashion (nurture), to what extent is such a norm driven by the tendency of gay men to have sparser facial hair (nature)? Alternatively, could sparser facial hair (nature) stem from potential differences in diet, lifestyle, or environment (nurture)?

The statement is based on the faulty premise that “nature and nurture are likely to be as intertwined.” They have no evidence of this intertwining. They could just as well have said “it’s possible nature and nurture are intertwined,” or, with as much evidence, “in the unlikely event nature and nurture are intertwined.” So they load the discussion with the presumption of balance between nature and nurture, and then go on to speculate about sparse facial hair, for which they also have no evidence. (This happens to be the same way Charles Murray talks about race and IQ: there must be some intertwining between genetics and social forces, but we can’t say how much; now let’s talk about genetics because it’s definitely in there.)

Aside from the flaws in the study, the accuracy rate reported is easily misunderstood, or misrepresented. To choose one example, the Independent wrote:

According to its authors, who say they were “really disturbed” by their findings, the accuracy of an AI system can reach 91 per cent for homosexual men and 83 per cent for homosexual women.

The authors say this, which is important but of course overlooked in much of the news reporting:

The AUC = .91 does not imply that 91% of gay men in a given population can be identified, or that the classification results are correct 91% of the time. The performance of the classifier depends on the desired trade-off between precision (e.g., the fraction of gay people among those classified as gay) and recall (e.g., the fraction of gay people in the population correctly identified as gay). Aiming for high precision reduces recall, and vice versa.

They go on to give a technical, and I believe misleading, example. People should understand that the computer was always picking between two people, one of whom was identified as gay and the other not. It had a high percentage chance of getting that choice right. That’s not saying, “this person is gay”; it’s saying, “if I had to choose which one of these two people is gay, knowing that one is, I’d choose this one.” What they don’t answer is this: Given 100 random people, 7 of whom are gay, how many would the model correctly identify yes or no? That is the real-life question most people probably think the study is answering.
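To make that concrete, here is a toy calculation, with made-up numbers that are not from the paper: an AUC doesn’t fix a single threshold, but suppose you chose one that gave 80% sensitivity and 90% specificity, and then scanned 100 people, 7 of whom are gay.

local base = .07     // assumed base rate: 7 gay people per 100
local sens = .80     // hypothetical sensitivity at some chosen threshold
local spec = .90     // hypothetical specificity at that threshold
local tp = `sens' * `base' * 100                // correctly flagged: about 5.6 of the 7
local fp = (1 - `spec') * (1 - `base') * 100    // incorrectly flagged: about 9.3 of the other 93
display "flagged as gay: " %4.1f `tp' + `fp'
display "of whom gay:    " %4.1f `tp'
display "precision:      " %4.2f `tp' / (`tp' + `fp')

Even with those generous pretend numbers, most of the people flagged would not be gay, which is why a high AUC on balanced pairs says so little about scanning a real population.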

As technology writer Hal Hodson pointed out on Twitter, if someone wanted to scan a crowd and identify a small number of individuals who were likely to be gay (and ignoring many other people in the crowd who are also gay), this might work (with some false positives, of course).

gd1

Probably someone who wanted to do that would be up to no good, like an oppressive government or Amazon, and they would have better ways of finding gay people (like at pride parades, or looking on Facebook, or dating sites, or Amazon shopping history directly — which they already do of course). Such a bad actor could also train people to identify gay people based on many more social cues; the researchers here compare their computer algorithm to the accuracy of untrained people, and find their method better, but again that’s not a useful real-world comparison.

Aside: They make the odd decision, which such studies rarely bother to justify, to limit the sample to White participants (and they offer no justification for using the pseudoscientific term “Caucasian,” which you should never ever use because it doesn’t mean anything). Why couldn’t respondents (or software) look at a Black person and a White person and ask, “Which one is gay?” Any artificial increase in the homogeneity of the sample will increase the likelihood of finding patterns associated with sexual orientation, and misleadingly increase the reported accuracy of the method used. And of course statements like this should not be permitted: “We believe, however, that our results will likely generalize beyond the population studied here.”

Some readers may be disappointed to learn I don’t think the following is an unethical research question: Given a sample of people on a dating site, some of whom are looking for same-sex partners and some of whom are looking for different-sex partners, can we use computers to predict which is which? To the extent they did that, I think it’s OK. That’s not what they said they were doing, though, and that’s a problem.

I don’t know the individuals involved, their motivations, or their business ties. But if I were a company or government in the business of doing unethical things with data and tools like this, I would probably like to hire these researchers, and this paper would be good advertising for their services. It would be nice if they pledged not to contribute personally to such work, especially any efforts to identify people’s sexual orientation without their consent.


Women’s Equality Day earnings data stuff and suffrage note

Tomorrow is Women’s Equality Day, which commemorates the day, in 1920, when U.S. women were granted the right to vote. (Asterisk: White women.)

One historical story

Congress finally passed a Constitutional amendment for women’s suffrage in 1918, after decades of activism. The suffrage movement in the end successfully made a few convincing arguments – and one clarification. The most important may have been that White women had proved their patriotism during the war, and so they finally deserved the vote. I wrote in 1996:

“No one thing connected with the war is of more importance at this time than meeting the reasonable demand of millions of patriotic and Christian women of the Nation that the amendment for woman suffrage be submitted to the states,” declared Representative James Cantrill. And, he added, “Right, justice, liberty and democracy have always been, and will always be, safe in the tender care of American womanhood.”

And you know what he meant by “American womanhood” (an image the mainstream suffrage movement encouraged to various degrees over the years):

American_progress

American Progress, by John Gast (1872)

The important clarification was that women’s suffrage would absolutely not hurt White supremacy in the South. You know how it is when you just need that Southern vote. I went on:

If reluctant congressmen would only believe in the contribution of white women that was waiting to be made, suffrage advocates explained, the political math was irresistible. “There are more white women of voting age in the South to-day than there are negro men and women together,” [Congress’s only woman, Jeannette] Rankin said. Representative Scott Ferris assured them that poll taxes and literacy tests would remain untouched, so that “for every negro woman so enfranchised there will be hundreds and thousands of intelligent white women enfranchised” (Congressional Record 1918, 779). And Representative Thomas Blanton proclaimed, “So far as State rights are concerned, if this amendment sought to take away from any State the right of fixing the qualifications of its voters, I would be against it first, last, and all the time, but such it does not.” Although states should be allowed to set qualifications for voting, he believed, they could not do so at the expense of undermining true republicanism, and, “if you deny the 14,000,000 white women of this country the right to vote, you are interfering with a republican form of government [Applause]” (786). That day, the House passed the amendment with the required two-thirds vote.

Anyway, rights are rights, America is America, history is history (ha ha).

Some pay gap numbers

Back to nowadays. Today’s numbers come from some analysis of the gender earnings gap I did to support the Council on Contemporary Families brief for Women’s Equality Day. One big story is women’s rising education levels, especially BA completion.

In the prime-age labor force (age 25-54, working at least 20 hours per week and 26 weeks in the previous year), women surpassed men in BA completion in 2002:

wed1

That’s very good for women with regard to the earnings gap, because at every level of education men earn more than women. Women’s full-time full-year earnings are between 70% and 80% of men’s at all education levels except the highest, where they diverge: men who are doctors and lawyers earn much more than women, while women PhDs are doing relatively well. Here’s the 2015 breakdown by education:

wed2

With the education trend and differentials in mind, consider these multivariate model results. Going back to the sample of 25-54-year-old people working at least half-time and half the year, here are two results. The first line, in blue, shows the gender earnings ratio when only age is controlled. It shows women gaining on men from 1992 to 2016, from 77% to 83%. This is not much progress for 25 years, but it’s the slow pace we’ve come to expect during that time. The other line shows results from a more complete model, which adds controls for education, race/ethnicity, marital status, and presence of children; it shows even less progress.

wed3

In the full model (orange line) the relative gains for women are not as great. (Note I don’t include occupation in the “full” model even though it’s very important; occupation is itself partly an outcome of gender, so I leave its effect in the gender gap for descriptive purposes.)
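For anyone who wants to see the shape of those two models, here is a minimal sketch, one way to get an adjusted ratio from a log-earnings regression. The variable names (incwage, educ, marst, nchild, perwt) are generic IPUMS-style stand-ins, not the exact setup; the real code and data are in the OSF files linked below.

gen lnearn = ln(incwage)                  // log earnings
gen byte female = (sex == 2)
forvalues y = 1992/2016 {
    * age-only model: the gender ratio is exp() of the female coefficient
    quietly reg lnearn i.female c.age##c.age if year == `y' [pw=perwt]
    local r1 = exp(_b[1.female])
    * "full" model: add education, race/ethnicity, marital status, and children
    quietly reg lnearn i.female c.age##c.age i.educ i.race i.hispan i.marst i.nchild ///
        if year == `y' [pw=perwt]
    display `y' "  age-only ratio = " %5.3f `r1' "  full-model ratio = " %5.3f exp(_b[1.female])
}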

In the old days, when women had less education than men, controlling for education shrank the gap; now it appears the opposite is true. I haven’t done the whole decomposition to confirm this, but here’s another way to look at it. The next figure shows the same models, but in two separate samples, with and without BA degrees (and no control for education). The figure shows little progress within education groups. This implies it’s the increase in education for women that is driving the progress seen in the previous figure.

wed4

In conclusion: there is a substantial gender earnings gap at every level of education. The limited progress toward equality we’ve seen in the past 25 years may be driven by increases in women’s education.

There is a lot of other research on this — especially about segregation, which I didn’t include here — and a lot more to be done.


This is a little analysis, but if you’d like to do more, or see how I did what I’ve shown here, I posted the Stata code, data from IPUMS.org, codebook, and spreadsheet file on the Open Science Framework site here. You can use any of it for whatever you like, with a citation like this one, which the OSF generates:

Cohen, P. N. (2017, August 25). Gender wage gap analysis, 1992-2016. Retrieved from osf.io/mhp3z


Sixteen minutes on The Tumbleweed Society

At the American Sociological Association conference, just concluded, I was on an author-meets-critics panel for Allison Pugh’s book, The Tumbleweed Society: Working and Caring in an Age of Insecurity. The other day I put up a short paper inspired by my reading, on SocArXiv (data and code here).

Here is my talk itself, in an audio file, complete with 6 seconds of music at the beginning and the end, and a lot of the ums and tangents taken out, running 16 minutes. Download it here, or listen below. And below that are the figures I reference in the talk, but you won’t really need them.

ap1

t1

job changing effect 2015 ACS-CPS

Figure 2. Average predicted probability of divorce within jobs (from logistic model in Table 2), by turnover rate. Markers are scaled according to sample size, and the linear regression line shown is weighted by sample size.


Job turnover and divorce (preconference preprint)

As I prepared to discuss Allison Pugh’s interesting and insightful 2015 book, The Tumbleweed Society: Working and Caring in an Age of Insecurity, on an author-meets-critics panel at the American Sociological Association meetings in Montreal next week (Monday at 4:30), I talked myself into doing a quick analysis inspired by the book. (And no, I won’t hijack the panel to talk about this; I will talk about her book.)

From the publisher’s description:

In The Tumbleweed Society, Allison Pugh offers a moving exploration of sacrifice, betrayal, defiance, and resignation, as people adapt to insecurity with their own negotiations of commitment on the job and in intimate life. When people no longer expect commitment from their employers, how do they think about their own obligations? How do we raise children, put down roots in our communities, and live up to our promises at a time when flexibility and job insecurity reign?

Since to a little kid with a hammer everything looks like a nail, I asked myself yesterday, what could I do with my divorce models that might shed light on this connection between job insecurity and family commitments? The result is a very short paper, which I have posted on SocArXiv here (with supporting data and code in the associated OSF project shared here). But here it is in blog form; someday maybe I’ll elaborate it into a full paper.


Job Turnover and Divorce

Introduction

In The Tumbleweed Society, Pugh (2015) explores the relationship between commitments at work – between employers and employees – and those at home, between partners. She finds no simple relationship such that, for example, people who feel their employers owe them nothing also have low commitment to their spouses. Rather, there is a complex web of commitments, and views of what constitutes an honorable level of commitment in different arenas. This paper is inspired by that discussion, and explores one possible connection between work and couple stability, using a new combination of data from the Current Population Survey (CPS) and the American Community Survey (ACS).

In a previous paper I analyzed predictors of divorce using data from the ACS, to see whether economic indicators associated with the Great Recession predicted the odds of divorce (Cohen 2014). Because of data limitations, I used state-level indicators of unemployment and foreclosure rates to test for economic associations. Because the ACS is cross-sectional, and divorce is often associated with job instability, I could not use individual-level unemployment to predict individual divorce, as others have done (see review in Cohen 2014). Further, the ACS does not include any information about former spouses who are no longer living with divorced individuals, so spousal unemployment was not available either.

Rather than examine the association between individual job change and divorce, this paper tests the association between turnover at the job level and divorce at the individual level. It asks, are people who work in jobs that others are likely to leave themselves more likely to divorce? The answer – which is yes – suggests possible avenues for further study of the relationship between commitments and stressors in the arenas of paid work and family stability. Job turnover here is a contextual variable. Working in a job people are likely to leave may simply mean people are exposed to involuntary job changes, which is a source of stress. However, it may also mean people work in an environment with low levels of commitment between employers and employees. This analysis can’t differentiate potential stressors versus commitment effects, or identify the nature (and direction) of commitments expressed or deployed at work or within the family. But it may provide motivation for future research.

Do job turnover and divorce run together?

Because individual (or spousal) job turnover and employment history are not available in the ACS, I use the March CPS, obtained from IPUMS (Flood et al. 2015), to calculate job turnover rates for simulated jobs, identified as detailed occupation-by-industry cells (Cohen and Huffman 2003). Although these are not jobs in the sense of specific workplaces, they provide much greater detail in work context than either occupation or industry alone, allowing differentiation, for example, between janitors in manufacturing establishments versus those in government offices, which are often substantially different contexts.

Turnover is identified by individuals whose current occupation and industry combination (as of March) does not match their primary occupation and industry for the previous calendar year, which is identified by a separate question (but using the same occupation and industry coding schemes). To reduce short-term transience, this calculation is limited to people who worked at least 20 weeks in the previous year, and more than 20 hours per week. Using the combined samples from the 2014-2016 CPS files, and restricting the sample to previous-year job cells with at least 25 respondents, I end up with 927 job cells. Note that, because the cells are national rather than workplace-specific, the size cutoff does not restrict the analysis to people working in large workplaces, but rather to common occupation-industry combinations. The job cells in the analysis include 68 percent of the eligible workers in the three years of CPS data.
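Here is a rough sketch of that turnover calculation in Stata. The IPUMS CPS variable names (occ, ind, occly, indly, wkswork1, uhrsworkly) and file names are my best-guess stand-ins, not the exact code behind Table 1; the real files are in the OSF project linked above.

* turnover rates for occupation-by-industry job cells, pooled 2014-2016 ASEC
use cps_asec_2014_2016.dta, clear
keep if wkswork1 >= 20 & uhrsworkly > 20                // 20+ weeks and >20 hours/week last year
gen byte changed = (occ != occly) | (ind != indly)      // March job differs from last year's primary job
collapse (mean) turnover = changed (count) cellsize = changed, by(occly indly)
keep if cellsize >= 25                                  // previous-year job cells with 25+ respondents
rename (occly indly) (occ ind)                          // so the cells can be attached to ACS codes below
save cps_turnover_rates.dta, replace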

For descriptive purposes, Table 1 shows the occupation and industry cells with the lowest and highest rates of job turnover from among those with sample sizes of 100 or more. Jobs with low turnover are disproportionately in the public sector and construction, and male-dominated (except schoolteachers); they are middle class and working class jobs. The high-turnover jobs, on the other hand, are in service industries (except light truck drivers) and are more female-dominated (Cohen 2013). By this simple definition, high-turnover jobs appear similar to precarious jobs as described by Kalleberg (2013) and others.

t1

Although the analysis that follows is limited to the CPS years 2014-2016 and the 2015 ACS, for context Figure 1 shows the percentage of workers who changed jobs each year, as defined above, from 1990 through 2016. Note that job changing, which is only identified for employed people, fell during the previous two recessions – especially the Great Recession that began in 2008 – perhaps because people who lost jobs would in better times have cycled into a different job instead of being unemployed. In the last two years job changing has been at relatively high levels (although note that CPS instituted a new industry coding scheme in 2014, with unknown effects on this measure). In any event, this phenomenon has not shown dramatic changes in prevalence for the past several decades.

f1

Figure 1. Percentage of workers (20+ weeks, >20 hours per week) whose jobs (occupation-by-industry cells) in March differed from their primary job in the previous calendar year.

Using the occupation and industry codes from the CPS and ACS, which match for the years under study, I attach the job turnover rates from the 2014-2016 CPS data to individuals in the 2015 ACS (Ruggles et al. 2015). The analysis then uses the same modeling strategy as that used in Cohen (2014). Using the marital events variables in the ACS (Cohen 2015), I combine people, age 18-64, who are currently married (excluding those who got married in the previous year) and those who have been divorced in the previous year, and model the odds that individuals are in the divorced group. In this paper I essentially add the job turnover measure to the basic analysis in Cohen (2014, Table 3) (the covariates used here are the same except that I added one category to the education variable).
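A corresponding sketch of the ACS side, again with IPUMS-style names (divinyr, marrinyr, marst, marrno, perwt) standing in for the real setup, which is in the OSF project:

* attach job-cell turnover rates to 2015 ACS individuals and model divorce
use acs_2015.dta, clear
keep if age >= 18 & age <= 64
gen byte divorced = (divinyr == 2)                          // divorced in the past year
keep if (inlist(marst, 1, 2) & marrinyr != 2) | divorced    // married (not newlywed) or just divorced
merge m:1 occ ind using cps_turnover_rates.dta, keep(match) nogen
logit divorced c.turnover c.age##c.age i.sex i.educ i.race i.hispan i.marrno [pw=perwt]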

One advantage of the ACS data structure is that the occupation and industry questions refer to the “current or most recent job,” so that people who are not employed at the time of the survey still have job characteristics recorded. Although that has the downside of introducing information from jobs in the distant past for some respondents, it has the benefit of including relevant job information for people who may have just quit (or lost) jobs as part of the constellation of events involved in their divorce (for example, someone who divorces, moves to a new area, and commences a job search). If job characteristics have an effect on the odds of divorce, this information clearly is important. The ACS sample size is 581,891, 1.7 percent of whom reported having divorced in the previous year.

Results from two multivariate regression analyses are presented in Table 2. The first model predicts the turnover rate in the ACS respondents’ job, using OLS regression. It shows that, ceteris paribus, turnover rates are higher in the jobs held by women, younger people (the inflection point is at age 42), people married more recently, those married fewer times, those with less than a BA degree, Blacks, Asians, Hispanics, and immigrants. Thus, job turnover largely follows familiar patterns of labor market disadvantage.

Most importantly for this paper, divorce is more likely for those whose most recent job had a higher turnover rate, as defined here. In a reduced model (not shown), with just age and sex, the logistic coefficient on job turnover was 1.39; the addition of the covariates in Table 2 reduced that effect by 39 percent, to .84, as shown in the second model. Beyond that, job turnover is predicted by some of the same characteristics as those associated with increased odds of divorce. Divorce odds are lower after age 25, with additional years of marriage, with a BA degree, and for Whites. However, unlike the turnover pattern, divorce is less common for Hispanics and immigrants. (The higher divorce rates for women in the ACS are not well understood; this is a self-reported measure, not a count of administrative events.)

t2

To illustrate the relationship between job turnover and the probability of divorce, Figure 2 shows the average predicted probability of divorce (from the second model in Table 2) for each of the jobs represented, with markers scaled according to sample size and a regression line similarly weighted. Below 20 percent job turnover, people are generally predicted to have divorce rates less than 2 percent per year, with predicted rates rising to 2.5 percent at high turnover rates (40 percent).

job changing effect 2015 ACS-CPS

Figure 2. Average predicted probability of divorce within jobs (from logistic model in Table 2), by turnover rate. Markers are scaled according to sample size, and the linear regression line shown is weighted by sample size.
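Figure 2 can be approximated from the fitted model above with something like this (again a sketch, not the exact code):

predict phat, pr                                         // predicted probability of divorce
collapse (mean) phat turnover (count) n = divorced, by(occ ind)
twoway scatter phat turnover [aw=n] || lfit phat turnover [aw=n]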

Conclusion

People who work in jobs with high turnover rates – that is, jobs that many people are no longer working in one year later – are also more likely to divorce. A reading of this inspired by Pugh’s (2015) analysis might be that people exposed to lower levels of commitment from employers and employees exhibit lower levels of commitment to their own marriages. Another, noncompeting explanation would be that the stress or hardship associated with high rates of job turnover contributes to difficulties within marriage. Alternatively, the turnover variable may simply be statistically capturing other aspects of job quality that affect the risk of divorce, or there are individual qualities by which people select into both jobs with high turnover and marriages likely to end in divorce. This is a preliminary analysis, intended to raise questions and offer some avenues for analyzing them in the future.

References

Cohen, Philip N. 2013. “The Persistence of Workplace Gender Segregation in the US.” Sociology Compass 7 (11): 889–99. http://doi.org/10.1111/soc4.12083.

Cohen, Philip N. 2014. “Recession and Divorce in the United States, 2008–2011.” Population Research and Policy Review 33 (5): 615–28. http://doi.org/10.1007/s11113-014-9323-z.

Cohen, Philip N. 2015. “How We Really Can Study Divorce Using Just Five Questions and a Giant Sample.” Family Inequality. July 22. https://familyinequality.wordpress.com/2015/07/22/how-we-really-can-study-divorce/.

Cohen, Philip N., and Matt L. Huffman. 2003. “Individuals, Jobs, and Labor Markets: The Devaluation of Women’s Work.” American Sociological Review 68 (3): 443–63. http://doi.org/10.2307/1519732.

Kalleberg, Arne L. 2013. Good Jobs, Bad Jobs: The Rise of Polarized and Precarious Employment Systems in the United States 1970s to 2000s. New York, NY: Russell Sage Foundation.

Pugh, Allison J. 2015. The Tumbleweed Society: Working and Caring in an Age of Insecurity. New York, NY: Oxford University Press.

Ruggles, Steven, Katie Genadek, Ronald Goeken, Josiah Grover, and Matthew Sobek. 2015. Integrated Public Use Microdata Series: Version 6.0 [dataset]. Minneapolis: University of Minnesota. http://doi.org/10.18128/D010.V6.0.

Flood, Sarah, Miriam King, Steven Ruggles, and J. Robert Warren. 2015. Integrated Public Use Microdata Series, Current Population Survey: Version 4.0 [dataset]. Minneapolis: University of Minnesota. http://doi.org/10.18128/D030.V4.0.


Two examples of why “Millennials” is wrong

When you make up “generation” labels for arbitrary groups based on year of birth, and start attributing personality traits, behaviors, and experiences to them as if they are an actual group, you add more noise than light to our understanding of social trends.

According to generation-guru Pew Research, “millennials” are born during the years 1981-1997. A Pew essay on the generations carefully explains that the divisions are arbitrary, and then proceeds to analyze data according to those divisions as if they are already real. (In fact, in the one place the essay talks about differences within generations, with regard to political attitudes, it’s clear that there is no political consistency within them, as they have to differentiate between “early” and “late” members of each “generation.”)

Amazingly, despite countless media reports on these “generations,” especially millennials, in a 2015 Pew survey only 40% of people who are supposed to be millennials could pick the name out of a lineup — that is, asked, “These are some commonly used names for generations. Which of these, if any, do you consider yourself to be?”, and then given the generation names (silent, baby boom, X, millennial), 40% of people born after 1980 picked “millennial.”

“What do they know?” you’re saying. “Millennials.”

Two examples

The generational labels we’re currently saddled with create false divisions between groups that aren’t really groups, and then obscure important variation within the groups that are arbitrarily lumped together. Here is the first example: the employment experience of young men around the 2009 recession.

In this figure, I’ve taken three birth cohorts: men born four years apart in 1983, 1987, and 1991 — all “millennials” by the Pew definition. Using data from the 2001-2015 American Community Surveys via IPUMS.org, the figure shows their employment rates by age, with 2009 marked for each, coming at age 26, 22, and 18 respectively.

milemp

Each group took a big hit, but their recoveries look pretty different, with the earliest (1983) cohort not recovered as of 2015, while the youngest (1991) group bounced up to surpass the employment rates of the 1987s by age 24. Timing matters. I reckon the year they hit that great recession matters more in their lives than the arbitrary lumping of them all together, as opposed to some other, older “generations.”
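Here is roughly how a figure like this can be built from the ACS. The IPUMS names (empstat, perwt) and the empstat coding are assumptions to check against the codebook, not the exact code I used.

* employment rates by age for men born in 1983, 1987, and 1991
use acs_2001_2015.dta, clear
gen cohort = year - age                             // approximate birth year
keep if sex == 1 & inlist(cohort, 1983, 1987, 1991)
gen byte employed = (empstat == 1)
collapse (mean) employed [pw=perwt], by(cohort age)
twoway connected employed age if cohort == 1983 ||  ///
       connected employed age if cohort == 1987 ||  ///
       connected employed age if cohort == 1991, legend(order(1 "1983" 2 "1987" 3 "1991"))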

Next, marriage rates. Here I use the Current Population Survey and analyze the percentage of young adults married by year of birth for people ages 18-29. This is from a regression that controls for year of age and sex, so it can be interpreted as marriage rates for young adults.

gens-marriage

From the beginning of the Baby Boom generation to those born through 1987 (who turned 29 in 2016, the last year of CPS data), the marriage rate fell from 57% to 21%, or 36 percentage points. Most of that change, 22 points, occurred within the Baby Boom. The marriage experience of the “early” and “late” Baby Boomers is not comparable at all. The subsequent “generations” are also marked by continuously falling marriage rates, with no clear demarcation between the groups. (There is probably some fancy math someone could do to confirm that, with regard to marriage experience, group membership by these arbitrary criteria doesn’t tell you more than any other arbitrary grouping would.)
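A sketch of the marriage estimates, in the same spirit (marst, wtsupp, and the file name are assumed IPUMS CPS stand-ins, and this is not the exact model):

* percent married at ages 18-29 by year of birth, adjusted for age and sex
use cps_asec_1962_2016.dta, clear
keep if age >= 18 & age <= 29
gen cohort = year - age
gen byte married = inlist(marst, 1, 2)              // married, spouse present or absent
logit married i.age i.sex i.cohort [pw=wtsupp]
margins cohort                                      // adjusted marriage rates by birth year
marginsplot, xdimension(cohort)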

Anyway, there are lots of fascinating and important ways that birth cohort — or other cohort identifiers — matter in people’s lives. And we could learn more about them if we looked at the data before imposing the categories.
