Category Archives: In the news

Why aren’t female Charlies killing the name Charles?


Geena Davis as the best female movie Charly (The Long Kiss Goodnight, 1996)

Charles was a top-10 name for boys in the U.S. into the 1950s, and it has always been more than 99% male. American parents have shown no interest in breaking down that barrier. However, since the early 2000s, they have started naming their daughters Charlie, Charlee, Charleigh, Charli, Charley, and Charly. Last year 4,882 girls got one of those names, which is more than Anna or Samantha (and more than twice as many as were named Mary).

Near the start of that wave, the Disney TV show Good Luck Charlie — about a married, White couple with four children, the youngest of whom was named Charlotte (nicknamed Charlie) — debuted in 2010 and peaked in 2012 with 7.5 million viewers on one Sunday.

promo image from Disney show Good Luck Charlie

But Charlie has not become a girls’ name. As I reported last week, Charlie is now the most common androgynous name (between 40% and 60% female), with 3,556 births split almost equally between boys and girls. The other variations are more female: all versions of Charlie together are 74% female.
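The percent-female of a name is just its female count over its total count in the SSA birth files. Here is a minimal Python sketch of that classification; the counts are invented for illustration, except that the two Charlie rows sum to the 3,556 figure above.

```python
from collections import defaultdict

# (name, sex, count) rows in the layout of the SSA "yobYYYY.txt" files.
# Counts are invented, except the Charlie rows sum to the 3,556 above.
rows = [
    ("Charlie", "F", 1800), ("Charlie", "M", 1756),
    ("Charlotte", "F", 12900), ("Charlotte", "M", 15),
    ("Charles", "M", 6500), ("Charles", "F", 13),
]

counts = defaultdict(lambda: {"F": 0, "M": 0})
for name, sex, n in rows:
    counts[name][sex] += n

def pct_female(name):
    c = counts[name]
    return 100 * c["F"] / (c["F"] + c["M"])

def is_androgynous(name, lo=40, hi=60):
    """Androgynous by the post's definition: 40-60% female."""
    return lo <= pct_female(name) <= hi

for name in counts:
    print(f"{name}: {pct_female(name):.1f}% female, androgynous={is_androgynous(name)}")
```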

So, with girls pouring in, are parents heading for the exits, as we saw with names like Taylor and Kim? Not yet. Charles is much less common than it once was, but it has not slipped appreciably since girls started picking up its nickname. Here are the trends back to 1880:


As girl Charlies have gained ground, in fact, even the spelling Charlie is rising in the rankings for boys, up to 218th last year from 306th a decade ago. Parents are now naming their boys Charlie at twice the rate they did in 1968. This figure zooms in on the Charlie wars for the last 50 years. (For this I combine all the spellings for boys, but 92% of them are Charlies.)


If Charlie follows the path of previous gender battleground names, however (see Tristan Bridges’ two posts on this from last week), we might still see a male crash, or a female crash, or both. Androgyny has historically been unstable in this system, especially when (from parents’ point of view) femininity contaminates a masculine space.

If the collapse doesn’t come, maybe it will be because both sides have gender-unambiguous reinforcements: Charles for boys (99.8% male), and Charlotte for girls (99.9% female). So parents who like the name Charlie, including those who may choose it precisely because of its androgynous image, also know they have a gendered space they or their children can retreat to if necessary.

Data for this analysis are from the Social Security Administration. The data files and my Stata code are available on the OSF, here.


Filed under In the news

Taylor, Kim and the declining sex binary in names

I’ll get to Taylor and Kim, but first more general data.

How gender binary is the practice of naming babies in the U.S.? Very. In 2018, 76% of babies were given names that were more than 99% male or female, according to data from the Social Security Administration (which releases name counts for only two sex categories).

That looks extreme (kurtosis = 1.06!), but 76% is actually the lowest that number has ever been. Here is the trend in babies with >99%-typed names back to 1880 (note the y-axis starts at 70%):


How important are the trends in name binaryness?

In her New York Times article on the rise of nonbinary gender identities among young Americans, and a follow-up, Amy Harmon interviewed nonbinary people named Flynn, Keyden, and Charley. (In 2018, 85% of the babies given the name Charley were identified as girls at birth, compared with 0.2% of those named Charles and 52% of those named Charlie — the most androgynous spelling of the three).*

One notable development in the striking rise of nonbinary identities has been the supportiveness of some parents. But are such parents reacting positively to their children’s development, or — not waiting to be prompted — giving their babies more androgynous names at birth? Extreme sex-dominance of names has become less common, but still dominates. And truly androgynous names, say, between 40% and 60% associated with one sex, are very rare.

Over the long run, the U.S. is becoming a less sex-binary society, but that evolution is far from direct. From 1950 to 1975 (the period featured in Jo Paoletti’s book on the unisex movement in fashion), the percentage of babies given names that were less than 95% associated with a dominant sex almost doubled, to 7.4%. And since then it has increased to 13%. However, the percentage given names that are between 40% and 60% sex-dominant remains barely over 1%. Here are those trends, back to 1940, using data from the Social Security Administration.
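The >99% threshold works the same way: classify each name by its dominant-sex share, then count the births that fall under such names. A sketch of that tabulation, with toy counts rather than real SSA data:

```python
from collections import defaultdict

rows = [   # (name, sex, count): toy counts, not real SSA data
    ("Mary", "F", 1000), ("Mary", "M", 2),
    ("John", "M", 1000),
    ("Charlie", "F", 500), ("Charlie", "M", 500),
]

by_name = defaultdict(lambda: {"F": 0, "M": 0})
for name, sex, n in rows:
    by_name[name][sex] += n

total = sum(n for _, _, n in rows)
typed = 0
for name, c in by_name.items():
    births = c["F"] + c["M"]
    if max(c["F"], c["M"]) / births > 0.99:   # name is >99% one sex
        typed += births                       # all its births count as sex-typed

share_typed = typed / total
print(f"{share_typed:.1%} of babies got >99% sex-typed names")
```

Lowering the 0.99 cutoff to 0.95, or replacing it with the 40-60% band, gives the other series shown in the figures.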


Are the parents giving androgynous names even doing it on purpose? I’m not sure how we can tell. Despite phonetic cues, which are guides but not rules, the gender of a name is ultimately determined by the gender of the people who have it. When names are very rare, it’s likely parents just don’t know the sex of the other babies getting the name. Maybe parents giving the names Charlie, Finley, and Dakota — the most popular androgynous names — chose them because they like their androgynousness. But others, like Justice or Ocean, probably just don’t have stable genders attached to them. And the conventional wisdom (from Stanley Lieberson and colleagues) is that androgynous names are not stable — they either swing toward one gender or fade away.

Here are the most common names between 40% and 60% sex dominant in 2018. Maybe blog readers can say something about the motives of the parents using these.


In that 2000 paper by Lieberson et al., which used data on Whites only from Illinois, through 1989 (how did people ever do sociology with such paltry data available to them?), they reported that the parents of girls are more likely to assign them androgynous names than the parents of boys are. That is consistent with the idea that the penalty for gender non-conformity is greater for boys than for girls, and that femaleness, more than non-conformity itself, is the contaminant — which is why the move toward gender equality meant women wearing pants more than men wearing dresses. But now that may have reversed. Boys are now more likely to be given names that are less than 95% sex-dominant.


I think this is a good avenue for exploring changes in gender attitudes, including regarding nonbinary identities and gender conformity. This will require looking beyond name count trends, obviously.

Kim and Taylor

Another avenue for research involves name contamination (another Lieberson idea, which Tristan Bridges and I have written about; see also earlier posts). From a wide angle, it’s easy to see that androgynous names usually don’t stay that way, or they disappear. But the specific mechanism may be that parents of boys are spooked by the rising femininity of a name and thus turn away from it.

In that Lieberson et al. article, they cite the case of Kim, which (among Whites in Illinois) was increasing among both boys and girls before Kim Novak burst on the scene in 1954, as a sexy female movie star. And they also observe the rise of Taylor, just beginning by the end of their dataset, in 1989. Now we can update that, and expand it to the whole country, to see the amazing similarity of the cases. Amazing similarity, that is, if you remember who Taylor Dayne is.


Taylor Dayne was a big deal very briefly, at the end of the 1980s, with three gold singles, “Tell It to My Heart”, “I’ll Always Love You,” and “Love Will Lead You Back.” She was nominated for a Best R&B Vocal Performance Grammy for “I’ll Always Love You,” in 1988 (losing to Aretha Franklin). Did Taylor Dayne kill Taylor — right after giving us Taylor Swift (born 1989)? I’m open to other suggestions, but I think it fits. She was a big star briefly, and the music she made (no offense) didn’t turn out to be the most memorable of the period, which was awkwardly sandwiched between decades. There is a difference in scale between the cases, as Taylor peaked at the #6 most popular girls’ name and the 51st most popular boys’ name in the mid-1990s. Also, Taylor still ranks, and is still 18% male, while Kim virtually disappeared. So maybe the dynamic is a little different now.

Anyway, I love the idea that Taylor Dayne killed Taylor, because she isn’t even a real Taylor — she was born Leslie Wunderman (were any other Jews nominated for R&B vocalist Grammys?), and only chose the name Taylor in 1987, as it was already spiking upward. It also raises an issue relevant to the question of nonbinary-supporting parents: name changes. If gender identities are increasingly fluid, maybe names will be, too. In addition to being less sex-typed, names may also become less permanent. Just a thought.

* In the original version of this post I mistakenly wrote that 20% of Charleses were girls; it’s actually 0.2% (I read .19 as a proportion instead of a percent).

Data and code for this analysis are on the Open Science Framework here.



Fertility rate implications explained

(Sorry for the over-promising title; thanks for the clicks.)

First where we are, then projections, with figures.

For background: Caroline Hartnett has an essay putting the numbers in context. Leslie Root has a recent piece explaining how these numbers are deployed by white supremacists (key point: over-hyping the downside of lower fertility rates has terrible real-world implications).


The National Center for Health Statistics released the 2018 fertility numbers yesterday, showing another drop in birth rates, and the lowest fertility since the Baby Boom. We are continuing a historical process of moving births from younger to older ages, which shows up as fewer births in the transition years. I illustrate this each year by updating this figure, showing the relative change in birth rates by age since 1989:


Historically, postponement was associated with reduction in lifetime births — which is what really matters for population trends. When people were having lots of children, any delay reduced the total number. With birth rates around two per woman, however, there is a lot more room for postponement — a lot of time to get to two. (At the societal level, both reduction and postponement are generally good for gender equality, if women have good health and healthcare.)

This means that drops in what we demographers call “period” fertility (births right now) are not the same as drops in “completed” fertility (births in a lifetime), or falling population in the long run. The period fertility measure most often used, the unfortunately named total fertility rate (TFR), is often misunderstood as an indicator of how many children women will have. It is actually how many births they are having right now, expressed in lifetime terms (I describe it in this video, with instructions).
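The arithmetic behind the TFR is simple enough to show: it sums the period’s age-specific birth rates, as if one woman aged through all of them at this year’s rates. A sketch with rates that are illustrative round numbers near the scale of recent U.S. figures, not the actual NCHS numbers:

```python
# Age-specific birth rates per 1,000 women, in 5-year groups. These are
# illustrative round numbers near the scale of recent U.S. rates, not
# the actual NCHS figures.
asfr_per_1000 = {
    "15-19": 17, "20-24": 68, "25-29": 95, "30-34": 100,
    "35-39": 53, "40-44": 12, "45-49": 1,
}

# Each group covers 5 single-year ages, so weight by 5; divide by 1,000
# to express births per woman rather than per 1,000 women.
tfr = 5 * sum(asfr_per_1000.values()) / 1000
print(f"TFR = {tfr:.2f} births per woman")
```

Note what this is: a snapshot of this year’s rates expressed in lifetime terms, not a forecast of any cohort’s completed fertility.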

Lawrence Wu and Nicholas Mark recently showed that despite several periods of below “replacement” fertility (in terms of TFR), no U.S. cohort of women has yet finished their childbearing years with fewer than two births per woman. Here is the completed fertility of U.S. women, by year of birth, as recorded by the General Social Survey. By this account, women born in the early 1970s (in their late forties as of 2018) have had an average of 2.3 children.


Whether our streak of over-two completed fertility persists depends on what happens in the next few years (and of course on immigration, which I’ll get to).

Last year at this time I summed up the fertility situation and concluded, “sell stock now,” because birth rates fell for women at all ages except over 40. That kind of postponement, I figured, based on history, reflected economic uncertainty and thus was an ill omen for the economy. The S&P 500 is up 5% since then, which isn’t bad as far as my advice goes. And I’m still bearish based on these birth trends (I bet I’ll be right before fertility increases).


It is very hard to have an intuitive sense of what demographic indicators mean, especially for the future. So I’ve made some projections to show the math of the situation, to get the various factors into scale. My point is to show what the current (or future) birth rates imply about future growth, and the relative role of immigration.

These projections run from 2016 to 2100. I made them using the Census Bureau’s Demographic Analysis and Population Projection System software, which lets me set the birth, death, and migration rates.* I started with the 2016 population because that’s the most recent set of life tables NCHS has released for mortality. Starting in 2018 I apply the current age-specific birth rates.

First, the most basic projection. This is what would happen if birth rates stayed the same as those in 2018 and we completely cut off all immigration (Projection A), or if we had net migration running at the current level of just under +1 million each year, using Census estimates for age and sex of the migrants (Projection B).


From the 2016 population of 323 million, if the birth rates by age in 2018 were locked in, the population would peak at 329 million in 2029 and then start to decline, reaching 235 million by 2100. However, if we maintain current immigration levels (by age and sex), the population would keep growing till 2066 before tapering only slightly. (Note this assumes, unrealistically, that the immigrants and their children have the same birth rates as the current population; they have generally been higher.) This is the most important bottom line: there is no reason for the U.S. to experience population decline, with even moderate levels of immigration, and assuming no rebound in fertility rates. Immigration rates do not have to increase to maintain the current population indefinitely.
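The no-decline point has simple math behind it. With a constant natural-decrease rate r below zero and a fixed net inflow M, population converges to M/|r| rather than falling forever. A toy sketch of that logic (round numbers of my own, not the DAPPS output):

```python
# Sketch: why constant immigration can offset sub-replacement fertility.
# With a natural-change rate r < 0 and a fixed net inflow m, population
# converges to m / |r| instead of declining forever. All numbers are
# round illustrations, not the DAPPS projections described above.
def project(pop, r, m, years):
    """Project population forward: natural change at rate r, plus m added per year."""
    for _ in range(years):
        pop = pop * (1 + r) + m
    return pop

start = 323.0   # millions, roughly the 2016 population
r = -0.003      # toy 0.3% annual natural decrease
m = 1.0         # about +1 million net migrants per year

no_migration = project(start, r, 0.0, 84)    # 2016 -> 2100
with_migration = project(start, r, m, 84)
print(f"2100, no immigration:   {no_migration:.0f}M")
print(f"2100, with immigration: {with_migration:.0f}M")
print(f"long-run stable level:  {m / abs(r):.0f}M")
```

Real cohort-component projections track single years of age and sex with observed life tables, but the convergence logic is the same.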

Note I also added the percentage of the population over age 65 on the figure. That number is about 16% now. If we cut off immigration and maintain current birth rates, it would rise to 25% by the end of the century, increasing the need for investment in old age stuff. If we allow current migration to continue, that growth is less and it only reaches 23%. This is going up no matter what.

To show the scale of other changes that we might expect — again, not predictions — I added a few other factors. Here are the same projections, but adding a transition to higher life expectancies by 2080 (using Japan’s current life tables; we can dream). In these scenarios, population decline is later and slower (and not just at older ages, since Japan also has lower child mortality).


Under these scenarios, with rising life expectancies, the old population rises more, to between 27% and 29%. Generally experts assume life expectancies will rise more than this, but that’s the assumed direction (now, unbelievably, in doubt).

Finally, I’ve been assuming birth rates will not fall further. If what we’re seeing now is fertility postponement, we wouldn’t expect much more decline. But what if fertility keeps falling? Here is what you get with the assumptions in Projection D, plus total fertility rates falling to 1.6, either by 2030 or 2050. As you can see, in the 1.6 to 1.8 range, the effects on population size aren’t large on this time scale.


Conclusion: We are on track for slowing population growth, followed by a plateau or modest decline, with population aging, by the end of the century, and immigration is a bigger question than fertility rates, for both population growth and aging.


In a global context where more people want to come here than want to leave (to date), worrying about low birth rates tends to lend itself to myopic, religious, or racist perspectives which I don’t share. I don’t think American culture is superior, whites are in danger of extinction, or God wants us to have more children.

I do not agree with Dowell Myers, who was quoted yesterday as saying, “The birthrate is a barometer of despair.” I hold that view even as some people are having fewer children than they want, or delaying childbearing when they would rather not. In the most recent cohort to finish childbearing, 23% gave an “ideal number of children for a family to have” that was greater than the number they had, and that number has trended up, as you can see here:


Is this rising despair? As individuals, people don’t need to have children any more. Ideally, they have as many as they want, when they want, but they are expensive and time consuming and it’s not surprising people end up with fewer than they think “ideal.” Not to be crass about it, but I assume the average person also has fewer boats than they consider ideal.

And how do we know what is the right level of fertility for the population? As Marina Adshade said on Twitter, “Did women actually have a desire for more children in the past? Or did they simply lack the bargaining power and means to avoid births?”

However, to the extent that low birth rates reflect frustrated dreams, or fear and uncertainty, or insufficient support for families with children, of course those are real problems. But then let’s name those problems and address them, rather than trying to change fertility rates or grow the population, which is a policy agenda with a very bad track record.

* I put the DAPPS file package I created on the Open Science Framework, here. If you install DAPPS you can open this and look at the projections output, with graphs and tables and population pyramids.



Do rich people like bad data tweets about poor people? (Bins, slopes, and graphs edition)

Almost 2,000 people retweeted this from Brad Wilcox the other day.


Brad shared the graph from Charles Lehman (who noticed later that he had mislabeled the x-axis, but that’s not the point). First, as far as I can tell the values are wrong. I don’t know how they did it, but when I look at the 2016-2018 General Social Survey, I get 4.3 average hours of TV for people in the poorest families, and 1.9 hours for the richest. They report higher highs (looks like 5.3) and lower lows (looks like 1.5). More seriously, I have to object to drawing what purports to be a regression line as if those are evenly spaced income categories, which makes it look much more linear than it is.

I fixed those errors — the correct values, and the correct spacing on the x-axis — then added some confidence intervals, and what I get is probably not worth thousands of self-congratulatory woots, although of course rich people do watch less TV. Here is my figure, with their line (drawn in by hand) for comparison:


Charles and Brad’s post got a lot of love from conservatives, I believe, because it confirmed their assumptions about self-destructive behavior among poor people. That is, here is more evidence that poor people have bad habits and it’s just dragging them down. But there are reasons this particular graph worked so well. First, the steep slope, which partly results from getting the data wrong. And second, the tight fit of the regression line. That’s why Brad said, “Whoa.” So, good tweet — bad science. (Surprise.) Here are some critiques.
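The recomputation behind my corrected figure (group means with 95% confidence intervals) takes only a few lines. A sketch with invented rows standing in for the GSS extract:

```python
import math
from collections import defaultdict

respondents = [   # (income_bin, tv_hours): invented rows, not the GSS
    (1, 5), (1, 4), (1, 6), (1, 3),
    (2, 3), (2, 2), (2, 4), (2, 3),
    (3, 2), (3, 1), (3, 3), (3, 2),
]

groups = defaultdict(list)
for bin_, hours in respondents:
    groups[bin_].append(hours)

for bin_ in sorted(groups):
    xs = groups[bin_]
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))   # sample SD
    se = sd / math.sqrt(n)                                       # standard error
    lo, hi = mean - 1.96 * se, mean + 1.96 * se                  # 95% CI
    print(f"income bin {bin_}: mean={mean:.1f} hours, 95% CI [{lo:.1f}, {hi:.1f}]")
```

Plotting those means at the dollar midpoints of the categories, rather than at evenly spaced positions, fixes the x-axis problem too.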

First, this is the wrong survey to use. Since 1975, GSS has been asking people, “On the average day, about how many hours do you personally watch television?” It’s great to have a continuous series on this, but it’s not a good way to measure time use because people are bad at estimating these things. Also, GSS is not a great survey for measuring income. And it’s a pretty small sample. So if those are the two variables you’re interested in, you should use the American Time Use Survey (available from IPUMS), in which respondents are drawn from the much larger Current Population Survey samples and asked to fill out a time diary. On the other hand, GSS would be good for analyzing, for example, whether people who believe the Bible is “the actual word of God and is to be taken literally, word for word” watch TV more than those who believe it is “an ancient book of fables, legends, history, and moral precepts recorded by men” (yes, they do: about an hour more). Or looking at all the other social variables GSS is good for.

On the substantive issue, Gray Kimbrough pointed out that the connection between family income and TV time may be spurious, and is certainly confounded with hours spent at work. When I made a simple regression model of TV time with family income, hours worked, age, sex, race/ethnicity, education, and marital status (which again, should be done better with ATUS), I did find that both hours worked and family income had big effects. Here they are from that model, as predicted values using average marginal effects.
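To illustrate the kind of predicted-values exercise described above (a sketch in the same spirit, not my actual Stata model), here is a linear fit on synthetic data, with TV hours then predicted across incomes while hours worked is held at its mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
income = rng.uniform(0, 150, n)          # family income in $1000s (toy)
hours_worked = rng.uniform(0, 60, n)     # weekly work hours (toy)
# "True" toy process: TV time falls with both income and work hours
tv = 5 - 0.02 * income - 0.04 * hours_worked + rng.normal(0, 1, n)

# Fit by ordinary least squares
X = np.column_stack([np.ones(n), income, hours_worked])
beta, *_ = np.linalg.lstsq(X, tv, rcond=None)

# Predicted TV hours across incomes, hours worked held at its mean
grid = np.array([10.0, 50.0, 100.0, 140.0])
Xpred = np.column_stack([np.ones(4), grid, np.full(4, hours_worked.mean())])
for inc, yhat in zip(grid, Xpred @ beta):
    print(f"income ${inc:.0f}k: predicted {yhat:.1f} TV hours/day")
```

With both predictors in the model, each coefficient is net of the other, which is the point of the exercise; it still says nothing about cause and effect.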


The banal observation that people who spend more time working spend less time watching TV probably wouldn’t carry the punch. Anyway, neither resolves the question of cause and effect.

Fits and slopes

On the issue of the presentation of slopes, there’s a good lesson here. Data presentation involves trading detail for clarity. And statistics have both a descriptive and an analytical purpose. Sometimes we use statistics to present information in simplified form, which allows better comprehension. We also use statistics to discover relationships we couldn’t otherwise see — such as multivariate relationships that you can’t discern visually. The analyst and communicator has to choose wisely what to present. A good propagandist knows what to manipulate for political effect (a bad one just tweets out crap until they get lucky).

Here’s a much less click-worthy presentation of the relationship between family income and TV time. Here I truncate the y-axis at 12 hours (cutting off 1% of the sample), translate the binned income categories into dollar values at the middle of each category, and then jitter the scatterplot so you can see how many points are piled up in each spot. The fitted line is Stata’s median spline, with 9 bands specified (so it’s the median hours at the median income in 9 locations on the x-axis). I guess this means that, at the median, rich people in America watch about an hour of TV per day less than poor people, and the action is mostly under $50,000 per year. Woot.


Finally, a word about binning and the presentation of data (something I’ve written about before, here and here). We make continuous data into categories all the time, starting from measurement. We usually measure age in years, for example, although we could measure it in seconds or decades. Then we use statistics to simplify information further, for example by reporting averages. In the visual presentation of data, there is a particular problem with using averages or data bins to show relationships — you can show slopes that way nicely, but you run the risk of making relationships look more closely correlated than they are. This happens in the public presentation of data when analysts are showing something of their work product — such as a scatterplot with a fitted line — to demonstrate the veracity of their findings. When they bin the data first, this can be very misleading.

Here’s an example. I took about 1,000 men from the GSS and compared their age and income. Between the ages of 25 and 59, older men have higher average incomes, but the fit is curved, with a peak around 45. Here is the relationship, again using jittering to show all the individuals, with a linear regression line. The correlation is .23.

That might be nice to look at, but it’s hard to see the underlying relationship. It’s hard to even see how the fitted line relates to the data. So you might reduce it by showing the average income at each age. By pulling the points together vertically into average bins, this shows the relationship much more clearly. However, it also makes the relationship look much stronger. The correlation in this figure is .65. Now the reader might think, “Whoa.”

Note this didn’t change the slope much (it still runs from about $30k to $60k); it just put all the dots closer to the line. Finally, here it is pulling the averages together in horizontal bins, grouping the ages in fives (25-29, 30-34 … 55-59). The correlation shown here is .97.


If you’re like me, this is when you figured out that reducing this to two dots would produce a correlation of 1.0 (as long as the dots aren’t exactly level).
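The whole binning effect is easy to reproduce with simulated data: the underlying relationship stays the same, but correlations computed on bin averages climb toward 1. A sketch with synthetic age and income values, not the GSS:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
age = rng.integers(25, 60, n)                                  # ages 25-59
income = 30000 + 1000 * (age - 25) + rng.normal(0, 15000, n)   # noisy upward trend

# Correlation on the raw individual points
r_raw = np.corrcoef(age, income)[0, 1]

# Average income at each single year of age, then correlate the averages
ages = np.unique(age)
age_means = np.array([income[age == a].mean() for a in ages])
r_binned = np.corrcoef(ages, age_means)[0, 1]

# Five-year bins (25-29, 30-34, ... 55-59): fewer, tighter dots
labels = (ages - 25) // 5
bin_x = np.array([ages[labels == b].mean() for b in np.unique(labels)])
bin_y = np.array([age_means[labels == b].mean() for b in np.unique(labels)])
r_5yr = np.corrcoef(bin_x, bin_y)[0, 1]

# Binned correlations come out much higher than the raw correlation
print(f"raw r = {r_raw:.2f}, yearly-bin r = {r_binned:.2f}, 5-year-bin r = {r_5yr:.2f}")
```

Averaging within bins removes the vertical scatter before the correlation is computed, which is exactly why a binned scatterplot overstates the fit.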

To make good data presentation tradeoffs requires experimentation and careful exposition. And, of course, transparency. My code for this post is available on the Open Science Framework here (you gotta get the GSS data first).



Let’s raise the legal age of marriage in Maryland

Today I sent the following letter to the Maryland House Judiciary Committee, which is scheduled to hold a hearing on these bills tomorrow. Under current law in Maryland, marriage is permitted as young as age 15 with parental consent and evidence of pregnancy or childbirth, and age 16-17 with one or the other, and these exceptions are granted by county clerks rather than judges. By my calculations, from 2008 to 2017, based on the American Community Survey, the annual marriage rate for girls ages 15-16 was 5 per 1000 in Maryland, behind only Hawaii, Nevada, and West Virginia. HB 855 would raise the age at marriage to 18, while HB 1147 would establish an emancipated minor status, requiring review by a judge, under which 17-year-olds could marry. For more on the effort to end child marriage in the U.S., visit the Tahirih Justice Center site.

March 6, 2019

To the House Judiciary Committee:

I write in support of Maryland House Bill 855, concerning age requirements for marriage; and House Bill 1147, concerning the emancipation of minors.

My relevant background

  • I am a Professor of Sociology, and family demographer, at the University of Maryland, College Park, where I have been on the faculty since 2012. I also earned my PhD at the University of Maryland, College Park, in 1999, and I live in Silver Spring.
  • I have written two books and many peer-reviewed articles on family sociology, including on topics related to marriage and divorce, family structure, gender inequality, health and disability, infant mortality, adoption, race and ethnicity, and the division of labor.
  • I have served as a consultant to the U.S. Census Bureau on the measurement of family structure, and testified before Congress on gender discrimination.

My support of the bills

In general, the rise of the age at marriage and childbearing in the U.S. has been a positive development for women and children, allowing mothers to devote more years of early adulthood to education and career development, which is beneficial to both adults and their children.

Very early marriage in particular is detrimental to women’s opportunity to finish high school. More urgently, research and service work shows that very early marriage is usually unwanted, coerced, or forced. Very young women should not be expected to protect themselves legally or socially from such impositions, which are usually from older men and dominant family members. Very early marriage often follows statutory rape or other sexual assault, compounding rather than mitigating the harms of these crimes against children. Rather than protect a young woman, very early marriage instead provides protection from scrutiny for her abuser(s), and makes state intervention on her behalf all the more difficult to accomplish in the following years. The privacy and discretion we bestow upon families has benefits, of course, but it also makes the family a dangerous place for the victims of abuse.

Research, including my own, unequivocally shows that very early marriage leads to the highest rates of divorce. I have written several papers on divorce rates in the United States (see references). For illustration, here I used the same method of analysis, and present only the relationship between age at marriage and incidence of divorce. As you can see from the figure, divorce rates are highest by far – estimated at 2.5% per year – for women who married before age 18. This is about twice as high as divorce rates for those who marry in their 30s, for example. (These estimates hold constant other factors; data and code are available here.) The evidence is very strong.


I only reluctantly support increasing state restrictions on women’s freedom with regard to family choices, but in the case of marriage before adulthood I see the restriction as a protection from the exploitative behavior of others, rather than an imposition on young women’s rights.

At present in Maryland, exceptions allowing marriage before age 18 – based on pregnancy and/or parental consent – are granted without adequate legal review. Together, HB 855 and HB 1147 would set the minimum age at marriage in Maryland to 18, with an exception only for court emancipated minors of age 17. This would improve the state’s protection of young women from unwanted, coerced, forced, or ill-advised marriages without unduly restricting the freedom to marry for younger women (age 17), who may be emancipated by a court after a direct application and careful review of circumstances.

I urge your support for these bills. I would be happy to provide further information or testimony at your request.


Philip N. Cohen


Cohen, Philip N. 2015. “Recession and Divorce in the United States, 2008-2011.” Population Research and Policy Review 33(5):615-628.

Cohen, Philip N. 2018. “The Coming Divorce Decline.” SocArXiv. November 14. To be presented at the Population Association of America meetings, 2019.



About Charles Murray: Is a White man’s cross burning as disqualifying as blackface?

“People are saying” that we need to think about how to interpret, and possibly punish, past racism, relative to current racism. This is as much about the meaning of “past” as it is about the meaning of “racism.” It’s about individual suspected racists — specifically leading Virginia Democrats — and about the intersection of individual and institutional racism, as preserved and displayed in yearbooks, as in this photo of the University of Illinois KKK chapter in 1924, which included representation from each fraternity on campus:

Politicians are a special case, because their authority is in theory dependent on the legitimating consent of the governed. On the other hand are regular individuals, for whom being labeled a racist is among the harshest reputational penalties we have. More important than individuals is how they add up to groups, organizations, and institutions.

Then there are powerful individuals representing institutional interests, such as Charles Murray, who spent decades on the dole of non-profit organizations funded by the foundations of the rich (in other words, you). He built an extremely influential career blaming poverty on inborn deficiencies (“born lazy“) among the Black poor and providing scientific cover for dismantling government support for meeting their needs.

Why burn that cross

In the grand scheme maybe it doesn’t matter whether Charles Murray (now emeritus, at age 76) is, or was, racist in his heart; his work was racist in its effects. (White supremacist terrorist Dylann Roof parroted Murray in his rationale for murdering Black people in church.) However, he and his defenders have always impugned those who assign racist motives to his work. He clearly believes in a biological racial hierarchy in genetic intelligence, which is an old-fashioned definition of racism. The new scientific racists, a coalition that includes Murray, defend themselves from that charge by claiming it’s not racist if it’s true, and it has fallen to human geneticists to debunk their claims. The charge of racism has always weakened the legitimacy of Murray and his compatriots, and narrowed their reach. As I think it should — you don’t need to know what was in his heart to think his work was terrible, but it’s relevant.

Shawn Fremstad reminded me that Murray and his friends burned a cross in 1960, which seems like a good thing to dredge up during racist-yearbook week. Here is the very cursory story, in a 1994 New York Times profile for the release of his book The Bell Curve.

While there is much to admire about the industry and inquisitiveness of Murray’s teen-age years, there is at least one adventure that he understandably deletes from the story — the night he helped his friends burn a cross. They had formed a kind of good guys’ gang, “the Mallows,” whose very name, from marshmallows, was a play on their own softness. In the fall of 1960, during their senior year, they nailed some scrap wood into a cross, adorned it with fireworks and set it ablaze on a hill beside the police station, with marshmallows scattered as a calling card.

[Denny] Rutledge recalls his astonishment the next day when the talk turned to racial persecution in a town with two black families. “There wouldn’t have been a racist thought in our simple-minded minds,” he says. “That’s how unaware we were.”

A long pause follows when Murray is reminded of the event. “Incredibly, incredibly dumb,” he says. “But it never crossed our minds that this had any larger significance. And I look back on that and say, ‘How on earth could we be so oblivious?’ I guess it says something about that day and age that it didn’t cross our minds.”

This is a very incomplete story, which doesn’t even tell us who first told the tale of the cross burning, or what reason that person gave for it, or how they picked the location. But reading this, my sociological opinion is that “dumb” is likely a dodge; and my sociological question is, if they had no idea of the “larger significance” of cross burning, in 1960, why do it? There were lots of dumb things to do. My sociological approach to this question is to investigate the context in which this cross burning occurred, both in the social environment and in Murray’s life course trajectory.

The fall of 1960, the beginning of Murray’s senior year of high school, was when he would have been applying to Harvard, where he enrolled in 1961 (as a history major). It was also a time when cross burning was in the news a lot, including in Iowa.

The 1960 Census recorded 15,000 people in the idyllic cross-burning town of Newton, where Murray’s father was a Maytag executive. And there were only 22 Black people recorded in Jasper County (where Newton is the principal city). Does this mean race was not an issue in the minds of Murray’s gang? I’m very doubtful. Blacks were a noticeable, and noticeably growing, presence in Iowa cities, including Des Moines, just 30 miles from Newton. (The new Interstate 80 hadn’t connected Newton and Des Moines yet, but sections of it were already built west of Des Moines, and it was penciled in on the map.) During the 1950s the state’s nonwhite population increased about 70%, from 17,000 to 29,000. In fact, the 1950s were the biggest decade for Black migration to Iowa. Almost all of those migrants lived in urban areas, including Des Moines. By 1960 the city had 209,000 people, of whom 10,700 (5%) were nonwhite (mostly Black).

So, do you think a 1960 White executive’s family would have heard anything about the nonwhite population of the nearest city nearly doubling in the previous decade? Did aw-shucks Murray and his pool hall buddies know about all that big city stuff?

We have some other evidence from which to speculate. Murray traveled around the state, and even the country, in his high school years. He was on Newton High School’s “Crack Debate Team” that won several statewide tournaments, including one at the University of Iowa in Iowa City in April 1960. And that summer the debate team roadtripped to California, courtesy of the Chamber of Commerce, for a national tournament. (What did they debate, anyway?)

Picture of the Newton debate team, including Charles Murray, in 1960. Des Moines Register, June 15, 1960.

So in 1960 Murray was the son of an executive, and a debate team champion, traveling the state and country, and applying to Harvard, while living in the next county over from a city with a booming Black population. Oh, and it was 1960: the year civil rights protesters staged sit-ins in dozens of cities across the south, from February through April.

By my count there were 55 articles in the Des Moines Register/Tribune archives mentioning cross burning during his high school years, 1955 to 1960. In fact, there were a number of stories about an Iowa City incident, where in April 1960 (yes, that April 1960), eight Beta Theta Pi frat brothers burned a cross on the lawn of the assistant director of student affairs, whose office was “instrumental in the effort to remove race restrictions from the constitutions of several fraternities at the university.” After briefly suspending the men, the university declared it a “prank” and reinstated them on probation:

Clips from Chicago Daily Tribune and Des Moines Tribune, April and May 1960

Maybe it was a pure coincidence having nothing to do with race that the eight frat brothers burned a cross in their “prank.” But why a cross? Also, it was a few weeks after students picketed stores right there in Iowa City to support the sit-ins.

Washington Times Herald article showing a rash of cross burnings in the South, and mentioning picketers supporting the sit-ins in Iowa City.

I see a possible parallel between the frat boys and the cross burning by Murray’s marshmallow gang. The story is that they had no idea it was about race, and decades later it’s the story they still recite. Some key White adults helped keep the narrative from getting out of hand. I’d bet the incident didn’t make it into Murray’s Harvard admissions packet, either in his personal essay or in the form of a criminal record, even though there was “talk” in town the next day.

And they went on about their lives. Murray isn’t an elected office holder, and may be retired. Maybe it’s water under the racist bridge.

Incidentally, I noticed that one of the University of Iowa cross-burning frat boys, Joel E. Swanson, seems to have gone on to become a state district court judge. (I don’t know what happened to their disorderly conduct charge.) He was a freshman in 1960, got his law degree at the university in 1967 while serving in the National Guard, and worked as a lawyer in his home town of Lake City, eventually becoming a judge before retiring in 2012. Also, they have recipes.

