In response to yesterday’s post, “This ‘Supporting Healthy Marriage,’ I do not think it means what you think it means,” Phil and Carolyn Cowan posted a comment, which I thought I should elevate to a new post.
Here is their comment, in full, with my responses.
Since the issue here is one of perspective in reporting, we (Phil Cowan and Carolyn Cowan) need to say that we were two of a group of academic consultants to the Supporting Healthy Marriage Project.
Thank you for acknowledging that. I noticed that Alan Hawkins, in his comment on the new study for Brad Wilcox’s blog, says he has “published widely on the effectiveness of marriage and relationship education programs,” but doesn’t say who paid for that voluminous research (with its oddly consistent positive findings). More about Hawkins below.
Social scientists who want to inform the public about the results of an important study should actually inform the public about the results, not just give examples that support the author’s point of view.
Naturally, which is why I publicized the study, provided a link to it in full, and provided the examples quoted below.
It’s true as you report that there were no differences in the divorce rate between group participants and controls (we can debate whether affecting the divorce rate would be a good outcome), and that… [quoting from the original post]
“…there were no differences in the divorce rate between group participants and controls and “there were small but sustained improvements in subjectively-measured psychological indicators. How small? For relationship quality, the effect of the program was .13 standard deviations, equivalent to moving 15% of the couples one point on a 7-point scale from “completely unhappy” to “completely happy.” So that’s something. Further, after 30 months, 43% of the program couples thought their marriage was “in trouble” (according to either partner) compared with 47% of the control group. That was an effect size of .09 standard deviations. So that’s something, too. Many other indicators showed no effect. However, I discount even these small effects since it seems plausible that program participants just learned to say better things about their marriages. Without something beyond a purely subjective report — for example, domestic violence reports or kids’ test scores — I wouldn’t be convinced even if these results weren’t so weak.”
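As a back-of-envelope check on that conversion, here is a sketch of the arithmetic (this is my illustration, not the report’s method; the scale standard deviation of roughly 1.15 points is back-solved so that the two reported figures agree, and is not taken from the report):

```python
# Illustrative check: how a standardized effect translates into the
# "share of couples moving one point" framing used above.
# ASSUMPTION: the 7-point scale has an SD of about 1.15 points, which is
# the value implied by pairing "0.13 SD" with "15% moving one point".

def share_moving_one_point(effect_size_sd: float, scale_sd: float) -> float:
    """Translate a standardized mean effect into the equivalent share of
    couples each moving exactly one raw point.

    A mean shift of (effect_size_sd * scale_sd) raw points is
    arithmetically identical to that fraction of couples moving up one
    point while everyone else stays put."""
    return effect_size_sd * scale_sd

print(f"{share_moving_one_point(0.13, 1.15):.2f}")  # prints 0.15
```

The point of the sketch is only that “0.13 standard deviations” and “15% of couples moving one point” are two descriptions of the same small mean shift, not two separate findings.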
1. A slight uptick in marital satisfaction. The program moved 15% of the couples up one point. But more than 50 studies show that, without intervention, marital quality on average goes down. And it isn’t simply that 15% of the couples moved up one point. Since this is the mean result, some moved less (or down) and some moved up more. Some also moved up from a lower point to relationship tolerability.
It is interesting that, with so many studies showing that marital quality goes down without intervention, this is not one of them. That is important because of what it implies about the sample. Quoting from the report now (p. 32):
At study entry, a fairly high percentage (66 percent) of both program and control group couples said that they had recently thought their marriage was in trouble. This percentage dropped across both research groups over time. This finding is contrary to much of the literature in the area, which generally suggests that marital distress tends to increase and that marital quality tends to decline over time. The decline in marital distress was initially steeper for program group members, and the difference between the program and control groups was sustained over time. This suggests that couples may have entered the program at low points in their relationships.
Back to the Cowans:
While the effects were small (but statistically reliable), they were hardly trivial. For instance, two years after the program, about 42% of SHM couples reported that their marriage had been in trouble recently compared to about 47% of control-group couples. That 5% difference means nearly 150 more SHM couples than control-group couples felt that their marriage was solid.
There are several problems here.
First, this paragraph appears verbatim in Hawkins’ post as well. I’m not going to speculate about how the same paragraph ended up in two places — there are some obvious possibilities — but clearly someone has not communicated the origin of this passage.
Second, this is not the right way to use “for instance.” This “for instance” refers to the only outcome of any substantial size in the entire study. It is not an “instance” of some larger pool of non-trivial results; it is the outlier. (And “solid” is not the same as not saying the marriage is “in trouble.”)
Third, this phrase is just wrong: “small (but statistically reliable)… hardly trivial.” Most of the positive outcomes were exactly so small as to be trivial, and exactly not statistically reliable. Quoting from the report again, on coparenting and parenting (p. 39):
Table 9 shows that, of the 10 outcomes examined, only three impacts are statistically significant. The magnitudes of these impact estimates are also very small, with the largest one having an effect size of 0.07. These findings did not remain statistically significant after additional statistical tests were conducted to adjust for the number of outcomes examined. In essence, the findings suggest that there is a greater than 10 percent chance that this pattern of findings could have occurred if SHM had no effect on coparenting and parenting.
And quoting from the report again, on child outcomes (p. 41):
Table 10 shows that the SHM program had statistically significant impacts on two out of four child outcomes, but the impacts are extremely small. SHM improved children’s self-regulatory skills by 0.03 standard deviation, and it reduced children’s externalizing behavior problems by 0.04 standard deviation. … The evidence of impacts on child outcomes is further weakened by the results of subsequent analyses that were conducted to adjust for the number of outcomes examined. These findings suggest that there is a greater than 10 percent chance that this pattern could have occurred if SHM had no effect on child outcomes.
In other words, trivial effects, and not statistically reliable.
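The report’s caveat about “the number of outcomes examined” is a multiple-comparisons adjustment. A minimal illustration of the logic, using a simple Bonferroni correction and made-up p-values (this is not the report’s actual procedure or data), shows how three nominally significant results out of ten can fail to survive such an adjustment:

```python
# Illustrative sketch of a multiple-comparisons adjustment.
# The p-values below are invented for illustration; the report used its
# own adjustment procedure on its own data.

def bonferroni_significant(p_values, alpha=0.05):
    """Return which tests survive a Bonferroni correction: each raw
    p-value must beat alpha divided by the number of tests run."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

raw_p = [0.03, 0.04, 0.045, 0.20, 0.35, 0.50, 0.60, 0.70, 0.80, 0.90]

print(sum(p < 0.05 for p in raw_p))        # prints 3: three look significant
print(sum(bonferroni_significant(raw_p)))  # prints 0: none survive (0.05/10 = 0.005)
```

With ten outcomes tested, a handful of results under the conventional .05 threshold is roughly what chance alone would produce, which is what the report means by “a greater than 10 percent chance that this pattern of findings could have occurred if SHM had no effect.”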
2. You say that “Without something beyond a purely subjective report…I wouldn’t be convinced even if these results weren’t so weak.” You were content to focus on two self-report measures. At the 18-month follow-up, program group members reported higher levels of marital happiness, lower levels of marital distress, greater warmth and support, more positive communication skills, and fewer negative behaviors and emotions in their interactions with their spouses, relative to control group members. They also reported less psychological abuse (though not less physical abuse). These effects continued at the 36-month follow-up [should be 30-month -pnc]. Observations of couple interaction (done only at 18 months) indicated that the program couples, on average, showed more positive communication skills and less anger and hostility than the control group. Because the quality of these partner interactions was coded by observers blind to the experimental status of the participants, the effects, though small, mean that not only do the self-reports suggest some positive effects, but observers could also identify differences between couples in the intervention and control groups that we know are important to couple and child well-being.
I am confused by this. The description of the variables for communication skills and warmth (p. 67) describes them as answers to survey questions, not observations (e.g., “We are good at working out our differences”). I’m looking pretty hard and not seeing what is described here. The word “anger” is not in the report, and the word “hostility” only occurs with regard to parents’ behavior toward children. Someone please point me to the passage that contradicts me, if there is one.
3. When all the children were considered as one group, regardless of age, there were no effects on child outcomes, but there WERE significant effects on younger children (age 2-4), compared with children 5 to 8.5 and children 8.5 to 17. The behaviors of the younger children of group participants were reported to be – and observed to be – more self-regulated, less internalizing (anxious, depressed, withdrawn), and less externalizing (aggressive, non-cooperative, hyperactive). It seems reasonable to us that a 16-week intervention for parents might not be sufficient to reduce negative behavior in older children.
On the younger children, I discounted that because the report said (p. 42): “While the findings for the youngest children are promising, there is some uncertainty because the pattern of results is not strong enough to remain statistically significant once adjustments are made to account for the number of outcomes examined.”
4. For every positive outcome we have cited, you or any critic can find another measure that shows that the intervention had no effect. That’s part of our point here. Rather than yes or no, what we have is a complicated series of findings that lead to a complicated series of decisions about how best to be helpful to families.
That’s just not an accurate description. There are many null findings for each positive finding, and the positive findings themselves are either small, trivially small, or not statistically reliable.
5. Several times you suggest that giving couples the $9,000 per family (the program cost) would do better. Do you have evidence that giving families money increases, or at least maintains, family relationship quality? Is $9,000 a lot? Compared to what? According to the Associated Press, New York City’s annual cost per jail inmate was $167,731 last year. In other words, we are already spending billions to serve families when things go wrong, and some of the small effects of the marital intervention could be thought of as preventive – especially at earlier stages of children’s development.
At the end of your blog, you rightly suggest a study in which giving families money is pitted in a random trial against relationship interventions. That’s a good idea, but it suggests more research. Furthermore, why must we always discuss programs in terms of yes or no, good or bad? What if we gave families $9,000 AND provided help with their relationships, and tested for the effects of combined relationship and cash assistance?
We have lots of evidence that richer couples are less likely to divorce, of course. I don’t know that giving someone $9,000 would help with relationship quality, but I’m guessing it would at least help pay the rent or pay for some daycare.
It’s important to acknowledge that we’re not talking about research. The marriage promotion program is coming out of the welfare budget, not NIH or NSF. This study is a small part of it. Hundreds of millions of dollars have been spent on this, of which the studies account for a small amount. If this boondoggle continues, and they continue to study it, then they should include the cash-control group.
6. It seems to us that as a social scientist, you would want to ask “what have we learned about helping families from this study and from other research on couple relationship education?” We would suggest that we’ve learned that the earlier Building Strong Families program for unmarried low-income families had low attendance and no positive effects. A closer reading of those reports suggests that many of the unmarried partners were not in long-term relationships and were not doing very well at the outset. Perhaps it was a long shot to offer some of them relationship help. We’ve also learned that the Supporting Healthy Marriage program for married low-income families had some small but lasting effects on both self-reported and observed measures of their relationship quality (we think that the researchers learned something from the earlier study). And, notably, we’ve learned that there seemed to be some benefits for younger children when their parents took advantage of relationship-strengthening behaviors.
We always learn something. See my comments above for why this is a stretch. I would be happy to see, and even pay for, research on what helps poor families. We already do some of that, through scientific agencies. My objection is not to the research, but to the program that it is studying, which takes money away from things we know are good.
Here is their last word — as good a defense as any for this program.
We know from many correlational studies that when parents are involved in unresolvable high-level conflict, or are cold and withdrawn from each other, parenting is likely to be less effective, and their children fare less well in their cognitive, emotional, and social development. It was not some wild government idea that improving couple relationships could have benefits for children. Couple relationship interventions have also been shown, in many studies and meta-analyses of middle-class families and more recently of low-income families, to produce benefits for the couples themselves and for their kids. This was not a government program to force marriage on poor families. The participants were already married. It was a program that offered free help because maintaining good relationships is hard for couples at any level, but low-income folks have fewer financial resources to get all kinds of help that every family needs.
We are not suggesting that strengthening family relationships alone is a magic bullet for improving the lot of poor families. But, in our experience over the past many years, it gives the parents some tools for building more productive couple and parent-child relationships, which gives both the parents and their children more confidence and hope.
What we need to learn is how to do family relationship strengthening more effectively, and how to combine that activity with other approaches, now being tried in isolated silos of government, foundations, and private agencies, in order to make life better for parents and their kids.
In our view, trumpeting the failure of Supporting Healthy Marriage by focusing on a few of the negative findings doesn’t help move us toward that goal.