Showing posts with label studies show. Show all posts

Wednesday, March 20, 2019

Does marijuana cause psychosis, and if so, should we keep it illegal?

A new study supposedly "shows that consuming pot on a daily basis and especially using high-potency cannabis increases the odds of having a psychotic episode later." Past studies have also found "that more frequent use of pot is associated with a higher risk of psychosis — that is, when someone loses touch with reality."

However, it's unclear whether this is causation or just correlation. That NPR article says: "One critique of the theory that weed contributes to psychosis risk has been that while more people are using weed worldwide, there hasn't been a corresponding rise in rates of psychosis." On the other hand, "cities with more easily available high-THC weed do have a higher rate of new diagnoses of psychosis."

I’d like to see drugs legalized, not because I want more people to do drugs, but because I want more people to feel more free to speak out about their struggles with drug addiction, the way people now feel free to talk about their addictions to things that are legal: alcohol, cigarettes, food, etc. These can all be detrimental to your health, but making people fear getting locked up if they’re open about their failings in these areas is not a good plan for making society healthier.

The current illegality of some drugs like marijuana also makes them seem cool and rebellious, and makes anyone who criticizes them seem lame and authoritarian. If drugs were legal, they'd lose some of their allure.


(Photo of woman selling cannabis in Assam, India from Wikimedia Commons.)

Friday, October 14, 2011

The statistical point that neuroscience papers get wrong half the time

Ben Goldacre explains.

The explanation does take a few dense paragraphs; Goldacre admits his own writing is going to cause some "pain" for readers, and I had to read it twice before I got it. But it's worth it for anyone interested in brain studies.

UPDATE: I posted the article to Metafilter, where a commenter named "valkyryn" takes another shot at explaining it:

Say we've got two samples, A and B. For our sample size and subject matter, if we expose it to chemical X, any detected results need to be above 20 Units (U) to be statistically significant.

We expose A to X, and we get a result of 30U. That's statistically significant.

We expose B to X, and we get a result of 15U. That isn't statistically significant.

But note that the result for A (30U) differs from the result for B (15U) by only 15U, which is below our 20U threshold. This means that while we did have a statistically significant finding for A, the difference between A and B is not statistically significant.

This significantly diminishes the value of our findings, as the math only supports a relatively unambitious claim. We want to say that A and B are different, but the math won't let us, as the difference in reaction between A and B is too small. We're left with a relatively uninteresting finding, namely that something seems to happen with A and X, but we aren't sure how much that matters.

Careers tend not to be made out of this kind of finding, and the author is implying that because scientists have an interest in careers, they're making more ambitious claims than their studies actually permit.
IN THE COMMENTS: LemmusLemmus illustrates the problem with this graph:


LemmusLemmus explains:
The result for condition A is significantly different from zero (i.e., it "is significant"), while the result for condition B is not (the confidence interval for A does not include zero, but the confidence interval for B does). However, the two confidence intervals overlap, which means that the results for conditions A and B are not significantly different from each other.
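The overlapping-intervals point lends itself to a quick numerical check. Here's a minimal Python sketch using the made-up numbers from valkyryn's example (effects of 30U and 15U, with an assumed margin of error of 20U on each estimate); none of these figures come from any real study.

```python
# Toy numbers from the example above: each estimate carries an assumed
# margin of error (confidence-interval half-width) of 20U.
import math

margin = 20.0          # half-width of each confidence interval (assumed)
a, b = 30.0, 15.0      # observed effects for conditions A and B

ci_a = (a - margin, a + margin)   # (10, 50): excludes zero
ci_b = (b - margin, b + margin)   # (-5, 35): includes zero

# The uncertainty of the difference (A - B) combines both margins in
# quadrature, so its interval is wider than either individual interval.
diff_margin = math.sqrt(margin**2 + margin**2)        # about 28.3
ci_diff = ((a - b) - diff_margin, (a - b) + diff_margin)

print(ci_a[0] > 0)     # True:  A is significantly different from zero
print(ci_b[0] > 0)     # False: B is not
print(ci_diff[0] > 0)  # False: A and B do NOT differ significantly
```

The error Goldacre describes amounts to reading the first two lines of output and skipping the third.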

Sunday, July 10, 2011

Is the death penalty "racist"?

The New York Times printed an op-ed yesterday with the headline:

Death Penalty, Still Racist and Arbitrary.
The piece begins:
LAST week was the 35th anniversary of the return of the American death penalty. It remains as racist and as random as ever.

Several years after the death penalty was reinstated in 1976, a University of Iowa law professor, David C. Baldus (who died last month), along with two colleagues, published a study examining more than 2,000 homicides that took place in Georgia beginning in 1972. They found that black defendants were 1.7 times more likely to receive the death penalty than white defendants and that murderers of white victims were 4.3 times more likely to be sentenced to death than those who killed blacks.
I don't think there's much question about that latter point. But the biggest concern about whether the death penalty is racist would not seem to be about the race of the homicide victim.

Rather, when people call the death penalty "racist," the suggestion is that a defendant who's black is more likely to be executed than a defendant who's white, if all factors other than race are essentially the same.

Is that true?

The New York Times itself published an article (in 2008) that took a much more balanced and fact-based look at this issue:
About 1,100 people have been executed in the United States in the last three decades. Harris County, Tex., which includes Houston, accounts for more than 100 of those executions. . . .

A new study to be published in The Houston Law Review this fall has found two sorts of racial disparities in the administration of the death penalty there, one commonplace and one surprising.

The unexceptional finding is that defendants who kill whites are more likely to be sentenced to death than those who kill blacks. More than 20 studies around the nation have come to similar conclusions.

But the new study also detected a more straightforward disparity. It found that the race of the defendant by itself plays a major role in explaining who is sentenced to death.

It has never been conclusively proven that, all else being equal, blacks are more likely to be sentenced to death than whites in the three decades since the Supreme Court reinstated the death penalty in 1976. Many experts, including some opposed to the death penalty, have said that evidence of that sort of direct discrimination is spotty and equivocal.

But the author of the new study, Scott Phillips, a professor of sociology and criminology at the University of Denver, found a robust relationship between race and the likelihood of being sentenced to death even after the race of the victim and other factors were held constant.

His statistics have profound implications. For every 100 black defendants and 100 white defendants indicted for capital murder in Harris County, Professor Phillips found that an average of 12 white defendants and 17 black ones would be sent to death row. In other words, Professor Phillips wrote, “five black defendants would be sentenced to the ultimate sanction because of race.”

Scott Durfee, the general counsel for the Harris County district attorney’s office, rejected Professor Phillips’s conclusions and said that district attorneys there had long taken steps to insulate themselves from knowing the race of defendants and victims as they decided whether to seek the death penalty.

“To the extent Professor Phillips indicates otherwise, all we can say is that you would have to look at each individual case,” Mr. Durfee said. “If you do that, I’m fairly sure that you would see that the decision was rational and reasonable.”

Indeed, the raw numbers support Mr. Durfee.

John B. Holmes Jr., the district attorney in the years Professor Phillips studied, 1992 to 1999, asked for the death sentence against 27 percent of the white defendants, 25 percent of the Hispanic defendants and 25 percent of the black defendants.
I agree with Durfee's statement that you'd need to look at the merits of each case. Phillips did purport to do exactly that, as the Times explained:
Professor Phillips said that the numbers suggesting evenhandedness in seeking the death penalty did not tell the whole story. Once the kinds of murders committed by black defendants were taken into consideration — terrible, to be sure, but on average less heinous, less apt to involve vulnerable victims and brutality, and less often committed by an adult — “the bar appears to have been set lower for pursuing death against black defendants,” Professor Phillips concluded.

Professor Phillips wrote about percentages and not particular cases, but his data suggest that black defendants were overrepresented in cases involving shootings during robberies, while white defendants were more likely to have committed murders during rapes and kidnappings and to have beaten, stabbed or choked their victims.

When the nature of the crime is taken into account, Professor Phillips wrote, “the odds of a death trial are 1.75 times higher against black defendants than white defendants.” Harris County juries corrected for that disparity to an extent, so that the odds of a death sentence for black defendants after trial dropped to 1.49.

Jon Sorensen, a professor of justice studies at Prairie View A&M University in Texas, said he was suspicious of Professor Phillips’s methodology.

“It’s bizarre,” Professor Sorensen said. “It starts out with no evidence of racism. Then he controls for stuff.”

Moreover, Professor Sorensen said, Professor Phillips failed to take account of other significant factors, including the socioeconomic status of the victims.
Again, I just don't see how you could ever draw a firm conclusion about any of this without looking at the specifics of each case. And even if any researcher had time to do that, they'd need to apply their personal opinions to weigh how bad the different crimes were. But there's no reason to trust even the most fastidious and impartial researcher to do this, since they could only make these judgments by looking at a cold, paper record of a case. A judge and jury in each case are uniquely well-positioned to make judgments about whether a defendant is guilty and how bad the defendant's specific actions were. You can never fully step into the judge's or jury's shoes.

As one example, we're told that black defendants are more likely than white defendants to kill during "robberies." That might sound like a relatively drab category of crime, in contrast with the white defendants, who are described in a way that sounds viscerally reprehensible: they "were more likely to have committed murders during rapes and kidnappings and to have beaten, stabbed or choked their victims." But one could easily imagine a robbery being quite brutal. Off-hand, I have no idea if "robberies" are generally worse than, say "kidnappings."

And remember, the severity of a crime is just one of many factors that can be relevant in sentencing. As the Houston Chronicle reported in a 2010 article about another research paper by Phillips:
District Attorney Pat Lykos, who has been in office for little more than a year, declined to comment on Phillips' conclusions about the past administration. She said under her leadership, a victim's race or ethnicity or education level would play no part in determining whether to seek the death penalty against an accused killer.

If the slain victim was single, that also would not play a role in the decision, but if the victim was married, the impact of the death on their family would be considered, Lykos said. If the victim had a criminal record and whether that was considered would depend on the facts surrounding their death, she said.

Factors that are considered in whether to seek the death penalty, Lykos said, include the victim's age and vulnerability, the number of victims killed, the brutality of the offense, whether the accused killer and victim had any prior relationship, the defendant's criminal record and life history, and the effect the crime had on society.
The 2008 article shows that the New York Times is capable of setting a high standard for itself in conveying these nuances about the limits of our ability to make a sweeping judgment about thousands of unique cases. Yesterday's op-ed shows that the Times is willing to let this complexity get simplified and filtered.

Friday, April 15, 2011

Do married men engage in less antisocial behavior because marriage tames them?

Or is there a "selection effect," i.e. men who are less antisocial (as in antisocial personality disorder, not as a synonym for "asocial"!) are more likely to get married in the first place?

Turns out the answer is: both.

Saturday, March 26, 2011

How well do you know your friends or your partner?

Don't just know your friend's or partner's qualities — "casual acquaintances" can do that. What requires being closer to them, and what will improve the friendship or relationship, is if you know what they find annoying. (via)

Tuesday, November 30, 2010

A new study that says gay people are coming out earlier than in the past . . .

. . . is wrong in a "new and interesting" way.

That blog post (1) explains how the study went wrong and (2) asks whether it could have possibly been right.

As to the second point, the blogger (Ben Goldacre) explains:

It’s a difficult analysis to design, because in each age band, there is no information on gay people who are not yet out, but may come out later, and also it’s hard to compare each age band with the others.
(The comments section on that post also has a lot of relevant insights.)

This reminds me of the oft-repeated factoid that "50% of marriages end in divorce." How could you ever determine whether this is true? You can observe divorces that have actually happened, but you can't possibly know whether existing marriages will end in divorce.

Even questions that seem to be about concrete, observable facts can't necessarily be answered by empirical research.
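To make the censoring problem concrete, here's a toy simulation in Python; every number in it (a 40% "true" eventual divorce rate, a 30-year observation window, a 12-year average time to divorce) is invented purely for illustration.

```python
# A toy simulation of the censoring problem (all numbers invented): we can
# only count divorces that have already happened by the time we look.
import random

random.seed(0)
CUTOFF = 30        # we observe marriages begun over a 30-year window
TRUE_RATE = 0.40   # in this toy world, 40% of marriages eventually end

n, observed_divorces = 10_000, 0
for _ in range(n):
    start = random.uniform(0, CUTOFF)                 # year marriage begins
    will_divorce = random.random() < TRUE_RATE
    years_until_divorce = random.expovariate(1 / 12)  # mean of 12 years
    # A divorce is only visible if it happens before the observation cutoff.
    if will_divorce and start + years_until_divorce <= CUTOFF:
        observed_divorces += 1

observed_rate = observed_divorces / n
print(f"true eventual rate: {TRUE_RATE:.0%}, observed so far: {observed_rate:.0%}")
```

Even with a "true" rate of 40%, the simulation observes a markedly lower rate, because many marriages that will eventually end haven't ended yet by the time we count.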

Thursday, November 11, 2010

Who proselytizes?

Those who believe less.

So says a paper about 3 psychological experiments called: "When in Doubt, Shout!"

Why would this be the case? Maybe there's a spectrum that explains why people hold their beliefs strongly. On one end of the spectrum, people value truth for its own sake. Their aim is to figure out what's actually true, and believe that. (I'm not saying anyone's motives are so pure; this is a theoretical extreme.)

On the other end of the spectrum, people select their beliefs, as Robin Hanson has explained, "to signal loyalty and ability." Hanson is keeping a list of signs you may be closer to that end of the spectrum. (The original list was just the first 11, but he added more based on feedback in his comments section and the comments in this post by Tyler Cowen.)

I'm not convinced by all Hanson's points. (He wasn't either; he crossed out one of them.) Here's one "sign" I disagree with:

17. You are especially eager to drop names when explaining positions and arguments.
If Hanson dislikes name-dropping, fair enough . . . but I still don't think name-dropping suggests that you're more interested in signaling loyalty than in pursuing truth for its own sake. Maybe you just like giving credit where credit is due. Or maybe you do like to show off your knowledge of specific commentators; this could be to signal, "Look, I cite Howard Zinn, so I fit in with our left-wing milieu," but it could just as well be to signal that you're not afraid to challenge the conventional wisdom of that milieu since you mention Thomas Sowell. Conversely, those who want to signal group loyalty through their beliefs might prefer not to attribute those beliefs to specific individuals but to state them as free-standing maxims — things "everyone knows" rather than one person's opinion.

But I do agree with most of Hanson's "signs," especially these:
9. You find it easy to conclude that those who disagree with you are insincere or stupid. . . .

13. You care more about consistency between your beliefs than about belief accuracy.

14. You go easy on sloppy arguments by folks on “your side.” . . .

18. You find it hard to list weak points and counter-arguments on your positions.

19. You feel passionately about a topic, but haven’t sought out much evidence.

20. You are reluctant to not have an opinion on commonly discussed topics.
A commenter on Tyler Cowen's post does the inevitable turning of the tables:
This list is an attempt to signal Robin Hanson's ability to find truth, but ends up being a signal of Robin Hanson's ability to attribute signaling to all human actions, and inability to distinguish between merely not-truth motives, and actively loyal or able motives.

Wednesday, November 3, 2010

Why do the "happiest places" have the most suicide?

A study of suicide rates across different countries and US states (PDF) concludes:

[T]he happiest places have the highest suicide rates. . . . [P]eople may find it particularly painful to be unhappy in a happy place, so that the decision to commit suicide is influenced by relative comparisons.
I found the study from this blog post, where the first comment says:
My new mission: Prevent suicides by making the world a more miserable place.

Friday, October 15, 2010

Does empirical research confirm the "Caring for Introvert" article?

In this little Atlantic article about introverts from 2003 (which was so wildly popular that the Atlantic saw fit to publish not one, not two, but three retrospectives about it), Jonathan Rauch said:

Are introverts misunderstood? Wildly. That, it appears, is our lot in life. "It is very difficult for an extrovert to understand an introvert," write the education experts Jill D. Burruss and Lisa Kaenzig. (They are also the source of the quotation in the previous paragraph.) Extroverts are easy for introverts to understand, because extroverts spend so much of their time working out who they are in voluble, and frequently inescapable, interaction with other people. They are as inscrutable as puppy dogs. But the street does not run both ways. Extroverts have little or no grasp of introversion. They assume that company, especially their own, is always welcome. They cannot imagine why someone would need to be alone; indeed, they often take umbrage at the suggestion. As often as I have tried to explain the matter to extroverts, I have never sensed that any of them really understood. They listen for a moment and then go back to barking and yipping.
This 2009 psychology study (via) said in its abstract:
We examined the differences between estimating the emotions of protagonists and evaluating those of readers in narrative comprehension. Half of the participants read stories and rated the emotional states of the protagonists, while the other half of the participants rated their own emotional states while reading the stories. The results showed that reading comprehension was facilitated when highly extraverted participants read stories about, and rated the emotional experiences of, extraverted protagonists, with personalities similar to their own. However, the same facilitative effect was not observed for less extraverted participants, nor was it observed for either type of participants under the condition in which participants rated their own emotional experiences. Thus, at least for highly extraverted participants, readers both facilitated the construction of a situation model and correctly estimated the emotional states of protagonists who were similar to themselves, perhaps due to empathy.

Sunday, October 10, 2010

Is there more or less stigma against a mental illness if people believe it's genetic?

This article looks at the question. The most striking finding: in a metastudy of 19 studies, "18 found that belief in a genetic or biological cause was associated with more negative attitudes to people with mental health problems. Just one found the opposite, that belief in a genetic or biological cause was associated with more positive attitudes."

The writer, Ben Goldacre, juxtaposes those psychological findings with this quote from a professor of neuropsychiatric genetics, responding to research that says ADHD is partly genetic:

"We hope that these findings will help overcome the stigma associated with ADHD . . . . Too often, people dismiss ADHD as being down to bad parenting or poor diet. As a clinician, it was clear to me that this was unlikely to be the case. Now we can say with confidence that ADHD is a genetic disease and that the brains of children with this condition develop differently to those of other children."
As Goldacre observes, anyone who's been campaigning against the stigmatization of mental health disorders seems to have a severely mistaken assumption about people's attitudes toward mental health. If you believe someone's behavior comes from their genes, you won't necessarily be more inclined to forgive them. You might look down on them more: it's a problem with the whole person, not just a one-time decision they made. (That may be a very simplistic way to look at it. But I'm just describing how people in general might think; I'm not approving of these views.)

Tuesday, September 28, 2010

Women make less money than men on average, but how much (if any) of this is due to sexism/discrimination?

Last week Christina Hoff Sommers had a New York Times op-ed about the gap in men's and women's pay. The headline: "Women Don't Need the Paycheck Fairness Act." Sommers writes:

AMONG the top items left on the Senate’s to-do list before the November elections is a “paycheck fairness” bill, which would make it easier for women to file class-action, punitive-damages suits against employers they accuse of sex-based pay discrimination.

The bill’s passage is hardly certain, but it has received strong support from women’s rights groups, professional organizations and even President Obama, who has called it “a common-sense bill.”

But the bill isn’t as commonsensical as it might seem. It overlooks mountains of research showing that discrimination plays little role in pay disparities between men and women, and it threatens to impose onerous requirements on employers to correct gaps over which they have little control. . . .

[F]or proof, proponents point out that for every dollar men earn, women earn just 77 cents.

But that wage gap isn’t necessarily the result of discrimination. On the contrary, there are lots of other reasons men might earn more than women, including differences in education, experience and job tenure.

When these factors are taken into account the gap narrows considerably — in some studies, to the point of vanishing. A recent survey found that young, childless, single urban women earn 8 percent more than their male counterparts, mostly because more of them earn college degrees.

Moreover, a 2009 analysis of wage-gap studies commissioned by the Labor Department evaluated more than 50 peer-reviewed papers and concluded that the aggregate wage gap “may be almost entirely the result of the individual choices being made by both male and female workers.”
I agree with all of that. I don't have much of an opinion on the bill, since I haven't studied the provisions. I just want to focus on the underlying premise: that there's a significant discrimination-based gap in how much men and women are paid.

Over the weekend, the New York Times ran several letters rebutting the op-ed. If you know how these discussions tend to go, and if you're familiar with the NYT's letters section, you might be able to guess what the top letter says. Linda D. Hallman of the American Association of University Women (AAUW) writes:
The wage gap is real. Our 2007 report, “Behind the Pay Gap,” which controlled for factors flagged by Ms. Sommers, like education and experience, found that college-educated women earn less than men with comparable backgrounds.

The latest analysis Ms. Sommers cites, which shows young women outearning young men, needs to be viewed with a skeptical eye. The average American woman still earns 23 percent less than her male counterpart earns, a gap that is widest among older women and smallest among younger women.
Now, let's break down the main talking points from that letter:

1. The AAUW did a study that took into account Sommers's points, and they found that the gender gap is still "real" — women earn "less" than men.

2. "The average American woman still earns 23 percent less than her male counterpart earns."

3. The gap is "widest among older women and smallest among younger women."

Point 3 indicates that the gap is shrinking over time. It's easy to imagine that this trend would continue and eventually there'd be little or no gap, even without controlling for other factors.

How about points 1 and 2? If you read those in quick succession, you might go away with the impression that there's been a rigorous study that controlled for all the variables and still found a 23% pay gap between American male and female workers.

But that's not what Hallman says in her letter. She says the AAUW controlled for variables and found a gap . . . of unmentioned size. She says these are the same "factors flagged by Ms. Sommers" — implying that their report should allay the concerns Sommers expressed in her op-ed. Shortly after making these statements, she says there's a 23% gap.

But that 23% gap is before controlling for any variables. So that statistic is simply repeating the shortcoming that Sommers called out in her op-ed.

I wanted to see if that study Hallman links to did a better job of clarifying how much of the gap is actually due to gender itself, rather than other factors that happen to be correlated with gender. The link goes to an "Executive Summary" and a "Full Report" (which are both PDFs).

I don't see anything in the summary about controlling for variables. It simply reports the uncontrolled figures as if they're the definitive word on the "real" gap. We're supposed to see these statistics and immediately perceive sexism in how much employers pay their employees. But the gap alone doesn't demonstrate there's any sexism at play — it could result (in whole or in part) from benign factors that are correlated with gender.

So, how about the full report? I haven't read the whole thing — it's 45 pages, not counting the end materials. But they clearly found a lot of explanations for why there is such a gap, many of which they attribute to men's and women's different choices.

Here's one example of a factor, which I've taken almost at random: the report tells us that among full-time workers, men work longer hours than women (45 and 42 hours a week, respectively). This is also true among part-time workers (22 and 20 hours a week worked by men and women, respectively). Only 9% of female full-time workers work over 50 hours a week, compared with 15% of male full-time workers who work such long hours. (This is from page 15, and there's a relevant graph — only about full-time workers — on page 17.)

A little later, the AAUW gives us this conclusion, in a green, bold-faced heading:
A large portion of the gender pay gap is not explained by women's choices or characteristics.
Under that heading, the AAUW claims to support the conclusion:
If a woman and a man make the same choices, will they receive the same pay? The answer is no. The evidence shows that even when the "explanations" for the pay gap are included in a regression, they cannot fully explain the pay disparity. The regressions for earnings one year after college indicate that when all variables are included, about one-quarter of the pay gap is attributable to gender. That is, after controlling for all the factors known to affect earnings, college-educated women earn about 5 percent less than college-educated men earn. Thus, while discrimination cannot be measured directly, it is reasonable to assume that this pay gap is the product of gender discrimination.
Well, 5% is much smaller than the gap invoked in the NYT letter: 23%.

Now, you could sensibly respond: "But even a 5% difference in pay based on gender is unacceptable." Of course it would be unacceptable if women were paid 5% less than men due to their gender.

However, even this controlling-for-variables statistic does not give us grounds to conclude that if you're a woman, you get paid 5% less than you would have if only you had been born male.

After all, how could the report have reached such a definitive conclusion? It firmly says "The answer [to whether a man and a woman who make the same choices will make the same money] is no," and this is proven by "the evidence." That presupposes that the AAUW in fact looked at all the relevant evidence. But the best anyone can do when they're studying such an immensely complex societal question is to control for some variables. It's an open question whether there are other relevant factors out there that the study ignored.

For instance, I said that the report looks at hours worked by full-time workers and hours worked by part-time workers. OK, that's nice. But there's some more information I'd like to know, which I don't see in the report's discussion of hours: how much more likely are men to work full-time rather than part-time? This question isn't answered by telling us how many more hours the full-time male workers work than their female counterparts. (As I said, I haven't read the whole report, so perhaps I'm wrong that the report fails to consider this factor. But you would think they'd mention it in the section about how many hours full-time and part-time workers work.) [UPDATE: Detailed discussion of this point in the comments. It's a little more complex than I thought when I was writing this post, but I still believe the report hasn't fully considered the distinction between full- and part-time workers.]

Now, what are all the variables the study failed to take into account? I don't know! And we're probably not going to find the answer to that question in the report; naturally, the report is going to talk about the things the researchers did study rather than talk about the factors they failed to study.

Thomas Sowell explains in his book Economic Facts and Fallacies (page 61):
Ideally, we would like to be able to compare those women and men who are truly comparable in education, skills, experience, continuity of employment, and full-time or part-time work, among other variables, and then determine whether employers hire, pay, and promote women the same as they do comparable men. At the very least, we might then see in whatever differences in hiring, pay and promotions might exist a measure of how much employer discrimination exists. Given the absence or imperfections of data on some of these variables, the most we can reasonably expect is some measure of whatever residual economic differences between women and men remain after taking into account those variables which can be measured with some degree of accuracy and reliability. That residual would then give us the upper limit of the combined effect of employer discrimination plus whatever unspecified or unmeasured variables might also exist.
In other words, the size of the gap attributable to gender discrimination might be 5%. Or it might be less than that. It might be 2% or 1%. It might be zero. It might even favor women. We don't know the answer.
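Sowell's point about residuals can be illustrated with a toy regression. In the simulated world below there is zero discrimination by construction (wages depend only on hours worked), yet a regression that omits hours "finds" a gender gap. All of the numbers and variable names are invented for illustration; none of this comes from the AAUW report.

```python
# Toy illustration: an omitted variable correlated with gender gets
# absorbed into the "gender coefficient" of a wage regression.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
female = rng.integers(0, 2, n).astype(float)

# Hypothetical world: hours worked differ by gender, and log-wages depend
# ONLY on hours -- zero discrimination by construction.
hours = 45 - 3 * female + rng.normal(0, 5, n)
log_wage = 0.02 * hours + rng.normal(0, 0.1, n)

def coef_on_female(columns):
    """OLS coefficient on 'female' (assumed to be the last column)."""
    X = np.column_stack([np.ones(n)] + columns)
    beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
    return beta[-1]

naive = coef_on_female([female])              # omits hours worked
controlled = coef_on_female([hours, female])  # controls for hours

print(f"naive 'gender gap':      {naive:+.3f}")      # roughly -0.06
print(f"controlled 'gender gap': {controlled:+.3f}") # roughly  0.00
```

The naive regression reports a roughly 6% "gap" that is entirely an artifact of the omitted variable. And since any real study can only control for the variables it measured, a residual gap is, as Sowell says, an upper limit on discrimination plus whatever went unmeasured, not an estimate of discrimination itself.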

If you're interested enough in this question to have read this far, I highly recommend buying Economic Facts and Fallacies and reading the chapter called "Male-Female Facts and Fallacies," where Sowell brilliantly explains many of the factors that account for the gender gap.

IN THE COMMENTS: LemmusLemmus draws an insightful analogy:
Sowell is right, of course. Just ascribing residual variance to your favourite factor, such as sexism, is the statistical equivalent of the God of the Gaps argument. [link added]

Tuesday, January 27, 2009

New findings on coffee and kids

Two studies, both reported in the New York Times and both going against the grain of conventional wisdom:

1. Coffee - Drinking more of it is associated with a lower likelihood of suffering from dementia, including Alzheimer's.

Those who drank 3 to 5 "cups" of coffee a day were 65 percent less likely to have dementia than those who drank 0 to 2.

The researchers initially studied 2,000 men and women; 21 years later, they tracked down about 70% of those same people, and those respondents were the basis for the ultimate findings.

They caution, "We have no evidence that for people who are not drinking coffee, taking up drinking will have a protective effect." But the study "controll[ed] for numerous socioeconomic and health factors."

The article also mentions previous studies that have found a connection between drinking coffee and reduced incidence of Parkinson's.


2. Kids - This delicately worded article suggests that the "empty nest syndrome" is a myth:

[D]espite the common worry that long-married couples will find themselves with nothing in common, the new research, published in November in the journal Psychological Science, shows that marital satisfaction actually improves when the children finally take their exits.

"It's not like their lives were miserable," said Sara Melissa Gorchoff, a specialist in adult relationships at the University of California, Berkeley. "Parents were happy with their kids. It’s just that their marriages got better when they left home."

While that may not be surprising to many parents, understanding why empty nesters have better relationships can offer important lessons on marital happiness for parents who are still years away from having a child-free house.

Indeed, one of the more uncomfortable findings of the scientific study of marriage is the negative effect children can have on previously happy relationships. Despite the popular notion that children bring couples closer, several studies have shown that marital satisfaction and happiness typically plummet with the arrival of the first baby.

In June, The Journal of Advanced Nursing reported on a study from the University of Nebraska College of Nursing that looked at marital happiness in 185 men and women. Scores declined starting in pregnancy, and remained lower as the children reached 5 months and 24 months. Other studies show that couples with two children score even lower than couples with one child. ...

"Kids aren't ruining parents’ lives," Dr. Gorchoff said. “It’s just that they’re making it more difficult to have enjoyable interactions together."

RELATED: Would having children make me happier?

IN THE COMMENTS: Jeff says:
With respect to the "Empty Nest Syndrome" study, all I can say at this point is that I'm REALLY looking forward to testing its result.

Wednesday, August 6, 2008

Would having children make me happier?

I don't know the answer to that, of course. But it'd be pretty useful to know!

So this is a welcome finding -- not because I necessarily agree with the conclusion, but because the question is so emotionally charged that it's refreshing to see someone even attempt to answer it objectively:

"Parents experience lower levels of emotional well-being, less frequent positive emotions and more frequent negative emotions than their childless peers," says Florida State University's Robin Simon, a sociology professor who's conducted several recent parenting studies, the most thorough of which came out in 2005 and looked at data gathered from 13,000 Americans by the National Survey of Families and Households. "In fact, no group of parents—married, single, step or even empty nest—reported significantly greater emotional well-being than people who never had children. It's such a counterintuitive finding because we have these cultural beliefs that children are the key to happiness and a healthy life, and they're not."
Responding to Will Wilkinson's blog post on that study, Megan McArdle says: "I don't understand why Will Wilkinson finds this" -- that is, the study's findings -- "so surprising."

I don't understand why McArdle finds Wilkinson so surprised! I don't see the slightest expression of surprise in Wilkinson's post.

On the contrary, it seems like he has a pretty unflinchingly realistic take on the whole thing:
[T]he profundity of the experience of loving a child I think blinds many people to the very real costs of raising them. To accept that we have been made less happy in a real sense by our children threatens our sense of the profundity and the value of that bond. So people get upset when they hear this. But that’s not counter-evidence.
Those last two sentences are ones I had to re-read a few times to make sure I absorbed them. This is a key point that's often overlooked: your visceral aversion to an idea doesn't mean the idea is wrong.