Showing posts with label experts. Show all posts

Sunday, December 23, 2012

Growing class inequality in education

In a New York Times article with the headline "For Poor, Leap to College Often Ends in a Hard Fall," Jason DeParle writes:

Angelica Gonzales marched through high school in Goth armor — black boots, chains and cargo pants — but undermined her pose of alienation with a place on the honor roll. She nicknamed herself after a metal band and vowed to become the first in her family to earn a college degree.

Weekends and summers were devoted to a college-readiness program, where her best friends, Melissa O’Neal and Bianca Gonzalez, shared her drive to “get off the island” — escape the prospect of dead-end lives in luckless Galveston. Melissa, an eighth-grade valedictorian, seethed over her mother’s boyfriends and drinking, and Bianca’s bubbly innocence hid the trauma of her father’s death. They stuck together so much that a tutor called them the “triplets.”

Low-income strivers face uphill climbs, especially at Ball High School, where a third of the girls’ class failed to graduate on schedule. But by the time the triplets donned mortarboards in the class of 2008, their story seemed to validate the promise of education as the great equalizer.

Angelica, a daughter of a struggling Mexican immigrant, was headed to Emory University. Bianca enrolled in community college, and Melissa left for Texas State University, President Lyndon B. Johnson’s alma mater. . . .

Four years later, . . . [n]ot one of them has a four-year degree. Only one is still studying full time, and two have crushing debts. Angelica, who left Emory owing more than $60,000, is a clerk in a Galveston furniture store.

Each showed the ability to do college work, even excel at it. But the need to earn money brought one set of strains, campus alienation brought others, and ties to boyfriends not in school added complications. With little guidance from family or school officials, college became a leap that they braved without a safety net.

The story of their lost footing is also the story of something larger — the growing role that education plays in preserving class divisions. Poor students have long trailed affluent peers in school performance, but from grade-school tests to college completion, the gaps are growing. With school success and earning prospects ever more entwined, the consequences carry far: education, a force meant to erode class barriers, appears to be fortifying them.

“Everyone wants to think of education as an equalizer — the place where upward mobility gets started,” said Greg J. Duncan, an economist at the University of California, Irvine. “But on virtually every measure we have, the gaps between high- and low-income kids are widening. It’s very disheartening.”

The growing role of class in academic success has taken experts by surprise since it follows decades of equal opportunity efforts and counters racial trends, where differences have narrowed.
If the "experts" are so surprised, perhaps they should rethink their assumptions.

Tuesday, September 28, 2010

Women make less money than men on average, but how much (if any) of this is due to sexism/discrimination?

Last week Christina Hoff Sommers had a New York Times op-ed about the gap in men's and women's pay. The headline: "Women Don't Need the Paycheck Fairness Act." Sommers writes:

AMONG the top items left on the Senate’s to-do list before the November elections is a “paycheck fairness” bill, which would make it easier for women to file class-action, punitive-damages suits against employers they accuse of sex-based pay discrimination.

The bill’s passage is hardly certain, but it has received strong support from women’s rights groups, professional organizations and even President Obama, who has called it “a common-sense bill.”

But the bill isn’t as commonsensical as it might seem. It overlooks mountains of research showing that discrimination plays little role in pay disparities between men and women, and it threatens to impose onerous requirements on employers to correct gaps over which they have little control. . . .

[F]or proof, proponents point out that for every dollar men earn, women earn just 77 cents.

But that wage gap isn’t necessarily the result of discrimination. On the contrary, there are lots of other reasons men might earn more than women, including differences in education, experience and job tenure.

When these factors are taken into account the gap narrows considerably — in some studies, to the point of vanishing. A recent survey found that young, childless, single urban women earn 8 percent more than their male counterparts, mostly because more of them earn college degrees.

Moreover, a 2009 analysis of wage-gap studies commissioned by the Labor Department evaluated more than 50 peer-reviewed papers and concluded that the aggregate wage gap “may be almost entirely the result of the individual choices being made by both male and female workers.”
I agree with all of that. I don't have much of an opinion on the bill, since I haven't studied the provisions. I just want to focus on the underlying premise: that there's a significant discrimination-based gap in how much men and women are paid.

Over the weekend, the New York Times ran several letters rebutting the op-ed. If you know how these discussions tend to go, and if you're familiar with the NYT's letters section, you might be able to guess what the top letter says. Linda D. Hallman of the American Association of University Women (AAUW) writes:
The wage gap is real. Our 2007 report, “Behind the Pay Gap,” which controlled for factors flagged by Ms. Sommers, like education and experience, found that college-educated women earn less than men with comparable backgrounds.

The latest analysis Ms. Sommers cites, which shows young women outearning young men, needs to be viewed with a skeptical eye. The average American woman still earns 23 percent less than her male counterpart earns, a gap that is widest among older women and smallest among younger women.
Now, let's break down the main talking points from that letter:

1. The AAUW did a study that took into account Sommers's points, and they found that the gender gap is still "real" — women earn "less" than men.

2. "The average American woman still earns 23 percent less than her male counterpart earns."

3. The gap is "widest among older women and smallest among younger women."

Point 3 indicates that the gap is shrinking over time. It's easy to imagine that this trend would continue and eventually there'd be little or no gap, even without controlling for other factors.

How about points 1 and 2? If you read those in quick succession, you might go away with the impression that there's been a rigorous study that controlled for all the variables and still found a 23% pay gap between American male and female workers.

But that's not what Hallman says in her letter. She says the AAUW controlled for variables and found a gap . . . of unmentioned size. She says these are the same "factors flagged by Ms. Sommers" — implying that their report should allay the concerns Sommers expressed in her op-ed. Shortly after making these statements, she says there's a 23% gap.

But that 23% gap is before controlling for any variables. So that statistic is simply repeating the shortcoming that Sommers called out in her op-ed.

I wanted to see if that study Hallman links to did a better job of clarifying how much of the gap is actually due to gender itself, rather than other factors that happen to be correlated with gender. The link goes to an "Executive Summary" and a "Full Report" (which are both PDFs).

I don't see anything in the summary about controlling for variables. It simply reports the uncontrolled figures as if they're the definitive word on the "real" gap. We're supposed to see these statistics and immediately perceive sexism in how much employers pay their employees. But the gap alone doesn't demonstrate there's any sexism at play — it could result (in whole or in part) from benign factors that are correlated with gender.

So, how about the full report? I haven't read the whole thing — it's 45 pages, not counting the end materials. But they clearly found a lot of explanations for why there is such a gap, many of which they attribute to men's and women's different choices.

Here's one example of a factor, which I've taken almost at random: the report tells us that among full-time workers, men work longer hours than women (45 and 42 hours a week, respectively). This is also true among part-time workers (22 and 20 hours a week worked by men and women, respectively). Only 9% of female full-time workers work over 50 hours a week, compared with 15% of male full-time workers who work such long hours. (This is from page 15, and there's a relevant graph — only about full-time workers — on page 17.)

A little later, the AAUW gives us this conclusion, in a green, bold-faced heading:
A large portion of the gender pay gap is not explained by women's choices or characteristics.
Under that heading, the AAUW claims to support the conclusion:
If a woman and a man make the same choices, will they receive the same pay? The answer is no. The evidence shows that even when the "explanations" for the pay gap are included in a regression, they cannot fully explain the pay disparity. The regressions for earnings one year after college indicate that when all variables are included, about one-quarter of the pay gap is attributable to gender. That is, after controlling for all the factors known to affect earnings, college-educated women earn about 5 percent less than college-educated men earn. Thus, while discrimination cannot be measured directly, it is reasonable to assume that this pay gap is the product of gender discrimination.
Well, 5% is much smaller than the gap invoked in the NYT letter: 23%.
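The distance between those two numbers is easy to see in a simulation. The sketch below uses entirely made-up synthetic data (the coefficients, the hours and experience figures, and the 5% "direct" effect are all assumptions chosen for illustration, not estimates from the AAUW report): when measured factors like hours and experience differ between groups and affect pay, the raw group gap can sit near 23% even though the gap after controls is closer to 5%.

```python
import numpy as np

# Synthetic illustration (not real data): a large raw male-female wage
# gap can coexist with a much smaller gap after controls.
rng = np.random.default_rng(0)
n = 100_000
female = rng.integers(0, 2, n)

# Assumed data-generating process: hours and experience differ by group
# on average and both raise log pay; the direct "gender" term is 5%.
hours = 45 - 3 * female + rng.normal(0, 5, n)
experience = 12 - 4 * female + rng.normal(0, 4, n)
log_wage = (2.0 + 0.02 * hours + 0.03 * experience
            - 0.05 * female               # the only direct gender effect
            + rng.normal(0, 0.3, n))

# Raw gap: difference in group means, with no controls at all.
raw_gap = log_wage[female == 0].mean() - log_wage[female == 1].mean()

# Controlled gap: OLS of log wage on gender plus the measured factors.
X = np.column_stack([np.ones(n), female, hours, experience])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

print(f"raw gap:        {raw_gap:.3f}")   # roughly 0.23
print(f"controlled gap: {-beta[1]:.3f}")  # roughly 0.05
```

Both headline numbers are "real" in this toy world; they just answer different questions, which is exactly the distinction the letter elides.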

Now, you could sensibly respond: "But even a 5% difference in pay based on gender is unacceptable." Of course it would be unacceptable if women were paid 5% less than men due to their gender.

However, even this controlling-for-variables statistic does not give us grounds to conclude that if you're a woman, you get paid 5% less than you would have if only you had been born male.

After all, how could the report have reached such a definitive conclusion? It firmly says "The answer [to whether a man and a woman who make the same choices will make the same money] is no," and this is proven by "the evidence." That presupposes that the AAUW in fact looked at all the relevant evidence. But the best anyone can do when they're studying such an immensely complex societal question is to control for some variables. It's an open question whether there are other relevant factors out there that the study ignored.

For instance, I said that the report looks at hours worked by full-time workers and hours worked by part-time workers. OK, that's nice. But there's some more information I'd like to know, which I don't see in the report's discussion of hours: how much more likely are men to work full-time rather than part-time? This question isn't answered by telling us how many more hours the full-time male workers work than their female counterparts. (As I said, I haven't read the whole report, so perhaps I'm wrong that the report fails to consider this factor. But you would think they'd mention it in the section about how many hours full-time and part-time workers work.) [UPDATE: Detailed discussion of this point in the comments. It's a little more complex than I thought when I was writing this post, but I still believe the report hasn't fully considered the distinction between full- and part-time workers.]

Now, what are all the variables the study failed to take into account? I don't know! And we're probably not going to find the answer to that question in the report; naturally, the report is going to talk about the things the researchers did study rather than talk about the factors they failed to study.

Thomas Sowell explains in his book Economic Facts and Fallacies (page 61):
Ideally, we would like to be able to compare those women and men who are truly comparable in education, skills, experience, continuity of employment, and full-time or part-time work, among other variables, and then determine whether employers hire, pay, and promote women the same as they do comparable men. At the very least, we might then see in whatever differences in hiring, pay and promotions might exist a measure of how much employer discrimination exists. Given the absence or imperfections of data on some of these variables, the most we can reasonably expect is some measure of whatever residual economic differences between women and men remain after taking into account those variables which can be measured with some degree of accuracy and reliability. That residual would then give us the upper limit of the combined effect of employer discrimination plus whatever unspecified or unmeasured variables might also exist.
In other words, the size of the gap attributable to gender discrimination might be 5%. Or it might be less than that. It might be 2% or 1%. It might be zero. It might even favor women. We don't know the answer.
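Sowell's upper-bound point can also be made concrete with a simulation. In the sketch below, everything is invented for illustration (the "unmeasured" factor, its coefficients, and the group differences are all assumptions): true discrimination is set to exactly zero, yet a regression that controls only for the measured variable still produces an apparent "gender" gap of about 5%, because the gender coefficient absorbs the omitted factor.

```python
import numpy as np

# Synthetic illustration of the residual-as-upper-bound point: a
# "controlled" gap also absorbs any relevant variable the study
# failed to measure.
rng = np.random.default_rng(1)
n = 100_000
female = rng.integers(0, 2, n)

hours = 45 - 3 * female + rng.normal(0, 5, n)
# A hypothetical unmeasured factor (say, uninterrupted job tenure)
# that differs by group and affects pay, but that the researchers
# never observe.
unmeasured = 10 - 2 * female + rng.normal(0, 3, n)

# True data-generating process: ZERO direct gender effect.
log_wage = 2.0 + 0.02 * hours + 0.025 * unmeasured + rng.normal(0, 0.3, n)

# Regression controlling only for the measured factor (hours).
X = np.column_stack([np.ones(n), female, hours])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

# The residual "gender" coefficient comes out near 5% even though
# discrimination in this simulation is exactly zero.
print(f"apparent gender gap: {-beta[1]:.3f}")
```

Nothing here shows that the real-world residual is zero; it shows only that a nonzero residual, by itself, cannot distinguish discrimination from omitted variables, which is precisely why the residual is a ceiling rather than an estimate.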

If you're interested enough in this question to have read this far, I highly recommend buying Economic Facts and Fallacies and reading the chapter called "Male-Female Facts and Fallacies," where Sowell brilliantly explains many of the factors that account for the gender gap.

IN THE COMMENTS: LemmusLemmus draws an insightful analogy:
Sowell is right, of course. Just ascribing residual variance to your favourite factor, such as sexism, is the statistical equivalent of the God of the Gaps argument. [link added]

Wednesday, September 15, 2010

Scientists keep getting things wrong. Should we stop believing in science?

"Plenty of today’s scientific theories will one day be discredited. So should we be sceptical of science itself?" That's the teaser for a short, worth-reading article (via Arts & Letters Daily). Here's an excerpt:

Physicists, in particular, have long believed themselves to be on the verge of explaining almost everything. In 1894 Albert Michelson, the first American to get a Nobel prize in science, said that all the main laws and facts of physics had already been discovered. In 1928 Max Born, another Nobel prize-winner, said that physics would be completed in about six months’ time. In 1988, in his bestselling “A Brief History of Time”, the cosmologist Stephen Hawking wrote that “we may now be near the end of the search for the ultimate laws of nature.” Now, in the newly published “The Grand Design”, Hawking paints a picture of the universe that is “different…from the picture we might have painted just a decade or two ago”. In the long run, physicists are, no doubt, getting closer and closer to the truth. But you can never be sure when the long run has arrived. And in the short run—to adapt Keynes’s proverb—we are often all wrong.

Most laymen probably assume that the 350-year-old institution of “peer review”, which acts as a gatekeeper to publication in scientific journals, involves some attempt to check the articles that see the light of day. In fact they are rarely checked for accuracy, and, as a study for the Fraser Institute, a Canadian think-tank, reported last year, “the data and computational methods are so seldom disclosed that post-publication verification is equally rare.” Journals will usually consider only articles that present positive and striking results, and scientists need constantly to publish in order to keep their careers alive. . . . Historians of science call this bias the “file-drawer problem”: if a set of experiments produces a result contrary to what the team needs to find, it ends up filed away, and the world never finds out about it.
Despite all this and more, the author concludes the article by saying we should be generally credulous of science. Isn't this an outrageous paradox?

He thinks there's no other choice, since not to believe in science would be not to believe in anything, which would be paralyzing. "[S]cience is the only game in town."

Well, not really. If you're a professional scientist, it's certainly not that simple. You don't have a binary choice between "believing" or "not believing" in "science." You should be aware of enough of the complexities of your field that you can be more or less skeptical of different claims, using some kind of epistemic sliding scale.

And if you're a layperson, you usually don't have to believe in scientific theories at all in order to lead a productive life (as long as you're familiar with enough of the basics not to be embarrassed if they come up in conversation). But what if you're facing a specific problem and your best hope of a solution depends on science, such as taking medication? Well, you still don't need to be completely credulous about science in general or even the scientific claims behind that medication. You can take a gamble that the scientists are more likely than not to be right. This just means you think the odds that the medication is effective are greater than the odds of any other method you know of (including doing nothing); it doesn't mean you believe the odds that it's effective are 100%. You can use a working assumption that scientists are getting things right, based on your hunch that this usually turns out to be true -- but you can, at the same time, be skeptical of your own hunch. This needn't lead to paralysis. On the contrary, this semi-skeptical attitude can make it easier to move on once we discover, as we're sure to from time to time, that we were believing in bad science.

Thursday, February 25, 2010

Should children be medicated to deal with their psychological issues?

In this book review that's probably a more worthwhile read than the book, Alison Gopnik writes:

Within the past few years more and more children have been given powerful brain-altering drugs to deal with a wide range of problems. . . .

You can sympathize with the impulse of parents to do something, anything at all, to help their children. But that doesn't alter the fact that the scientific evidence just isn't clear about what to do. On balance, though, the evidence suggests that we should be conservative about prescribing drugs to children, and much more conservative than we actually are. Even the scientists who advocate some use of drugs acknowledge that they are overprescribed and badly managed. Brains are complex enough, children's developing brains are even more complex, and determining the long-term effects of drugs that alter those brains is especially difficult. Children are different from adults, often in radical ways, and many childhood problems resolve just as part of development.

On top of that, each generation of doctors discovers that the last generation was disastrously misguided in its medical interventions, from lobotomies to estrogen replacement, at the same time that they assure the patients that this time is different.
I'm also glad to see that the review highlights the importance of compatible "levels of description":
[Judith] Warner's book [We've Got Issues] also reflects a common confusion in popular writing about psychology. She writes as if there are just two kinds of explanations for human behavior. Either the everyday narratives are right—so that children are unhappy because their parents don't care about them, or they fail at school because they are lazy. Or else the right answer is that the children's problems are the result of "something in their brains." Warner's logic seems to be that since the parents do care about their kids, the problem must be in the children's brains and therefore drugs will fix it.

But everything about human beings, cultural or individual, innate or learned, is in our brains. Loss and humiliation change our serotonin levels, education transforms our brain connections, social support affects our cortisol. Neurological and psychological and social processes are inextricable. The work of psychological science is to identify causes at many levels of description—social, cultural, individual, and neurological.
There was a very insightful blog post (on Psychology Today's website) that makes a similar point about evolutionary psychology: "Is it evolutionary, or is it . . . ?" (That post speaks of "levels of causation," which is the same thing as the "levels of description" in the above block quote.) I've also blogged this concept before: "Can you give a neurological or evolutionary explanation of love without debunking the whole idea of love?"

By the way, that book review has several points that could be added to my list of ways blogs are better than books. The bias in favor of conclusions that come from riveting stories is huge.

Monday, November 30, 2009

How to make economics confusing enough to get published

In an article called "Confessions of an Economist: Writing to Impress Rather than to Inform" (PDF), economics professor David R. Hakes tells this story about academia's perverse bias in favor of inscrutability (via):

A colleague presented a fairly complex paper on how firms might use warranties to extract rent from certain users of their products. No one in the audience seemed to follow the argument. Because I found the argument to be perfectly clear, I repeatedly defended the author and I was able to bring the audience to an understanding of the paper. The author was so pleased that I was able to understand his work and explain it to others that he asked me if I was willing to coauthor the paper with him. I said I would be delighted.

We managed to reduce the equations in the paper to six. At this stage the paper was perfectly clear and was written at a level so that it could reach a broad audience. When we submitted the paper to risk, uncertainty, and insurance journals, the referees responded that the results were self-evident. After some degree of frustration, my coauthor suggested that the problem with the paper might be that we had made the argument too easy to follow, and thus referees and editors were not sufficiently impressed. He said that he could make the paper more impressive by generalizing the model. While making the same point as the original paper, the new paper would be more mathematically elegant, and it would become absolutely impenetrable to most readers. The resulting paper had fifteen equations, two propositions and proofs, dozens of additional mathematical expressions, and a mathematical appendix containing nineteen equations and even more mathematical expressions. I personally could no longer understand the paper and I could not possibly present the paper alone.

The paper was published in the first journal to which we submitted. . . . While the audience for the original version of the paper was broad, the audience for the published version of the paper has been reduced to a very narrow set of specialists and mathematicians. Even for mathematicians, . . . the time and effort necessary to read the paper may exceed the benefits received from reading it. I am now part of the conspiracy to intentionally make simple ideas obscure and complex.
Alas, although he says in the article's conclusion that he'll try to "write to inform rather than to impress," he admits he'll still occasionally succumb to the professional norm of obfuscation:
If in the future a referee or an editor suggests that I "generalize the model" or "make the model dynamic" when I feel that the change is an unnecessary complication which will likely cloud the issue rather than illuminate it, I will probably do as they requested rather than fight for clarity.

Monday, November 23, 2009

Scientific happiness studies are missing the point.

"The fundamental error of the science - and the reason why so many of its recommendations sound trivial or just confused - is the assumption that happiness is the same as positive emotion. Researchers are continuously drawn back to this idea since it makes happiness measurable."

So says Mark Vernon (who also writes the excellent "Philosophy and Life Blog"), channeling Robert Schoch's book The Secrets of Happiness: Three Thousand Years of Searching for the Good Life.

The whole article is well worth reading and worth keeping in mind the next time someone tries to tell you that researchers have discovered that people who do such-and-such are "happier" than people who do so-and-so.

Wednesday, July 22, 2009

"Who do you trust?"

Good question!

Here are some of Joel Achenbach's answers:

Google and Wikipedia are pretty good, but they're just starting points in the quest to find out what's going on....

I trust nature, resilient and resourceful as it has shown itself to be for some 4 billion years in these parts.

I trust the scientific method, for being so relentlessly self-correcting, and for having the courage to view truth as provisional.

I trust the future. Could be foolish. But trust always has an element of faith. I trust the future to give us a better world. (Cross my fingers.)
Achenbach picked up the "Who do you trust?" meme from this nicely short Washington Post piece, in which Philip Kennicott asks the same question in the wake of the death of Walter Cronkite. Kennicott writes:
It was deeply disturbing, but not terribly surprising, to learn that under the guidance of a stern man in a lab coat, ordinary people would torture innocent victims in the infamous Milgram experiment carried out at Yale University. Cronkite was a white man in a tie, with a calm, reassuring voice, and he could have talked us into almost anything, if he wanted to. But his legacy is a paradox: We trusted him to teach us to trust less.

The nostalgia for Cronkite is nostalgia not for a lost golden age, but for a brief time when three large media corporations held a monopoly on the airwaves, when trust could be sorted out easily and quickly with the shorthand of race, class and education....

But there is no aura of trust that public figures can put on like a bespoke suit. Trust has been shattered into a million little pieces, which was, perversely, the name of a dubious memoir endorsed by Oprah, unofficially the most trusted woman in America. Replacing it is a host of smaller and more precise ideas. Transparency. Authenticity. Accuracy. A different world, and not necessarily a worse one.
(Photo of Encyclopedia Britannica by Stewart Butterfield.)

Monday, July 20, 2009

Crime is dropping in cities across the United States; experts baffled

The Washington Post reports:

Violent crime has plummeted in the Washington area and in major cities across the country, a trend criminologists describe as baffling and unexpected.

[Washington, DC], New York and Los Angeles are on track for fewer killings this year than in any other year in at least four decades. Boston, San Francisco, Minneapolis and other cities are also seeing notable reductions in homicides.

"Experts did not see this coming at all," said Andrew Karmen, a criminologist and professor of sociology at the John Jay College of Criminal Justice in New York.
Maybe the criminologists are systematically ruling out certain possible explanations.

Maybe if the facts are so baffling to them, they should consider changing their theories.

I keep reading about how the United States has too many people incarcerated. Isn't it possible that our policy hasn't been totally irrational, but is actually working?

Another post for my "experts" tag.

Wednesday, June 10, 2009

Why psychology isn't a science

Writing in Psychology Today, Norman N. Holland explains:

The problem comes from the very effort to be scientific. . . . [P]sychological experiments tend to get more and more specific. Experimenters will use exactly defined methods and procedures. They will use highly specific statistical tests appropriate to the experiment at hand. They may select subjects with very special characteristics. All this is, of course, quite appropriate in a discipline seeking to be scientific. But the end result is a teeny, tiny conclusion that cannot be added to other experiments with differently specific subjects, different statistical tests, different methods and procedures. No cumulation. No science....

When I ask my psychology students, What major conclusions about the human mind can you draw from contemporary psychological research?, I draw a blank....

Scientific psychology becomes unscientific because it is dealing with mind, and mind does not lend itself to experimental precision.
Sounds about right. I took a few psychology courses in college, and I was struck by how free the instructors were in stating their random opinions about life as if they were scientific facts. One professor told our class that each one of us in the room could potentially be a victim of a violently abusive relationship, and stay in it on a long-term basis. How could anyone possibly know that?

Friday, April 24, 2009

"This LP is basically not very good."

This review of Chris Cornell's new album, Scream, is actually more useful than the average review by a professional music critic. (Via ChordStrike.)

In that spirit, I wrote a review of St. Vincent's yet-to-be-released album Actor (which I was anticipating last Music Friday). Here it is, in full, originally appearing as a status update on Facebook:

John Althouse Cohen thinks the new St. Vincent album is pretty good but not as good as Marry Me. Better-produced, though.
You might read reviews of the album in Pitchfork or Rolling Stone, where the critics are trying to (1) satisfy people's word-count expectations and (2) show off their (a) facility with metaphor, (b) aptitude for lyrical analysis, and (c) knowledge of musical influences (probably mentioning Kate Bush and Joni Mitchell), but my Facebook status update tells you everything you need to know.

If you're a fan of St. Vincent based on Marry Me, it's worth getting Actor, but expect a bit of "sophomore slump." If you don't have either of her albums, get Marry Me first, and only get Actor if you really like Marry Me. Any ink spilled, or bandwidth hogged, dissecting this album in any further detail will be superfluous.

Saturday, April 11, 2009

"Ideology trumps evidence."

Even for doctors?

No -- that couldn't be! I thought they were the smart, good, trustworthy people.

Thursday, April 9, 2009

How Judges Think by Richard Posner

I'm reading How Judges Think, by the eminent judge, professor, and blogger Richard Posner. I highly recommend it to anyone who's interested in understanding what really drives judges' rulings.

Here are a few tidbits, all from the introduction:

  • Ivan Karamazov said that if God does not exist everything is permitted, and traditional legal thinkers are likely to say that if legalism (legal formalism, orthodox legal reasoning, a "government of laws not men," the "rule of law" ... and so forth) does not exist everything is permitted to judges -- so watch out!
  • [M]ost judges are cagey, even coy, in discussing what they do. They tend to parrot an official line about the judicial process (how rule-bound it is), and often to believe it, though it does not describe their actual practices.
  • The secrecy of judicial deliberations is an example of professional mystification. Professions such as law and medicine provide essential services that are difficult for outsiders to understand and evaluate. Professionals like it that way because it helps them maintain a privileged status. But they know they have to overcome the laity's mistrust, and they do this in part by developing a mystique that exaggerates not only the professional's skill but also his disinterest.
More to come...

Wednesday, April 1, 2009

Hope-based administration

In the midst of an overwritten, over-metaphored piece called "Is Obama skidding or crashing," Penn Jillette sums up exactly how I'm feeling about the Obama administration right now, except for the part about being twice Obama's weight:

President Obama is so damn smart. He just drips smart. He clearly understands stuff that we could never understand. He's trustworthy. ... If I weren't twice his weight, I'd fall back with my eyes closed into his caring arms in one of those cheesy '70s church trust exercises. He could talk me into anything.

Obama tells us that we can spend our way out of debt. He tells us that even though the government had control over the banks and did nothing to stop the bad that's going on, if we give them more control over more other bank-like things, then they can make sure bad stuff doesn't happen ever again. He says we can get out of all those big wars President Bush caused by sending more troops into Afghanistan. And I don't know. I really don't know.

RELATED: This blog post by my mom from right after the financial crisis exploded:
Democrazy.

A typo I just made while trying to IM the line "this shows we don't really have a democracy." The topic was how impossible it is for almost anyone to understand the current financial crisis, how disembodied it is from the presidential candidates we've been so focused on, and how we are forced by the complexity of the system to rely on experts whose reliability we cannot judge.

That was about an IM conversation she and I had in September 2008. I don't think the situation has gotten significantly better since then.

I feel like giving up on reading the news, then checking back in a year or two to see how things went. In the meantime, trying to figure out what's going on seems hopeless.

Wednesday, February 25, 2009

Keeping an open mind on the mind-body problem, part 3

In my previous two posts on the mind-body problem (post 1, post 2), I criticized materialist philosophers -- that is, those who believe only the physical exists and thus deny the existence of any kind of mind distinct from one's physical body. As I said (quoting Thomas Nagel), one huge problem with this view is that "all materialist theories deny the reality of the mind," though they're usually not explicit about this point, possibly because very few normal people would accept their conclusion if stated plainly.

Here's Thomas Nagel's view, which I agree with:

To insist on trying to explain the mind in terms of concepts and theories that have been devised exclusively to explain nonmental phenomena is, in view of the radically distinguishing characteristics of the mental, both intellectually backward and scientifically suicidal.
Well, so far all of this has focused on the flaws with materialism. But is this just a negative point, or is there some positive, viable alternative?

I think so, but it requires accepting the fact that we probably don't have a satisfying theory yet. That's no reason to assume we'll never have such a theory. [UPDATE: I changed it from "There's" to "That's" because I realized I didn't want to make such a firm statement. Colin McGinn argues that, indeed, we'll never have a good theory.]

Here's Nagel's extended argument to this effect (this is all from chapter 2 of The View from Nowhere (1986), which is one of the best philosophy books I've ever read):

1. "The shift from the universe of Newton to the universe of Maxwell required the development of a whole new set of concepts and theories.... This was not merely the complex application, as in molecular biology, of fundamental principles already known independently. Molecular biology does not depend on new ultimate principles or concepts of physics or chemistry, like the concept of field. Electrodynamics did."

2. Even if these new, disparate concepts have been "superseded by a deeper unity,"* we wouldn't have been able to discover that "deeper unity" in the first place "if everyone had insisted that it must be possible to account for any physical phenomenon by using concepts that are adequate to explain the behavior of planets, billiard balls, gases, and liquids. An insistence on identifying the real with the mechanical would have been a hopeless obstacle to progress, since mechanics is only one form of understanding, appropriate to a certain limited though pervasive subject matter."

* Nagel suggests that this has actually happened; I don't know enough about the relevant science to have an opinion on that.

3. "The difference between mental and physical is far greater than the difference between electrical and mechanical."

4. If you believe that something can be "pervasive" but "limited," to use the words from point 2 -- and it's hard to see how anyone could deny this possibility -- then you should be open to the view that the physical isn't necessarily the only thing that's real, but rather is "only one form of understanding."

5. Given that it certainly seems like the world includes not just the physical but also the mental, "[w]e need entirely new intellectual tools, and it is precisely by reflection on what appears impossible -- like the generation of mind out of the recombination of matter -- that we will be forced to create such tools."

6. It's possible that if we go down this road and come up with a successful theory of the mind, we will not arrive at dualism, but will discover some sort of "deeper unity" of the mind and body. Nagel elaborates on this point:
In other words, if a psychological Maxwell devises a general theory of mind, he may make it possible for a psychological Einstein to follow with a theory that the mental and the physical are really the same. But this could happen only at the end of a process which began with the recognition that the mental is something completely different from the physical world as we have come to know it through a certain highly successful form of detached objective understanding. Only if the uniqueness of the mental is recognized will concepts and theories be devised especially for the purpose of understanding it. Otherwise there is a danger of futile reliance on concepts designed for other purposes, and indefinite postponement of any possibility of a unified understanding of mind and body.
I completely agree with Nagel on all this, and I try to keep it in mind anytime I read or hear overly confident materialist philosophers.

Wednesday, February 18, 2009

Keeping an open mind on the mind-body problem, part 2

As I discussed in yesterday's post, most philosophers reject dualistic theories of the mind. If you're a professional philosopher, you're supposed to scoff at the word "dualism," point out that Descartes naively believed in dualism, and explain that we now understand how foolish he was. So foolish it's not even worth arguing about.

This is one of the many biases of philosophy that make it an unreliable source of truth. Being a dualist philosopher in this day and age is like being a politician who's a pro-choice Republican or a pro-life Democrat: you might have smart things to say that would enrich the debate, but you're going to be inhibited from saying them because that's just not what people in your position are supposed to do.

Another bias of academic philosophy is that if you can describe someone else's view as "mysterious" (or even "spooky"), that's considered a devastating critique. In contrast, you support your own theory by saying that if it's true, it explains a lot about the world. But the problem is that there's a lot about the world that is mysterious. And some theories that seem to "explain" a lot are actually just sweeping a bunch of complexity and mystery under the rug.

I wish instead of using "mysterious" as an insult, professional philosophers would see it as a potentially positive quality: "Hey, your theory accurately recognizes how mysterious and unsolved this phenomenon is." Of course, this would shed light on how limited philosophy's accomplishments are, so it's unsurprising that people who depend on philosophy to make a living avoid talking this way.


IN THE COMMENTS: A possible solution to the mystery of why philosophers use "mysterious" as an insult:
Isn't "mysterious" a code word for religion?
Philosophers will also use "mystical" with the same meaning, which makes the connection blatant.
UPDATE: Continued here.

Tuesday, February 17, 2009

Keeping an open mind on the mind-body problem, part 1

I've been thinking about the mind-body problem. One oddity about the problem is that, as Descartes famously recognized, the very act of "thinking" about it provides you with evidence that ties directly into the problem -- namely, evidence that you have mental states.

But Thomas Nagel says (in his essay "Why We Are Not Computers," from Other Minds):

The power of Descartes's intuitive argument is considerable, but dualism of either kind [substance dualism or property dualism] is now a rare view among philosophers, most of whom accept some kind of materialism. They believe that everything there is and everything that happens in the world must be capable of description by physical science.
That last sentence is deeply disturbing to me. There's an obvious problem and a less obvious problem with the assumption that the mind-body problem can be solved purely through physical science.

The obvious problem is: why should we assume we can know everything?

When I was a little kid, I would tell people, "I know everything, and you know neverything." Clearly I had an instinctive desire to "know everything," and I'm sure the feeling is common. But as I say, I was a kid. You're supposed to outgrow that. I don't see the point in doing philosophy if you don't acknowledge there might be things you just can't know about the world. Maybe most philosophers do assume science can explain everything, but if so, then most philosophers are being childish.

The less obvious problem is (again quoting Nagel from the same book):
all materialist theories deny the reality of the mind, but most of them disguise the fact (from themselves as well as from others) by identifying the mind with something else.

UPDATE: Continued here.

Friday, February 13, 2009

Bertrand Russell's thoughts on professionals, boredom, and breaking convention

I've blogged Bertrand Russell's book The Conquest of Happiness (1930) twice before:

1. Two kinds of careers.

2. The strawberry theory of good taste.


Here's some more:

3. Judging professionals — "[N]o outsider can tell whether a doctor really knows much medicine, or whether a lawyer really knows much law, and it is therefore easier to judge of [sic] their merit by the income to be inferred from their standard of life." (43-44)

4. Modern boredom — "[T]he machine age has enormously diminished the sum of boredom in the world. . . . We are less bored than our ancestors were, but we are more afraid of boredom. We have come to know, or rather to believe, that boredom is not part of the natural lot of man, but can be avoided by a sufficiently vigorous pursuit of excitement." (49-50)

5. Where to break convention — "Conventional people are roused to fury by departures from convention, largely because they regard such departures as a criticism of themselves. They will pardon much unconventionality in a man who has enough jollity and friendliness to make it clear . . . that he is not engaged in criticizing them. This method of escaping censure is, however, impossible to many of those whose tastes or opinions cause them to be out of sympathy with the herd. Their lack of sympathy makes them uncomfortable and causes them to have a pugnacious attitude, even if outwardly they conform or manage to avoid any sharp issue. People who are not in harmony with the conventions of their own set tend therefore to be prickly and uncomfortable and lacking in expansive good humor. These same people transported into another set, where their outlook is not thought strange, will seem to change their character entirely. From being serious, shy and retiring they may become gay and self-confident; from being angular they may become smooth and easy; from being self-centered they may become sociable and extrovert [sic]." (104-05)


(Photo by Fred Armitage.)

Monday, February 9, 2009

What I want to know about the stimulus bill

Here's what I want to know about the stimulus bill, apropos of Obama's outrage at congressional Republicans' resistance to it: Why almost a trillion dollars, right now?

I have very little confidence that people have actually figured out whether this thing is going to work. Now, that doesn't mean it's a bad idea -- maybe the gamble is worth taking even though it might very well not work, because that'd be better than doing nothing.

But why don't we find a middle ground where we spend some of it -- say, 100 billion dollars -- then study what effects it's having and decide what to do next?

Full disclosure: I don't really know what I'm talking about. If there's some reason it has to be done this way, then please explain why in the comments. I'd be curious to hear why I'm wrong. [UPDATE: A commenter has taken me up on this.]

This piece shows how much money we spent on other major projects. For instance, the Marshall Plan to rebuild Europe after World War II cost the equivalent of $115 billion in today's dollars. In other words, we could start out spending a historically enormous amount of money, while still spending just a tiny fraction of the $800-billion figure.

I was thinking about this after watching Megan McArdle's hour-long rant against the stimulus:



I especially like how she takes down the idea of economists as experts. Key points:

(1) Economists have never tested their theories that supposedly support the stimulus, because it would be impossible to do so. Any historical parallels are too different from the current situation. So there are too many confounding variables to be able to draw a scientific conclusion.

(2) Even if McArdle is wrong about that, the way economists could prove her wrong would be to make specific predictions about what effect the stimulus will have, and stake their professional reputations on it. She says none of them will do that.

Ah, but they talk about this paper, which does make predictions. Isn't that a counterexample? Well, I don't think so. Are those economists really going to accept any personal consequences if their predictions turn out to be wrong? I assume they'd say either that unexpected contingencies got in the way, or the stimulus wasn't enacted in the exact way they would have liked. And sure enough, the paper savvily includes this paragraph, loaded with caveats:

It should be understood that all of the estimates presented in this memo are subject to significant margins of error. There is the obvious uncertainty that comes from modeling a hypothetical package rather than the final legislation passed by the Congress. But, there is the more fundamental uncertainty that comes with any estimate of the effects of a program. Our estimates of economic relationships and rules of thumb are derived from historical experience and so will not apply exactly in any given episode. Furthermore, the uncertainty is surely higher than normal now because the current recession is unusual both in its fundamental causes and its severity.
Translation: "Don't blame us if it doesn't turn out the way we said it would."

Of course, if the stimulus is enacted and has fantastic results, you can bet they'll say, "See, we told you it would work."

Karl Popper said people who claim to be scientific but don't make falsifiable predictions are engaging in pseudo-science. So, isn't economics a pseudo-science?

And if a pseudo-science is the main authority for people's belief that the stimulus is a good idea, then we should be a lot more cautious than we're being. A trillion dollars -- which, as Sen. Mitch McConnell correctly pointed out, is more than the amount you'd spend if you spent a million dollars a day from the supposed birth of Jesus to now -- just seems like way too much money to blow in one shot on a wild gamble.
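The arithmetic behind the two dollar comparisons in this post is easy to check. Here's a back-of-the-envelope sketch in Python; the figures (roughly 2009 years since the supposed birth of Jesus, the $115 billion Marshall Plan equivalent, and the $800 billion stimulus) are the ones cited in the post, not independently verified:

```python
# Check Sen. McConnell's claim: a trillion dollars is more than
# $1 million a day spent every day for ~2009 years.
DAYS_PER_YEAR = 365.25
years_since_year_one = 2009
total_spent = years_since_year_one * DAYS_PER_YEAR * 1_000_000

# ~$734 billion, which is indeed less than $1 trillion,
# so the "more than" claim holds.
assert total_spent < 1_000_000_000_000
print(f"Million a day since year 1: ${total_spent / 1e9:.0f} billion")

# Check the Marshall Plan comparison: $115 billion (today's dollars)
# as a share of the $800 billion stimulus figure.
fraction = 115 / 800
print(f"Marshall Plan share of stimulus: {fraction:.0%}")
```

The Marshall Plan equivalent works out to about 14% of the stimulus figure, so "tiny fraction" is a bit generous, but the broader point stands: a historically large program could be funded for well under the full $800 billion.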

Oh, one other slight problem:




UPDATE: Bellwether alert! "If the Dems have lost JAC on this, they've lost the country."

Wednesday, January 28, 2009

How to be Malcolm Gladwell

"In the winter of 1963, Hakeem Olajuwon was born to the owners of a cement business in Lagos, Nigeria. "They taught us to be honest, work hard, respect our elders, believe in ourselves," Olajuwon once said of his parents. In his middle-class childhood, Olajuwon played handball and soccer, but it was not until the age of fifteen that he was exposed to basketball. After entering his first tournament, he realized that he was remarkably skilled at the sport. Within two years he had arrived in Texas, where he played for three seasons at the University of Houston. In 1983, he won the NCAA Tournament Player of the Year Award; he also led the Houston Cougars to two straight NCAA championship games. As the number one pick in the NBA draft in 1984, he could boast of being chosen two spots ahead of Michael Jordan. NBA analysts now consider him to be one of the twenty best players in the history of professional basketball.

"Olajuwon is just over 6'10." He perfectly exemplifies what might be called the Height Trumps Experience Rule, which I have just coined. This rule stipulates that people who are at least a foot taller than the average height will excel at a chosen sport, especially when height is an advantage in that sport. The rule also obtains when the individual in question discovered the game relatively late in life, and spent little time practicing during his or her youth. It sheds light on a variety of hitherto unexplained phenomena. I hope to be recognized for it."

Monday, January 5, 2009

The philosopher paradox

"Philosophers should be people who think especially well, but to have decided upon a career in philosophy marks you as irrational. How do you deal with that raging incoherence?"

That's my mom responding to a report on the hard economic times for philosophers.

Based on that report, it seems that philosophers at the latest American Philosophical Association conference have gotten desperate for topics. Their papers and panels at the conference included the following:

Philosophical Perspectives on Female Sexuality

Depression, Infertility and Erectile Dysfunction: The Invisibility of Female Sexuality in Medicine

Analyzing Bias in Evolutionary Explanations of Female Orgasm
Can you detect the subtle theme?

I'm not sure what the point of philosophy is, if that's what it's become.

But then, I've never quite understood the point of philosophy anyway. In the early days of this blog, I wrote:
I agree with what John Searle says in an interview in What Philosophers Think: that skepticism about the existence of the-real-world-as-we-know-it is like Zeno's Paradox: an intriguing, mind-bending puzzle that smart people will mull over but then quickly move on from, to focus on more important philosophical problems. You don't let Zeno's Paradox reshape your whole view of what philosophers do -- they're not on a mission to explain how there can be motion. But that seems to be roughly what's happened with analytic philosophy, thanks largely to Descartes. (Thus, my philosophy professor felt the need to qualify the steps of an argument with, "Assuming you believe that tables and chairs really exist ...")

This is one problem with studying philosophy: you're constantly told that you need to see certain things as problems. But they're not "problems" like "How do we fix the health care system?" or "How do we reduce crime?" In other words, they're not things that a normal person who's completely unfamiliar with the field would perceive as problems in need of solutions.

Of course, you could find problems in other fields that wouldn't be understood on their face as problems because they're laden with jargon or esoteric concepts. If these are real problems, though, they can at least be "understood" insofar as an expert can patiently explain the goal to a layperson: "It's important for us to figure out ____ because it could help us find a cure for such-and-such a disease," or whatever the payoff may be.

Even after spending hours and hours studying the philosophy of language (to take another example), I'd be hard-pressed to make the case that it's important for anyone to devote their life to explaining how it is that we can mean things through words. If you're like 99+% of humankind, you just accept that we do this, and move on with your life. And it seems pretty clear that if there's an option -- a perfectly feasible, easy option -- of just saying, "Oh well!" and moving on with your life ... and if this isn't a mere luxury enjoyed by some of the people while other people have to worry about it, but in fact the world would be just fine if no one worried about it ... then it's just not much of a "problem" at all.

That's my anti-philosophy philosophy.
And it's another example of the paradox my mom identified: if you're so brilliant at analyzing the world,* then why haven't you done a utilitarian calculus to figure out the extremely low probability that your philosophizing is going to accomplish anything?

* And have no doubt that philosophers are at least implicitly purporting to be brilliant. The philosopher Thomas Nagel has even made it explicit, saying that you should be "supersmart" to be a philosopher.


UPDATE: Church of Rationality gives a shot at answering that last question, declaring it the "Snarl of the Month." Or is it the Snark of the Month?