Showing posts with label philosophy.

Sunday, March 13, 2016

Hilary Putnam (1926 - 2016)

Harvard Professor Hilary Putnam died today at age 89. The website announcing his death says:

Putnam was a tremendously influential philosopher, working across a broad range of fields, including philosophy of mind, philosophy of science, philosophy of language, philosophy of math, and moral philosophy.
Wikipedia says:
He was known for his willingness to apply an equal degree of scrutiny to his own philosophical positions as to those of others, subjecting each position to rigorous analysis until he exposed its flaws. As a result, he acquired a reputation for frequently changing his own position.
Wikipedia also notes that he was a computer scientist.

Here's Martha Nussbaum on what Putnam can offer an America that seems less interested in philosophy than it used to be.

Two Putnam quotes from A Dictionary of Philosophical Quotations show his facility at refuting arguments. This is Putnam on the mind-body problem:
According to functionalism, the behaviour of, say, a computing machine is not explained by the physics and chemistry of the computing machine. It is explained by the machine's program. Of course, that program is realized in a particular physics and chemistry, and could, perhaps, be deduced from that physics and chemistry. But that does not make the program a physical or chemical property of the machine; it is an abstract property of the machine. Similarly, I believe that the psychological properties of human beings are not physical and chemical properties of human beings, although they may be realized by physical and chemical properties of human beings.
(You can read that quote in context here.)

And this is Putnam on logical positivism:
A.J. Ayer's Language, Truth and Logic spread the new message to the English-speaking philosophical world: untestable statements are cognitively meaningless. A statement must either be (a) analytical (logically true, or logically false . . .) or (b) empirically testable, or (c) nonsense, i.e. not a real statement at all, but only a pseudo-statement. . . . An obvious rejoinder was to say that the logical positivist criterion of significance was self-refuting: for the criterion itself is neither (a) analytic (unless, perhaps, it is analytically false!), nor (b) empirically testable. Strangely enough this criticism had very little impact on the logical positivists and did little to impede the growth of their movement.
(You can read that quote in context here.) In fairness, A.J. Ayer himself later repudiated much of Language, Truth, and Logic.

When an obituary is posted to Metafilter, the community blog, you'll typically see many commenters posting a single period to represent a moment of silence. So you'll see a long string of comments that are just:
.
.
.
That's been happening on the obituary post for Hilary Putnam, but one commenter did a variation on that, writing this as a moment of silence:
?

Wednesday, November 25, 2015

This is why we need more philosophers!

If Marco Rubio had thought more carefully about the relationship of the state to the people, perhaps he wouldn't have made this ridiculous statement.

(See my live-blog of the last Republican debate at 9:11.)

Monday, February 2, 2015

"There are moments when there is nothing more urgent than the defense of what has already been accomplished."

In an insightful article called "Crimes Against Humanities," about the need to prevent "science" from "invading the liberal arts," Leon Wieseltier responds to Steven Pinker's essay, "Science Is Not Your Enemy," in the New Republic (this is from last year, back when they were New Republic editors):

The question of the place of science in knowledge, and in society, and in life, is not a scientific question. Science confers no special authority, it confers no authority at all, for the attempt to answer a nonscientific question. It is not for science to say whether science belongs in morality and politics and art. Those are philosophical matters, and science is not philosophy, even if philosophy has since its beginnings been receptive to science. Nor does science confer any license to extend its categories and its methods beyond its own realms, whose contours are of course a matter of debate. The credibility of physicists and biologists and economists on the subject of the meaning of life . . . cannot be owed to their work in physics and biology and economics, however distinguished it is. The extrapolation of larger ideas about life from the procedures and the conclusions of various sciences is quite common, but it is not in itself justified; and its justification cannot be made on internally scientific grounds, at least if the intellectual situation is not to be rigged. Science does come with a worldview, but there remains the question of whether it can suffice for the entirety of a human worldview. . . .

Rejecting the various definitions of scientism—“it is not an imperialistic drive to occupy the humanities,” it is not “reductionism,” it is not “naïve”—Pinker proposes his own characterization of scientism, which he defends as an attempt “to export to the rest of intellectual life” the two ideals that in his view are the hallmarks of science. The first of those ideals is that “the world is intelligible.” The second of those ideals is that “the acquisition of knowledge is hard.” Intelligibility and difficulty, the exclusive teachings of science? This is either ignorant or tendentious. Plato believed in the intelligibility of the world, and so did Dante, and so did Maimonides and Aquinas and Al-Farabi, and so did Poussin and Bach and Goethe and Austen and Tolstoy and Proust. They all share Pinker’s denial of the opacity of the world, of its impermeability to the mind. They all join in his desire to “explain a complex happening in terms of deeper principles.” They all concur with him that “in making sense of our world, there should be few occasions in which we are forced to concede ‘It just is’ or ‘It’s magic’ or ‘Because I said so.’”

If Pinker believes that scientific clarity is the only clarity there is, he should make the argument for such a belief. He should also acknowledge its narrowness (though within the realm of science it is very wide), and its straitening effect upon the investigation of human affairs. Instead he simply conflates scientific knowledge with knowledge as such. In his view, anybody who has studied any phenomena that are studied by science has been a scientist. It does not matter that they approached the phenomena with different methods and different vocabularies. If they were interested in the mind, then they were early versions of brain scientists. If they investigated human nature, then they were social psychologists or behavioral economists. . . . If they contributed to knowledge, then they must have been scientists, because what other type of knowledge is there? . . .

[I]t was the imperative to keep up, to be “progressive,” which led to “the disaster of postmodernism” and other unfortunate hermeneutical fashions of recent decades. More importantly, the humanities do not advance the way the sciences advance. . . . The history of science is a history of errors corrected and discarded. But the vexations of philosophy and the obsessions of literature are not retired in this way. In these fields, the forward-looking cast backward glances. The history of old art and thought fuels the production of young art and thought. Scientists no longer consult Aristotle’s scientific writings, but philosophers still consult Aristotle’s philosophical writings. The present has the power of life and death over the past. It can choose to erase vast regions of it. Tradition is what the present calls those regions of the past that it retains, that it cherishes and needs. Contrary to the progressivist caricature, tradition is not the domination of the present by the past. It is the domination of the past by the present. . . .

There are moments when there is nothing more urgent than the defense of what has already been accomplished. . . . Sometimes wisdom is conventional. The denigration of conventional wisdom is itself a convention. . . .

The technological revolution will certainly transform and benefit the humanities, as it has transformed and benefited many disciplines and vocations. But it may also mutilate and damage the humanities, as it has mutilated and damaged many disciplines and vocations. My point is only that shilling for the revolution is not what we need now. The responsibility of the intellectual toward the technologies is no longer (if it ever was) mere enthusiasm. The magnitude of the changes wrought by the new machines calls for the revival of a critical temper. Too much is at stake to make do with that cool vanguard feeling. But Pinker is . . . waxing on like everybody else about how “this is an extraordinary time” because “powerful tools have been developed” and so on. . . . With his dawn-is-breaking scientistic cheerleading, Pinker shows no trace of the skepticism whose absence he deplores in others. His sunny scientizing blurs distinctions and buries problems. If there was one thing for which the humanities, the old humanities, the wearyingly traditional humanities, could be counted on, it was to introduce us also to the darkness and prepare us also for the worst.
(Here's Pinker's response, followed by another response from Wieseltier.)

Saturday, May 26, 2012

Why reductionism about the self and other philosophical subjects is so common and so wrong

There's a new book called The Self Illusion by Bruce Hood. In an interview with Sam Harris, Hood explains his premise that your sense of having a "self" is an illusion:

For me, an illusion is a subjective experience that is not what it seems. Illusions are experiences in the mind, but they are not out there in nature. Rather, they are events generated by the brain. Most of us have an experience of a self. I certainly have one, and I do not doubt that others do as well – an autonomous individual with a coherent identity and sense of free will. But that experience is an illusion – it does not exist independently of the person having the experience, and it is certainly not what it seems. That’s not to say that the illusion is pointless. Experiencing a self illusion may have tangible functional benefits in the way we think and act, but that does not mean that it exists as an entity.
Now, here's Will Wilkinson explaining his problem with that (he uses the word "eliminativism" where I use "reductionism"):
Right off, I get red flags. Eliminativism of all sorts -- about morality, consciousness, free will, the self -- is frequently motivated by what I like to call the “fallacy of disappointed expectations.” The heart of the fallacy is to accept at the outset that the nature of the self, for example, is precisely what an extravagantly metaphysical, often religious, account says that it is. Then one observes that there exists little or no evidence in support of that account. One then concludes, having already simply assumed that the self (or free will or consciousness or moral reasons) could not be something less grand, that there is no self (or free will or consciousness or morality). If the self isn’t a hard gem-like flame literally flickering somewhere east of the pancreas, then there is no self! Usually arguments from disappointed expectations are advanced in a spirit of excited self-congratulation, as if reasoning poorly were the same thing as staring bravely into the abyss.
I strongly agree with Wilkinson — about the self and all his other examples (free will, consciousness, morality). He's articulating something I've noticed before, but I hadn't thought to put it in terms of "disappointed expectations."

When I was in college, I had a philosophy professor who seemed frustrated by the fact that many students wouldn't give the obviously correct answers to the most basic moral hypotheticals (e.g. whether it's morally better to care for a sick person or to eat babies). He said something to the whole class that was startlingly rude but hard to deny: "The intelligence level tends to get turned down in philosophy classes."

Monday, November 1, 2010

Causation and correlation

Normally I don't link to conversations on Facebook, but Robert Wiblin's Facebook wall is open to the public, so anyone can see his insightful musings. (If you use Facebook, I recommend friending him.) For instance, he says:

Correlation is not causation, but correlates with it.
My response:
Causation is not correlation, but causes it.
Then, someone asks whether all causation is "statistical." I give a whole book as an answer: Probabilistic Causality by Ellery Eells (1953-2006). As you might have guessed from the title, the author's answer is yes. Eells pointed out (to those of us who took his course on probabilistic causality at the University of Wisconsin - Madison) that, in addition to the ubiquitous refrain that correlation does not prove causation, there's also the less well-known fact that causation does not prove correlation. (For instance, if A causes both B and C, and C prevents B more strongly than A causes B, A won't be correlated with B.) That observation, along with Eells's whole theory, was the inspiration for my response to Wiblin.
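
To make Eells's point concrete, here is a minimal simulation sketch in Python with NumPy (the variable names and all of the probabilities are made up purely for illustration): A raises the probability of both B and C, but C lowers the probability of B by more than A raises it, so A ends up negatively correlated with B even though A genuinely causes B.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # A occurs at random.
    a = (rng.random(n) < 0.5).astype(int)

    # A strongly causes C: P(C|A) = 0.9, P(C|not-A) = 0.1.
    c = (rng.random(n) < np.where(a == 1, 0.9, 0.1)).astype(int)

    # A weakly promotes B, but C strongly prevents B.
    p_b = 0.5 + 0.1 * a - 0.4 * c
    b = (rng.random(n) < p_b).astype(int)

    # A is a direct cause of B, yet the indirect path through C
    # swamps the direct one, so the correlation comes out negative.
    print(np.corrcoef(a, b)[0, 1])  # roughly -0.2

With these numbers, P(B given A) works out to 0.24 while P(B given not-A) is 0.46, so observing A makes B less likely on the whole: causation without (positive) correlation.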

Someone else links to this perfect XKCD comic:

Tuesday, October 5, 2010

What is the atheist / secular humanist / freethought community missing?

Over the weekend, I was reading the paper edition of this New York Times article about a schism in a secular humanist organization called the Center for Inquiry. It's written as a profile of the Center's "exiled founder," Paul Kurtz, who has a different vision of secular humanism or atheism than the new leader, Ronald Lindsay. Things have turned very sour between them:

The center’s donations have fallen since Mr. Kurtz’s departure, which prompted warring blog posts between his defenders and Mr. Lindsay’s. Matters have not improved: on Wednesday, when Mr. Kurtz stopped by the center, where he still keeps an office, he found the locks had been changed. Mr. Lindsay told me that Mr. Kurtz did not need the new key because he “has no connection with us.”
I was surprised that this fact was put in the very last paragraph of a 5-column article. Isn't that the big news? The new leader is so hostile to the founder that the former effectively kicked the latter out of his office (just a few days before the article went to press). Instead, the article leads with a description of Kurtz's dogs — who are all named after famous "free thinkers" (John Dewey, Bentham, Voltaire) — greeting the reporter in Kurtz's driveway.

Anyway, what's the substance of the schism?
In books like “What Is Secular Humanism?” Mr. Kurtz has argued for a universal but nonreligious ethics, one he now calls “planetary humanism.” Its first principle is that “every person on the planet should be considered equal in dignity and value.” In his books, he explains how this principle can be derived from nature and from what we know of the human species.

And he contrasted his affirmative vision with recent projects under Mr. Lindsay, like International Blasphemy Day. (The 2010 version, held Thursday, was renamed International Blasphemy Rights Day.) Mr. Kurtz was also a vocal critic of a contest for cartoons about religion that included some entries that could be considered deeply offensive.

"Angry atheism does not work," Mr. Kurtz said. "It has to be friendly, cooperative relations with people of other points of view."
Lindsay defended his blasphemy day in a blog post:
Two points. Although blasphemy may not, at present, be legally prohibited in the United States, many still hold the view that criticizing religion is socially unacceptable. Religion is considered a taboo subject.

I disagree. Placing religion off limits in social discourse is just another, gentler way of prohibiting examination and criticism of religion. In my view, all subjects of human interest should be open to examination and criticism by humans. . . .

Second, as many of you may know already, blasphemy remains very much a live legal issue in many countries — and therefore, remains a live issue for anyone concerned about human rights. Call a Teddy Bear "Muhammad" in some Islamic countries and you risk losing your head. Moreover, there have been repeated efforts — successful efforts I might add — to have various United Nations bodies condemn so-called "defamation of religion." This is a prohibition of blasphemy by another name.
I admire Lindsay's concern for free speech rights around the world. He makes a reasonable argument — when you look at it from a coldly rational standpoint. But there are always many different ways you could make a single point, and someone as smart as Lindsay surely realizes that people react not only to well-reasoned arguments but to the emotional impact of words. He could have still made his substantive point about blasphemy without putting the word "blasphemy" in the title of his event.

As I said, the NYT article is 5 columns long (which isn't very long — each column was a short fraction of the whole page). We see a photo of the 84-year-old Kurtz sitting in his armchair, holding his dog John Dewey, with an expanse of books behind him (books presumably written by the likes of his dogs' namesakes). These were the first 5 of the 6 columns on the page, but the 6th column got my attention. It was a few small text ads under the heading "Religious Services." One of the ads said this:
Love and Completeness are Your Spiritual Right:

Say Goodbye to Loneliness, Fear, & Lack! . . .

> Doors open 6:30

> Inspirational music at 7:00

> You are Welcome

> Child Care Provided
This is what "atheist," "secular humanist," or "freethought" organizations aren't offering people.

I don't think the main obstacle for secular humanists is that they're too "angry" (Kurtz's word) or too critical of religious people. Negativity can actually be quite effective. People of all religious stripes will vehemently criticize the societal elements they consider noxious; they'll criticize other religions and worldviews; they'll even criticize other people and strands of thought within their own religion. To criticize atheists or secular humanists for criticizing too much is missing their real shortcoming.

It's all too easy to dismiss the recent, popular "new atheist" books as if they're the real problem with secular humanism. This has become an obligatory flourish for secular humanists who are trying to position themselves as moderate and reasonable: "I'm not like those angry atheists, Christopher Hitchens and Sam Harris." (This is often said by those who haven't read Hitchens and Harris closely enough to know that they're not identical; for instance, they disagree profoundly about spirituality.)

The problem with secular humanists isn't their negativity, but their lack of a positive message that matters to most people. As brilliant and subtle and right as the books in Kurtz's library might be, most people aren't interested in reading philosophical treatises. Secular humanism might have no shortage of reason and insight for those who are interested, but how many people (other than academic elites) are actually interested?

Most people don't look to philosophically coherent doctrines for guidance in how to live; they care more about belonging to a community. And I don't mean "community" in the abstract sense in which we've become accustomed to using it ("the gay community," "the international community," etc.). I mean real community made up of your actual neighbors.

I have never seen a self-proclaimed atheist or secular humanist advertising an event with phrases like "You" — whoever you are! — "are Welcome," or "Child Care Provided." If secular humanist organizations want to become more of a force for good than religion is, they need to create communities that are meaningful enough that people will turn to them, by default, if they need someone to help take care of their children.

UPDATES: Lots of discussion in the comments. Also, someone on Twitter tells me that "child care is always provided at the @fofdallas" — referring to the Fellowship of Freethought in Dallas.



Monday, March 22, 2010

The 12 books that have influenced me the most

Tyler Cowen started this meme, in response to "a loyal reader" who told him:

I'd like to see you list the top 10 books which have influenced your view of the world.

Will Wilkinson, Matthew Yglesias, and many others have given their lists. There's no required number of books, but most people seem to be giving around 10.

Some of the recurring authors are Plato, Nietzsche, John Stuart Mill, Ayn Rand, Friedrich Hayek, Hannah Arendt, Robert Nozick, John Rawls, Michael Walzer, Thomas Kuhn, Derek Parfit, Paul Johnson, and Thomas Sowell. This is all slanted by the fact that the meme was started by a libertarian economist, so the people who pick up his meme are going to be disproportionately libertarian.

Here's my list:

1. The View from Nowhere by Thomas Nagel. (Previously blogged by me and also my dad.) He looks at many of the classic philosophical problems (knowledge, free will, the meaning of life, etc.) in order to illuminate the frustrating interplay between the objective and the subjective, both of which are inescapably real. 

2. An Enquiry Concerning Human Understanding by David Hume.
3. Critique of Pure Reason by Immanuel Kant. Kant's book is famously badly written, while Hume's book is pretty clearly written, for the 18th century. Both of them have to be confronted by anyone trying to understand the limits of understanding. They didn't create enduring theoretical frameworks, but they still made progress by waking us up from our "dogmatic slumbers" (as Kant said Hume had done to him).

4. Upheavals of Thought by Martha Nussbaum. Emotions aren't the opposite of reason — they contain intelligent thoughts and allow us to rationally interact with the outside world.

5. What's It All About? by Julian Baggini. An argument that the standard solutions to the meaning of life don't work.

6. Flow by Mihaly Csikszentmihalyi. (Blogged.) How to structure all the activities in your life to maximize happiness.

7. The Conquest of Happiness by Bertrand Russell. (Blogged.) You could file this under philosophy or self-help.

8. The Moral Animal by Robert Wright. (Blogged.) I'm sure there are more recent books on evolutionary psychology that are better supported (at least because more research has been done since 1994), and Wright himself admits that the theory has its shortcomings. But this book offers a compelling explanation of human behavior.

9. Mortal Questions by Thomas Nagel. He applies his dryly, lucidly analytical style to the kinds of questions that continental philosophy more often approaches with overwrought extravagance and obscurantism. The famous "What Is It Like to Be a Bat?" is one of many highlights; others include "Death" (blogged), "The Absurd" (blogged), "Sexual Perversion," and "The Fragmentation of Value."

10. The Mysterious Flame by Colin McGinn. Why we haven't, and aren't going to, solve the mind-body problem.

11. Rationality in Action by John Searle. A refreshing look at the problem of free will. (His shorter follow-up, Freedom & Neurobiology, deals with similar themes but also extends his analysis into political philosophy.)

12. Animal Liberation by Peter Singer. As the back cover says, he makes the case for a revolution in our concern for animals by reasoning from beliefs most people already hold. This is the one book about which I can say it has affected my life every single day for the past 20 years.

Looking over the list, I seem to have been most interested in thinking about thought and its place in our lives, with more emphasis on the inadequacy than the power of rational thought. This emphasis is rather awkward since any such analysis is itself an attempt to think rationally. The View from Nowhere captures this awkwardness explicitly.

Feel free to post a comment either listing the books that have influenced you the most, linking to your blog post with your list, or linking to other people's lists that you've found especially interesting.

Sunday, March 7, 2010

Why politics and policy are less important than music and art

I'm reading Confessions of a Philosopher, a 1997 book by Bryan Magee. It's the best long-form account I've read of how philosophical issues impose themselves on one's life. (I deliberately speak of the issues as animate things acting upon a passive person — this is a main theme of the book.)

In a chapter called "Mid-Life Crisis," Magee describes his anguished struggle with the problem of the absurd (which I recently blogged about at the end of this post). He says:

I used to look at people going about their normal lives with everyday cheerfulness and think: "How can they? And how can they suppose that any of what they're doing matters? They're like passengers on the Titanic, except that these people know already that they're headed for total and irremediable shipwreck." . . . Above all, I was baffled by the fact that the middle-aged, who were so close to death, tended to be even more cheerful than the young. . . .

Under the influence of these thoughts my values went through sea changes. Everything that was limited to this life and this world came to appear insignificant. Only what might possibly point beyond them, or have its basis outside them — beauty, art, sex, morality, integrity, metaphysical understanding — could even possibly be worth anything. . . . Success and fame were worse than nothing, because anyone pursuing them was actively throwing his life away. (253)
This leads him to contrast following politics (apparently as a hobby or an occupation) with experiencing art (again, apparently as an audience member or performer, amateur or professional). I was pleased to see his description, because it articulates why I've been feeling increasingly uninterested in politics:
Even on their own terms the politics and business of the world were absurdly evanescent. One week politicians, people who worked in the City, and people whose job it was to report their doings would all be kept out of their beds by a financial crisis which, six months later, would be little talked of. By that time perhaps there would be . . . a corruption scandal in local government, which would then be followed by a flurry of public concern over crimes of violence, which in its turn would be pushed out of people's minds by their fury over some proposed new tax; and so it would go on. Each of these things would seem important for a time, then each would pass away and scarcely matter again except to historians. In fact, the truth is that most of them made little or no difference even to the daily lives of most of the population living through them. People immersed in this stream of ever-changing events were filling their minds with . . . ephemera and trivia, what people in electronics mean by "noise." (254)
I should note that he was a Member of Parliament for about 10 years, so he's not simply apathetic about politics by nature.

Not only do I agree with that passage as a description of current-day American politics (even though it was written in the UK in the '90s), but I find it especially silly that people get so worked up about one tax or one appropriations bill without seeming to care much about what taxes are like on the whole, or how much the country spends on different kinds of things overall. The specific bills that happen to be pending in Congress can only be validly assessed against this backdrop of broader understanding. But the media rarely gives us this information for fear of seeming to lack "objectivity" (whatever that is). And those who aren't concerned about being objective are usually too unreliable to be taken seriously. A subtle, balanced analysis of the tax structure is never going to achieve the level of interest generated by a report on the latest dumb comment by Sarah Palin (for the left) or President Obama (for the right). Magee goes on:
It is not as if there were no alternatives. Time spent listening to great music, or seeing great plays, or thinking about issues of lasting importance, was not in this category. In those cases the object of one's activities retained its interest and importance for the rest of one's life. If I spent an evening listening to Mahler's Third Symphony, that symphony was still going to matter to me in six months' time, or ten years, or thirty: it was part of my life, for always. In fact such things more often than not increased in interest and value with the passage of time. If I spent two or three months saturating myself in, let us say, recordings of Mozart's piano concertos, and then did not return to them like that for another four years or so, I would find when I came back to them that I engaged with them on a deeper level than before. And the same was true of most great art. . . .

There were times when I felt, after all, that I was living to the full in face of death. Many men of action who are also writers have described the bliss induced in them by the sound of bullets smacking past their ears, and said that it intensified their awareness of being alive to an intoxicating level. The things that came closest to doing this for me when I fully realized I was facing death were my love affairs and friendships, philosophy and the arts. Never have I reacted to these things more intensely than I did in my late thirties and early forties. It was as if Shakespeare and Mozart were addressing me personally. . . . Had it not been for my need to earn a living I would have immersed myself in them entirely. (254-5)
Although I was more than satisfied by this explanation, some would respond, "But what about political art?" His answer to this is, again, exactly how I feel:
Those that treated political, social or historical levels of explanation as fundamental now seemed to me to be treating externals and surfaces as if they were foundations, and to be superficial and point-missing. In the world as it was at that time the most conspicuous example of this was Marxism, though there were others too. Marxism had a complete explanation of the arts in terms of political power, economic interests and social classes, and this seemed to me a grotesque attempt to explain the greater in terms of the less. Not only was there a lot of Marxist criticism around at that time, there were innumerable Marx-influenced stage productions which had the effect of superficializing the works they dealt with for precisely this reason, that they treated social and political externals as fundamental, while remaining oblivious to what actually was fundamental. Arguing with people who produced or supported this kind of thing was a dislocating experience, because it seemed self-evident to them that the metaphysical, personal and interpersonal dimensions of things were of secondary importance compared with the social and political. Indeed, they often denied that there was any metaphysical dimension at all, either to reality or to works of art. (255)
Some people will respond: "But art should say something about society. It shouldn't be just meaningless fluff to make you feel good. It should disturb people and wake them up to social injustices." (Yes, I've heard all of this said.) Of course I agree that art can say important things about society and that this can be a fine thing to do. It's not that I totally dismiss this function, and I don't think Magee does. But these aren't the most important functions of art, nor are they requirements of great art.

And as for anyone who considers art "meaningless" if it doesn't contain a social critique, or if it isn't "appreciated in the social context in which it was made," I feel sorry for them for what they're missing . . .



(That's Mozart's Piano Concerto No. 20, conducted and performed by Mitsuko Uchida.)

Saturday, February 13, 2010

Paradoxical theories of language, knowledge, and the absurd

My mom points out "the paradox of 'insisting' that words have no 'fixed or stable set of meanings'":

If you really believed what you are insisting, you wouldn't be insisting, you'd be, perhaps, entertaining a suggestion or toying with a notion or musing about the possibility, now wouldn't you?
She's reacting to this description posted on a wall in the Art Institute of Chicago describing an artwork by Bruce Nauman:
Human Nature / Life Death . . . insists on language's inability to deliver a fixed or stable set of meanings, conveying a deep suspicion about what constitutes truth, especially in the public realm.
She adds a great detail:
[W]hen I voiced these thoughts (to Meade) the museum guard overheard, laughed, and nodded knowingly.
It reminds me of one of my philosophy professors from back when I attended the University of Wisconsin, Keith Yandell, who has a knack for devastatingly concise refutations of theories that contradict themselves. For instance, he defined empiricism as the theory that we can only gain knowledge through sensory experience. Then he pointed out that this theory itself is not known to be true through sensory experience.

Another example of this kind of paradox (is there a name for it?) is the problem of the absurd. It's supposed to be a profound problem that our lives are "absurd," in the philosophical sense. That is, you take your life very seriously from day to day, but you can also take a step back and wonder if the whole thing is ultimately pointless, meaningless.

Now, that problem -- the problem of the absurd -- is itself a paradox, but it's not the kind of paradox that this blog post is about. The paradox I want to focus on is one that Thomas Nagel pointed out in his wonderful book Mortal Questions. The problem of the absurd contains a couple of subtle internal contradictions. And if you grasp these contradictions, you may start to feel that the absurd is not such a problem at all -- or at least, not a deeply troubling one. Nagel explains:
[A]bsurdity is one of the most human things about us: a manifestation of our most advanced and interesting characteristics. . . .

If . . . there is no reason to believe that anything matters, then that does not matter either, and we can approach our absurd lives with irony instead of heroism or despair.

Wednesday, November 18, 2009

Why isn't there "philosophy of journalism"? Or how about journalism of philosophy?

There should be courses in "philosophy of journalism," says Professor Carlin Romano. He teaches such a course at Yale. (The article is via Arts & Letters Daily.)

Prof. Romano frames the issue this way: 

If you examine philosophy-department offerings around America, you'll find staple courses in "Philosophy of Law," "Philosophy of Art," "Philosophy of Science," "Philosophy of Religion," and a fair number of other areas that make up our world.

It makes sense. Philosophy, as the intellectual enterprise that in its noblest form inspects all areas of life and questions each practice's fundamental concepts and presumptions, should regularly look at all human activities broad and persistent enough not to be aberrations or idiosyncrasies. ...

Why, then, don't you find "Philosophy of Journalism" among those staple courses?
Listing those topics creates a sense that you could have a philosophical field to correspond to every profession, but things don't work out so neatly. "Philosophy of art" is trying to penetrate the very nature of what artists create by asking, "What is art?" I don't think "philosophy of journalism" would be about trying to define journalism or explain what journalists do, since that wouldn't be a very challenging philosophical task.

Based on Prof. Romano's description of his lesson plans, he seems to be using journalism as a platform to discuss ethics, epistemology, and political philosophy. Journalism isn't a sui generis subject of philosophical inquiry; it's a bundle of human interactions that can be analyzed philosophically within traditional branches of philosophy that have existed for centuries. (In this respect, "philosophy of religion" is closer to "philosophy of journalism" than to "philosophy of art." Trying to define "religion" may be a worthwhile exercise, but it's unlikely to be the main point of a philosophy of religion class.)

I'm actually so convinced by his argument that this kind of class is worth teaching that I don't find the article too interesting. Instead of an article about whether there should be a philosophy of journalism, I'd rather see some discussion of whether there should be journalism about philosophy.

The New York Times, for instance, regularly reports on some of the more socially important academic breakthroughs, even including some that happen to be of interest to philosophers. But I can't remember seeing the Times directly report on a philosopher's ideas -- except in an obituary. You regularly read news articles about how the latest brain experiment has revealed such-and-such. Well, that's how the news likes to present it, but the truth is rarely so clear-cut or sensational. A headline-grabbing story based on brain scans is probably going to be highly conjectural, in part because brain imaging doesn't yet have much explanatory power.

Could any philosophical insight about the brain and/or the mind be significant enough to be reported in the New York Times? I'm sure reporters would say philosophical thoughts are too abstract to count as "news" at all. But philosophers of mind should stay sufficiently up to date with the latest neurological discoveries so that their philosophizing actually is timely.

I wish we lived in a world where philosophical ideas routinely made the news. I'm not sure if the journalists or the philosophers are more to blame. Probably the philosophers.

Thursday, August 13, 2009

"We were talking about Kant's categorical imperative. And that's basically the Golden Rule, right?"

That's how my philosophy professor began class one morning.

"No," responded a student. (OK, it was me.)

"Good, you didn't fall into my trap."

Unfortunately, Errol Morris, the acclaimed documentarian, falls into the trap in his piece for the New York Times about lying — "Seven Lies About Lying."

Morris's lie-about-lying #4 is, "Lying can never be justified" — "one should always tell the truth." He correctly attributes this view to Kant. Unfortunately, he adds:

It was linked to his "categorical imperative," Kant's version of the Golden Rule. Would you like others to lie to you? Then don’t lie to others.
Calling the categorical imperative "Kant's version of the Golden Rule" is the classic mistake about Kant's ethics. Morris tries to support it with a footnote quoting Kant's Critique of Pure Reason:
"I cannot wish for a general law to establish lying be-cause no one would any longer believe me, or I should be paid in the same coin."
The key word here that refutes Morris's interpretation is "cannot." This should be taken literally: it's about whether it's possible for you to want everyone to follow this general rule, not about whether you would actually like for everyone to lie. Kant thought it's impossible for everyone to follow a rule of lying for personal gain. After all, if everyone followed that rule, no one would be able to trust anyone's statements. Thus, lies would become ineffective, since lies only work if people generally trust other people's statements. The idea of a world in which everyone follows a rule of lying for personal gain isn't merely unsavory; it's self-contradictory. Since you can't conceive of something self-contradictory, you cannot wish for a world where everyone followed the rule. Consequently, you shouldn't follow this rule; in other words, you shouldn't lie.

(That's my off-the-cuff rendition of Kant. I haven't recently read the primary sources, so it might not be perfect. If you'd like to read a more rigorous explanation — using the more traditional Kantian terminology of "universal maxims" and so on — you could try this blog post.)
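
In the same off-the-cuff spirit, the step in that argument that says "lies would become ineffective" can be put into a toy model. This sketch uses made-up numbers and only illustrates that one step, not Kant's full conceivability test: assume a lie pays off only to the extent that listeners still trust what they hear, and that trust erodes as lying spreads.

    # A toy model (hypothetical numbers) of the claim that universalized
    # lying destroys the trust that makes lying effective.

    def lie_payoff(fraction_liars: float) -> float:
        # Trust erodes in proportion to how widespread lying is.
        trust = 1.0 - fraction_liars
        base_gain = 1.0  # made-up gain from a lie that is believed
        return base_gain * trust

    for share in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"liars: {share:.0%}  payoff of lying: {lie_payoff(share):.2f}")

When everyone follows the rule (100% liars), the payoff of a lie drops to zero: a world in which everyone lies for personal gain is one in which lying for personal gain no longer works.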

In fact, the whole foundation of Kant's theory was that people should be guided by reason, not by their personal preferences. The Golden Rule — "Do to others as you would like them to do to you" — directly refers to your personal preferences. It's not surprising, then, that Kant actually criticized the Golden Rule.

The Golden Rule is more self-centered than Kantian ethics. In the standard formulation, the Rule refers to "you" twice in one short sentence: it's about what you would like to have done to yourself. Since different people have different desires about how they'd like to be treated, this implies a relativistic moral code. Taken literally, the Rule may provide wildly different advice to different people based on their idiosyncratic traits.

But I also have a deeper problem with the Golden Rule's invocation of what-you'd-like-done-to-yourself. Even if we put aside concerns about whether it's too relativistic or unstable, there's still the unanswered question of where these desires come from. Why do you want anyone — even yourself — to be treated a certain way? The Golden Rule seems to take this as a given, but the question of what people want — or should want — is hardly simple to answer. It would seem that an explanation would need to come from something beyond the Golden Rule itself. And perhaps that something is actually more fundamental to ethics.

There's another problem with equating the Golden Rule with the categorical imperative: the Golden Rule is, at least on its face, just about how to treat others. Kant saw ethics as including how you should treat yourself. (For instance, one of his most famous examples of the categorical imperative is his argument that suicide is morally impermissible. While suicide does hurt others, Kant was more concerned with the wronging of oneself.) I don't subscribe to Kant's ethical theory, but I at least give him credit for trying to address profound moral questions that the Golden Rule doesn't even touch.

Monday, June 15, 2009

Free will: the horse and the engine

I'm planning to do a few posts on the problem of free will. It's a daunting topic to even begin to talk about, partly because it touches on everything you do in your life, and also because it's one of those philosophy problems that's been debated for thousands of years with no consensus; what are the odds that your attempt to solve it is going to be convincing?

One of my professors in law school made this comment to introduce a case we were going to study that day: "When the Supreme Court granted cert, most people agreed that this was a very easy case. They just couldn't agree on whether it was an easy reversal or an easy affirmance" (the two possible outcomes). The free will problem is like that. Everyone thinks the answer is so easy and obvious it's hardly worth talking about. The problem is, people have that feeling equally strongly on both sides (or should I say on all three sides?) ... so maybe it's not so simple.

Before I get into the substance of the debate, I want to share a passage that's not explicitly about free will, though the author later connects it to free will. This story dates back to the 19th century, when a new railway had been built in Germany:

When it reached the village of a certain enlightened pastor, he took his people to where a locomotive engine was standing, and in the clearest words explained of what parts it consisted and how it worked. He was much pleased by their eager nods of intelligence as he proceeded.
But on his finishing they said: "Yes, yes, Herr Pastor, but there's a horse inside, isn't there?" ...
It is ... a great effort to think of all the parts working together to produce the simple result that the engine glides down the track. It is easy to think of a horse inside doing all the work. A horse is a familiar totality that does familiar things.*
I find it interesting that he uses this anecdote to leverage his "compatibilist" position on free will. By "compatibilist," I mean he believes we have free will, but it coexists with determinism. But you could just as well use the horse anecdote as a criticism of philosophers who aren't comfortable introducing free will into our picture of the world unless determinism remains intact. After all, for us in the scientific age, the idea of a deterministic machine is a "familiar totality that does familiar things."

* This is from an essay by R. E. Hobart (a pseudonym for Dickinson S. Miller, who I've never heard of either) called "Free Will as Involving Determination and Inconceivable Without It," from the anthology Metaphysics: The Big Questions, and originally appearing in Mind in 1934; the anecdote in turn comes from Friedrich Paulsen. I've added line breaks for readability.

Thursday, June 4, 2009

The time-management theory of appreciating art and thought

LemmusLemmus swiftly refutes Ayn Rand's philosophy in this post on his blog, The Church of Rationality. 

I've never read Ayn Rand. And now that I've read that blog post, I feel fine about missing out on her work. Is that ignorant or close-minded of me?

Well, LemmusLemmus goes on to say:

And that's it with Ayn Rand and me. Of course I could read all of her books and see whether she has addressed this rather obvious objection anywhere, but given that time is a scarce resource I prefer to spend mine on stuff that promises to be more worthwhile. The fact that pretty much everyone acts like this is the reason that most people who call someone's work overrated aren't terribly qualified to make that judgment.
This is the same point I quoted from a Metafilter commenter in the post about "simple concepts":
By the time you have paid enough attention to a work of art to know whether it was a waste of time to take seriously, it is already too late for the answer to be useful.
I also made this point in my post about "my problem with rap":
I have a finite amount of free time in my life for listening to new music. Like every other person in the world, I can't build up an encyclopedic familiarity with every music genre in existence, so the most I can do is thoroughly explore some of them while writing off others as not worth my time. That's a time-management strategy, not an objective judgment. I'm sure there's brilliant rap music that I'm missing out on. (I loved the Outkast song "Ms. Jackson" from a few years ago, for instance.) But I've heard enough from rappers about "bitches," "hos," and "niggers" to decide: my time would be better spent on music that might not make a single controversial statement about society but is challenging to the listener in more unexpected ways.
For the sake of simplicity, from now on I'll refer to this general observation as "the time-management theory of appreciating art and thought."

Tuesday, May 19, 2009

Can you give a neurological or evolutionary explanation of love without debunking the whole idea of love?

Eric Schwitzgebel talks about his feelings for his young child in this post on The Splintered Mind. He says:

[I]f an evolutionary biologist comes along and tells me: “yes, but these feelings of 'love' are really just a bunch of neurons firing—these feelings have been naturally selected for so that parents would care for offspring long enough for them to pass along their genes,” I’d shrug my shoulders or perhaps ask for more details. But this mechanistic/evolutionary explanation wouldn’t in any way undermine my love for my daughter or debunk my belief that I truly love her. Why? Because I’m a naturalist and never presumed that love wouldn’t have this type of explanation.

However, I know people who don’t feel this way about love—someone named Ashley for example. For Ashley, real love cannot just be neurons firing because it was adaptive for her ancestors to have those neurons firing. Real love must have its source in something completely unrelated to the struggle for survival and reproduction. Naturalistic explanations terrify Ashley precisely because they do undermine her belief that she truly loves her children or partner.

But would/should these explanations debunk her belief that she loves her children? . . . [W]hat, in the end, does/should Ashley think about her belief in the existence of her love—is it (a) false or (b) just in need of revision? . . .

[W]e have no agreed-upon method for determining when a belief has been explained and when it has been explained away.
That last point is hugely consequential. It's something to keep in mind when reading the latest New York Times article about researchers who have conducted some experiment that conveniently solves a philosophical problem that's been debated for centuries. Anytime I see one of those articles, I'm betting the experiment doesn't really solve the philosophy problem — even under the generous assumption that their data have been collected using the best available methodologies and reported with scrupulous impartiality.

I'm an anti-reductionist. In other words, I'm skeptical whenever someone, having described how something works, says, "And that's all there is." Even if this person's description is accurate as far as it goes, it might not have gone far enough. One kind of analysis might reveal certain truths, while other equally valid truths are accessible only through other means.

So I don't feel that the very idea of love is threatened by neuroscience or evolutionary psychology. This isn't because I'm privy to some grand theory that unifies our intuitions about love with a scientific explanation of it. But I assume that one could have such a comprehensive understanding in an ideal world.

I don't know if anyone has done so yet. I certainly haven't. But the fact that there are huge areas of life that people haven't yet fully explained doesn't make me despondent or stop me from living my life as usual.

On AskPhilosophers, someone asks:
Suppose that a neuroscientist is studying love, and she discovers that romantic infatuation is caused by high serotonin levels, while attachment is caused by oxytocin. Has she actually learned anything about love? More generally, what is the significance of discovering neural or hormonal correlates to particular human emotions or behavior?
The philosopher Peter Smith responds, taking a view similar to mine:
Compare: someone who tells us about the chemical composition of the pigments used in Botticelli’s Primavera has told us something about the painting. But again such discoveries don’t help us understand the painting in the way that matters, as a work of art, as part of the human world: understanding that requires something quite different from chemistry....

If Mercutio whispers in Romeo's ear, "It's the serotonin, old chap", will that change his feelings for Juliet? Has his love been rudely unmasked, e.g. as just a desire for cheap chemical thrills?

I don't suppose Romeo is much in the mood to be distracted by such thoughts. But, waiting for Juliet's household to get to bed so he can climb up to her balcony, he might reflect how interesting the chemistry of love must be (and one day, when he has less pressing business to attend to, he must learn more about it).... Romeo is only too glad that he is young, his chemical systems are bursting with vim and vigour, and his brain still gets awash with serotonin at the sight of a pretty girl. He is very happy, so to speak, to go with the chemical flow.

So Romeo’s feelings for Juliet aren’t changed by reflecting on their neural causes any more than my belief that there is a screen in front of me and my desire for chocolate are changed by reflecting on their causes. And he’ll think that the fact that his feelings have a “chemical composition” no more shows that they are just chemistry (in any important sense) than the fact that our scientist showed that Primavera is just a load of old chemicals! His feelings have a role and place in his life and it is that which matters about them.

I'm with Romeo on this.

Wednesday, April 8, 2009

David Brooks on moral reasoning vs. moral instincts

David Brooks's latest column claims that we're undergoing a revolution in how we think about morality. The whole column is well worth reading, though I have a lot of disagreements with it.

Here's a basic outline of his argument:

1. Philosophers have traditionally assumed that "moral thinking is mostly a matter of reason and deliberation: Think through moral problems. Find a just principle. Apply it."

2. But "[t]oday, many psychologists, cognitive scientists and even philosophers embrace a different view of morality. In this view, moral thinking is more like aesthetics. As we look around the world, we are constantly evaluating what we see. . . . Moral judgments . . . are rapid intuitive decisions and involve the emotion-processing parts of the brain. Most of us make snap moral judgments about what feels fair or not, or what feels good or not. We start doing this when we are babies, before we have language. And even as adults, we often can’t explain to ourselves why something feels wrong."

3. "The question then becomes: What shapes moral emotions in the first place? The answer has long been evolution, but in recent years there’s an increasing appreciation that evolution isn’t just about competition. It’s also about cooperation within groups. Like bees, humans have long lived or died based on their ability to divide labor, help each other and stand together in the face of common threats. Many of our moral emotions and intuitions reflect that history. We don’t just care about our individual rights, or even the rights of other individuals. We also care about loyalty, respect, traditions, religions."

4. Brooks (who's normally referred to as a conservative) says this new understanding represents "an epochal change. It challenges all sorts of traditions. It challenges the bookish way philosophy is conceived by most people. It challenges the Talmudic tradition, with its hyper-rational scrutiny of texts. It challenges the new atheists, who see themselves involved in a war of reason against faith and who have an unwarranted faith in the power of pure reason and in the purity of their own reasoning." (In a clever twist ending, Brooks explains how it should even "challenge the very scientists who study morality.")


Here's my response (these numbers do not correspond to the above numbers):

1. The column seems derivative of Malcolm Gladwell's Blink ("rapid intuitive decisions ... snap moral judgments"), and shares one of its main drawbacks. As Gladwell himself concedes, it's problematic to hinge everything on gut feelings. If those, and not deliberative reasoning, are the best guide to truth, then how can you confidently say that racism, sexism, or homophobia are immoral? After all, many people's instincts are bigoted.

Brooks seems to recognize this when he says:

There are times, often the most important moments in our lives, when in fact we do use reason to override moral intuitions.
So he's saying that our emotions and instincts have moral validity except when they don't. That's of limited help.


2. Brooks lists about 10 or 20 different values and, following the fashion among present-day intellectuals, announces with a flourish that they're all rooted in evolution. There's reason -- but also emotions! There's competition -- but also cooperation! Individuals -- community! And to make sure you remember that Brooks is a conservative, he lists "loyalty, respect, traditions, religions."

Well, if you list enough different facets of human behavior and attribute all of them to "evolution," it's almost a foregone conclusion that you can find moral goodness somewhere in evolution.

But Brooks isn't just taking nature as he finds it. Even assuming he's correct in everything he describes as evolutionary, there's also a lot of evil behavior that's easy to explain in evolutionary terms (a few examples spring to mind: theft, rape, murder, war). One way or another, he has to sift through the good and bad in order to isolate what he considers good.

How can he do that if he doesn't have some preconception of what's good?

For instance, he says:
The evolutionary approach ... leads many scientists to neglect the concept of individual responsibility and makes it hard for them to appreciate that most people struggle toward goodness, not as a means, but as an end in itself.
Now, in that sentence, he's clearly viewing morality as much more than just a bundle of "aesthetic" reactions. He has a set of fundamental concepts ("individual responsibility," "goodness . . . as an end in itself"), and he's using them to analyze what kind of behavior counts as morally good.

Isn't there a term for that approach? Isn't it called "moral philosophy"? Or "moral reasoning"?

As much as he might like to draw a clear line between his view of morality and what "philosophers" do by using "reason," he himself is doing philosophy and relying on reason.


3. His premise that "psychologists" and "cognitive scientists" have corrected our previous view of morality is highly suspect. Even if you have perfect empirical information about how people form moral views, that doesn't necessarily tell you whether the views are right or wrong. But it's hard for me to say much more about this without seeing the specific studies he's thinking of.

[UPDATE: Hilzoy at Obsidian Wings makes the same point and goes into much greater depth than I've done here. Sample: "the research Brooks cites does not show what he seems to think it does, since the question how we make moral judgments on the fly is not, and does not answer, questions about the role of reasoning in morality."]


4. Brooks predictably caricatures the "new atheists" without engaging with any of their actual arguments. As with his general attack on moral philosophy, this critique is painted with such a broad brush that the result is analogous to one of those huge paintings that's just a solid color. We're told they rely too much on "reason" -- but where exactly has their reasoning gone wrong? It's hard to imagine that Brooks has actually read Hitchens's God Is Not Great, which explains how atheists can have the "feelings of awe [and] transcendence" that Brooks describes, or Sam Harris's The End of Faith, which embraces spirituality and acknowledges that a world filled with nothing but "reason" would be a cold and barren place.


UPDATE: More critiques of Brooks's column by John Schwenkler (The American Scene), Will Wilkinson, and PZ Myers (Pharyngula). Myers says:
I strongly urge that Mr Brooks try using his cerebral cortex in addition to his brain stem and hypothalamus when writing — that's another of those areas where emotional prejudices need to be supplemented with reason and knowledge.
And here's a cartoon about it! (Via Language Log.)

IN THE COMMENTS: My dad and I try to figure out what was really going on with Brooks's column.

Thursday, March 26, 2009

The problem of evil and animal suffering

My previous post talked about attempts to solve the problem of evil by appealing to free will. As I discussed, there are lots of general problems with that approach.

But you can make the problem of evil particularly acute by focusing on animal suffering. Let's assume there's no human being around to observe the animal, so that we rule out even a theoretical possibility that a human might learn some sort of lesson. For example, before humans even existed, there were animals experiencing pain. Can you reconcile this fact with the existence of a benevolent god?

A couple of logical but unappealing possibilities spring to mind. One is: "Animals simply don't have any awareness, feelings, etc., so their suffering isn't bad -- or, rather, it doesn't even make sense to talk about them suffering, just as it doesn't make sense to talk about a rock suffering."

At the other extreme: "Animals are conscious, they have free will, and they have souls. So, if God and evil are compatible on the theory that God gave us free will (so that we could be virtuous), then the same thing applies to animals."

I don't think most people find either of these extremes plausible. Most people seem to think animals are at least minimally conscious in that they can feel pain (for instance), but aren't as robustly conscious as humans -- they don't have free will or souls. (Of course, many philosophers prefer not to talk about anyone having free will or souls, but I'm trying to approach this in Christian-ish terms because of the problem of evil's salience within Christianity.)

OK, let's put all that to the side for now. Let's assume for the sake of argument that Leibniz's best-of-all-possible-worlds theory is correct -- that is, suffering is justified in the long run by the existence of free will, because free will is a precondition for virtue, and freedom entails the freedom to cause harm. Let's also assume (since I think most people agree) that animals don't operate at such a sophisticated level: unlike humans, they aren't capable of attaining virtue by exercising free will.

Doesn't it follow that animal suffering is a greater evil than human suffering?

In a typical debate over the moral status of animals, someone on the pro-animal side will make the point: "Animals, like humans, can feel pain. That gives them moral status — even if they don't have human intelligence, humans still have a responsibility to avoid cruelty to animals when possible."

The response is then going to be: "Even if you're right that both humans and animals can feel those initial stabs of pain, that overlooks a crucial distinction. Only humans can intellectually reflect on the experience over time. We have this profound experience that animals don't have."

Those who make this latter point often seem to assume it's an argument for caring more about human beings. On the contrary, though, our ability to reflect and "build character" — take Anne Frank's poignant faith in the underlying goodness of humanity, for instance — seems to mitigate our suffering. Animals are left merely having suffered, without gaining anything from the experience.

Wednesday, March 25, 2009

Does free will solve the problem of evil?

I've been enjoying Bertrand Russell's concise refutations of influential philosophical arguments in his book History of Western Philosophy. Here's Russell's refutation of Spinoza's theory that your misfortunes only seem bad from your self-centered perspective, but cease to be problematic when seen as part of the universe as a whole:

I cannot accept this; I think that particular events are what they are, and do not become different by absorption into a whole. Each act of cruelty is eternally a part of the universe; nothing that happens later can make that act good rather than bad, or can confer perfection on the whole of which it is a part.

Now here's his refutation of Leibniz's argument that there's a benevolent God who made this "the best of all possible worlds." Leibniz said the best possible world would contain free will, so God created a world with free will, which explains why bad things happen: they're human acts of free will. There are many obvious problems with this argument — for instance, there's a lot of bad stuff in the world that's not caused by human action. But Russell's refutation is particularly clever:
A Manichaean might retort that this is the worst of all possible worlds, in which the good things that exist serve only to heighten the evils. The world, he might say, was created by a wicked demiurge [i.e. a demon], who allowed free will, which is good, in order to make sure of sin, which is bad, and of which the evil outweighs the good of free will. The demiurge, he might continue, created some virtuous men, in order that they might be punished by the wicked; for the punishment of the virtuous is so great an evil that it makes the world worse than if no good men existed.

It's a commonplace to ridicule Leibniz's view that God has ensured that we live in "the best of all possible worlds." I mean, Voltaire made fun of it in his novel Candide, so it must be wrong. I'm guessing that people will balk at the "best of all possible worlds" idea when phrased like that, but if you phrase it more gently — "Things work out for the best" — it seems hugely influential.

I agree with Russell's response to Spinoza: cruel acts aren't transformed into good by being absorbed into the whole universe. This might be why I'm generally indifferent to religion. Unlike many secularists, though, I don't believe that cruelty and suffering are "just there" and don't have any larger meaning in the grand scheme of things. I don't have any more interest in an "It's all meaningless" view than in an "It's all for the best" view. What I do believe is that even if things that happen in the world do have some kind of ultimate meaning, the suffering is still there, and it shouldn't be rationalized away.

This explains the overwhelming instinct, cutting across political lines, that torture is just wrong, period. Even those who argue for exceptions to society's general "don't torture people" rule tend to rely on scenarios where the suffering caused by torture is far outweighed by preventing others from suffering -- the classic "ticking bomb," etc. This still implies that suffering itself is the basic unit that we're looking at in making moral assessments. So people are quibbling over a very narrow exception — maybe an important exception, but not one that calls into question the fundamental "torture is bad" consensus.

And so, no one takes the position: "Hey, go ahead and torture as much as you like! It's sure to be a net plus in the end — it'll be a learning experience, or it will be a ringing affirmation of our own free will, or something." Well ... no one applies this to human beings. But it's regularly applied to God. Bizarrely, God is held to lower moral standards than humans are.

UPDATE: Church of Rationality remarks on that last sentence: "John Althouse Cohen puts in another application to Bartlett's Familiar Quotations..."

UPDATE: Continued here.

Friday, March 20, 2009

Punk epistemology: "All I know is that I don't know nothing"

Facebook friend status update, referring to the old punk band Operation Ivy:

_______ is on an Op Ivy bender.

That makes me want to listen to some "Knowledge" -- an Operation Ivy song covered here by Green Day (whose first live show ever using the name Green Day was also Operation Ivy's last show, according to Wikipedia):

Cf. Socrates in Plato's Apology:
For this fear of death is indeed the pretence of wisdom, and not real wisdom, being the appearance of knowing the unknown; since no one knows whether death, which they in their fear apprehend to be the greatest evil, may not be the greatest good. Is there not here conceit of knowledge, which is a disgraceful sort of ignorance?

And this is the point in which, as I think, I am superior to men in general, and in which I might perhaps fancy myself wiser than other men -- that whereas I know but little of the world below, I do not suppose that I know.

Back to the modern day ... don't forget about Operation Ivy's bassist, Matt Freeman, who went on to be the bassist for Rancid. The greatest punk bassist in the world (parental advisory: explicit lyrics):

Wednesday, February 25, 2009

Keeping an open mind on the mind-body problem, part 3

In my previous 2 posts on the mind-body problem (post 1, post 2), I criticized materialist philosophers -- that is, those who believe only the physical exists and thus deny the existence of any kind of mind distinct from one's physical body. As I said (quoting Thomas Nagel), one huge problem with this view is that "all materialist theories deny the reality of the mind," though they're usually not explicit about this point, possibly because very few normal people would accept their conclusion if stated plainly.

Here's Thomas Nagel's view, which I agree with:

To insist on trying to explain the mind in terms of concepts and theories that have been devised exclusively to explain nonmental phenomena is, in view of the radically distinguishing characteristics of the mental, both intellectually backward and scientifically suicidal.
Well, so far all of this has focused on the flaws with materialism. But is this just a negative point, or is there some positive, viable alternative?

I think so, but it requires accepting the fact that we probably don't have a satisfying theory yet. That's no reason to assume we'll never have such a theory. [UPDATE: I changed it from "There's" to "That's" because I realized I didn't want to make such a firm statement. Colin McGinn argues that, indeed, we'll never have a good theory.]

Here's Nagel's extended argument to this effect (this is all from chapter 2 of The View from Nowhere (1986), which is one of the best philosophy books I've ever read):

1. "The shift from the universe of Newton to the universe of Maxwell required the development of a whole new set of concepts and theories.... This was not merely the complex application, as in molecular biology, of fundamental principles already known independently. Molecular biology does not depend on new ultimate principles or concepts of physics or chemistry, like the concept of field. Electrodynamics did."

2. Even if these new, disparate concepts have been "superseded by a deeper unity,"* we wouldn't have been able to discover that "deeper unity" in the first place "if everyone had insisted that it must be possible to account for any physical phenomenon by using concepts that are adequate to explain the behavior of planets, billiard balls, gases, and liquids. An insistence on identifying the real with the mechanical would have been a hopeless obstacle to progress, since mechanics is only one form of understanding, appropriate to a certain limited though pervasive subject matter."

* Nagel suggests that this has actually happened; I don't know enough about the relevant science to have an opinion on that.

3. "The difference between mental and physical is far greater than the difference between electrical and mechanical."

4. If you believe that something can be "pervasive" but "limited," to use the words from point 2 -- and it's hard to see how anyone could deny this possibility -- then you should be open to the view that the physical isn't necessarily the only thing that's real, but rather is "only one form of understanding."

5. Given that it certainly seems like the world includes not just the physical but also the mental, "[w]e need entirely new intellectual tools, and it is precisely by reflection on what appears impossible -- like the generation of mind out of the recombination of matter -- that we will be forced to create such tools."

6. It's possible that if we go down this road and come up with a successful theory of the mind, we will not arrive at dualism, but will discover some sort of "deeper unity" of the mind and body. Nagel elaborates on this point:
In other words, if a psychological Maxwell devises a general theory of mind, he may make it possible for a psychological Einstein to follow with a theory that the mental and the physical are really the same. But this could happen only at the end of a process which began with the recognition that the mental is something completely different from the physical world as we have come to know it through a certain highly successful form of detached objective understanding. Only if the uniqueness of the mental is recognized will concepts and theories be devised especially for the purpose of understanding it. Otherwise there is a danger of futile reliance on concepts designed for other purposes, and indefinite postponement of any possibility of a unified understanding of mind and body.
I completely agree with Nagel on all this, and I try to keep it in mind anytime I read or hear overly confident materialist philosophers.