Are the footnotes distracting? Should I keep freely adding footnotes, or would you prefer to have them in parentheses, or should I just omit the footnote-like thoughts altogether?
If I do keep the footnotes, do you like them as they are now, or would they be better with Wikipedia-style links (i.e., you click the reference in the main text, which brings you down to the footnote itself, and then there's another link that bounces you back up to the main text)? Would 1, 2, 3, etc. be better than *, **, ***, etc.?
Let me know what you think in the comments.
UPDATE: I've put boomerang links on all the footnotes. Now that's more like it!*
MUCH LATER UPDATE: As you may have noticed, I mostly stopped using footnotes pretty soon after this post. I still use them occasionally, but it's more like every once in a while a post will have one footnote, unlike before, when posts would regularly have up to 5 footnotes.
* See what I mean?? [back]
Saturday, April 26, 2008
Thursday, April 24, 2008
As promised, the following is my highly biased, incomplete, and unfair list of classic moments from the 2008 presidential primaries.
With a few exceptions, I’ve generally given just a quote with a link, but no attribution or names. If you like, you can use this as a test of your campaign knowledge by trying to identify the speaker before clicking the link!
Huge thanks to the Veracifier YouTube channel set up by Talking Points Memo, where I got a lot of these clips from.
Part of me envies those who were blogging when all this stuff was going on, but another part is glad I can just whip this together in one shot, to blatantly mix metaphors.
So, in no particular order, here they are:
1. “We even know how to talk about eating fried squirrel and stuff like that, so we’re on the same wavelength.”
2. "I'm not doing hand shows today."
3. "I'm looking forward to you advising me as well. I want to gather up talent from everywhere."
[blog + video]
4. "I was tied up at the time."
5. the Romney face scrunch
[video at 4:10]
6. "Here's the thing. After a couple minutes, I was feeling kind of fired up. And I was feeling like I was ready to go."
7. "I like to help old ladies across the street. Sometimes they don't want to be helped. It's terrible!"
[video, 1:20 - 2:20] + [blog]
8. "I ... opposed Iraq from the beginning ..."
9. "I just don't want to see us fall backwards!"
10. Huckabee shows the negative ad he's not going to show
11. Russert channels an Iraqi nationalist
[video, 3:30 - 4:45]
12. "I can no more disown him than I can disown my white grandmother."
[video at 13:45]
13. "I think that my daughters should probably be treated by any admissions officer as folks who are pretty advantaged."
14. "a noun, a verb, and 9/11"
[video, first few seconds]
15. "I am honored ... I am honored ... I am absolutely honored ..."
--> "Shame on you!"
[debate] --> [speech 2 days later]
16. “Maybe 100!”
17. “Ronald Reagan changed the trajectory of America ... in a way that Bill Clinton did not.”
18. “He has said in the last week that he really liked the ideas of the Republicans over the last 10 to 15 years.”
19. Romney debates separation of church and state with radio host during commercial break, and eventually tells him off
20. “Convinced the relationship had become romantic, some of his top advisers intervened to protect the candidate from himself ....”
21. "I want to make change, but I've already MADE change!!!"
(debate in between Iowa caucuses and New Hampshire primary)
[video at 0:35]
22. “Well, that hurts my feelings.”
23. “Who let the dogs out?!”
24. bug in hair
25. "We're going to win Florida."
26. Hillary Clinton on driver’s licenses for illegal immigrants
27. "She is a monster too. That is off the record."
28. "It's not surprising that they get bitter. They cling to guns or religion or antipathy towards people who aren't like them or anti-immigrant sentiment or anti-trade sentiment, as a way to explain their frustrations."
[audio clip at 0:25]
29. "Jesse Jackson won South Carolina twice ..."
30. "I just said some things that ... didn't jibe with what I knew to be the case."
31. "Anybody gone into Whole Foods lately and see what they charge for arugula?"
32. "We would be able to totally obliterate them."
33. McCain says he's "pleased" to have endorsement of pastor who said Katrina was retribution for gay pride parade
[video, 3:05 - 4:45]
34. Hillary Clinton endorses McCain and herself for Commander-in-Chief
35. "It would be OK to repeal [Roe v. Wade]."
36. "The Bush administration's arrogant bunker mentality has been counterproductive at home and abroad."
37. "Well, I'll talk to him later!"
38. "When I was a kid, I inhaled frequently. That was the point."
39. "... [he] was doing something in the neighborhood that I won't say what he was doing but he said it in his book ..."
40. "He said, 'Katrina! Katrina!' So I then gave my answer on Katrina."
[video, 9:20 - 10:45]
41. "It's very important that we have a president who is mindful of the cruelty that is perpetrated on animals."
42. Hillary Clinton refuses to state a policy on Social Security
[video, 8:15 - 9:10]
43. Edwards says Obama and Clinton have both assured him they'll carry on his fight against poverty
[video at 5:00]
44. “Sometimes, just sometimes, there are nights like this, a night that years from now ... you'll be able to look back with pride and say that this was the moment when it all began.”
[video at 8:50]
Wednesday, April 23, 2008
I didn't set out to blog about grand themes, but that's how it's been shaping up. I can't imagine sustaining a rate of a post every couple days, always on a topic of life-or-death importance. At some point I'll either have to slow down the pace or lighten up the subject matter. But for now, here's another post on life, death, etc.
Having recently graduated from law school to embark on a career, I figured it's an opportune time for me to read up on managing one's personal finances. The consensus seems to be that Your Money or Your Life is the book to get, at least for the big picture of how your money should fit into your life. (Go here for the official "detailed summary" of the whole book.)
So I'm reading it. I'm in no position to critique the book as a whole -- I'm just a couple chapters in. But I do have a couple problems with it. Not only do the authors, Joe Dominguez and Vicki Robin (hereinafter YMOYL), set out to frame your life as negatively as possible (which seems excessive!), but they also botch some basic facts.
They try to break down the amount of time you have left to live. If you're an average 40-year-old, you have 37 years left to live, which is 329,601 hours, which they call "life energy." That's all fine with me. Moving on: "Assuming about half of your time is spent on necessary body maintenance -- sleeping, eating, eliminating, washing and exercising ..." Again, still sounds reasonable -- I have no problem with that assumption. But they go on: "... you have 164,800 hours of life energy" -- that is, only half of the rest of your life -- "remaining for such discretionary uses as:
• your relationship to yourself
• your relationship to others
• your creative expression
• your contribution to your community
• your contribution to the world
• achieving inner peace and ...
• holding down a job." (pp. 55-6)*

You see the problem, right? Now, this is from the revised edition of the book, so no one pointed this out to them in seven years. Assuming they're right that 50% of your time is spent on body maintenance, it doesn't follow that you only have 50% of your time available for things other than body maintenance. You can eat food ... while developing a relationship with someone (or yourself!). You can go for a walk or a run (which, as they say, is body maintenance because it's "exercise") ... while trying to achieve inner peace. You can think creatively in the shower. If you're lucky, you can write a song in your sleep.
The bottom line is they've implicitly eliminated "half of your time" from what counts as your real "life." I don't think that's a minor technicality! As I said, I'm still expecting to get a lot out of this book ... but I have to question whether someone who makes such a consequential calculation error is the person I should be taking advice from about how to make a budget.
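For what it's worth, the mistake is easy to see with a bit of arithmetic. Here's a toy sketch in Python (the 30% overlap figure is purely hypothetical, just for illustration; the other numbers are the book's, as quoted above):

```python
# A toy check of the YMOYL arithmetic, using the book's figures quoted above.
total_hours = 329_601          # 37 years of "life energy"
maintenance = total_hours / 2  # the book's assumption: half goes to body maintenance

# The book's move: discretionary time = total minus maintenance.
book_discretionary = total_hours - maintenance  # the book's ~164,800 hours

# But maintenance hours and discretionary hours aren't mutually exclusive:
# the same hour can count as both (eating dinner with a friend, thinking
# creatively in the shower). If some fraction of maintenance time doubles
# as discretionary time, the hours available for discretionary uses exceed
# the book's figure.
overlap_fraction = 0.3  # hypothetical: 30% of maintenance time doubles up
discretionary = (total_hours - maintenance) + maintenance * overlap_fraction

assert discretionary > book_discretionary
```

The subtraction would only be right if "body maintenance" and "discretionary uses" were mutually exclusive categories, which is exactly the assumption the examples above refute.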
This got me thinking about Thomas Nagel's essay Death, from his book Mortal Questions. He asks whether death is really so bad (assuming there's no afterlife).**
Here's Nagel's argument in a nutshell: If death is bad, it is “bad not because of any positive features but because of the desirability of what it removes.” His proof of this is that we consider it unfortunate to have a relatively short life, while no one thinks it's unfortunate to be dead for a relatively long period of time. No one would say that Haydn is less fortunate than Schubert because Haydn died in 1809 and has thus been dead for longer than Schubert, who died in 1828.
OK, so death is bad because it takes away life, and life is good. Well, if that's what's bad about death, then death is always bad, because death always takes away your chance at living more life, even if you live to be 100. (This is still Nagel's argument, not mine.) The fact that it's normal to die before you reach 100 should be no consolation: if everyone died in agony, that would plainly be bad even though it would be normal. Death at a ripe old age is really "just a more widespread tragedy" than dying young. And here's the last sentence of the essay: "If there is no limit to the amount of life that it would be good to have, then it may be that a bad end is in store for us all." That's it! No uplift!
Well, that's about the coldest, most impersonal philosophy of life (let alone death) that I've ever seen. Isn't a whole huge dimension missing from this? Life isn't just some constant that you either have more or less of. It's not just quantity -- it's quality too.*** To point out the obvious, people have different attitudes toward different stages of life. When you expect that something will probably happen (death at an old age at the end of a full life), you're well prepared to accept it. Acceptance is a reasonably good feeling, and feeling good is good. I don't think any further argument is needed to justify a feeling of acceptance (unless it's actively causing harm somehow, e.g. accepting the smell of a gas leak). It may be an arbitrary fact about the world that death at 80 is normal, whereas death at 40 is shocking. But once we take that arbitrary fact as a given, it's not arbitrary to structure our expectations around it.
But that's just what I think, and I'm just a blogger, so what I say has no credibility because it hasn't been checked. If you want to know what the experts think -- the ones who have credibility because their words have been printed on dead trees -- it's: you only have half as much potentially enjoyable life as you thought you had, and ... it will end tragically.
But it's even worse than that. I don't have the space to list all the ways YMOYL tells you your life is not as good as you thought. But suffice it to say that they seem to write off most of your time spent at work, as well as any time running errands. (I'm sure that later on in the book they'll give examples of people who enjoy their jobs, but that's certainly not the picture they've painted so far.) When you add all this up -- or I guess I should say, subtract all this down (?!) -- you really don't have much life left that's enjoyable rather than drudgery. But, as we learn from Nagel, it's all over much too quickly.
Now, aren't Nagel and YMOYL**** both making the same mistake? They both seem to be assuming that experience -- or, in other words, living -- is this fixed thing that's just sitting there, waiting to be objectively analyzed, assessed, weighed. But really, given a particular experience (say, working at your current job), you have tons of flexibility in how you experience it. No matter what you do, you can certainly choose to experience it as meaningless drudgery. (Incidentally, the next essay in Nagel's book is about whether we should see all our endeavors as meaningless.) But unless you have a really low-quality job (I'm thinking coal miner here), it's a pretty good bet that you can decide to just go ahead and be energized by your work, feel a sense of "drive" and "mission."***** OK, I'm making that sound a lot easier than it is, I know. But so far (again, just a couple chapters in), YMOYL makes it sound impossible. And if you can even think about shifting to a more positive mindset for your job, you can certainly do it for walking to the grocery store and buying food to bring home and cook dinner with (all of which the authors would apparently count as wasted time).
I may have an update later, once I've read and absorbed all of YMOYL. But it's 9 steps, and they look like they might take me a while.
* Bullet points and ellipsis in original. [back]
** In addition to excluding the possibility of an afterlife, he also excludes the concerns of people (or things) aside from the person who dies. In other words, he's just asking whether death is subjectively bad for the person who dies, not whether it's an objectively bad thing for the world as a whole. [back]
*** If I'm right about that point, then this has huge implications for a lot of controversial issues. [back]
**** Sorry for the lack of parallelism, but YMOYL is a co-authored book. What was I supposed to say -- "Nagel and Dominguez and Robin both make the same mistake"?! [back]
***** On that topic, I have to eventually read Flow. Without having read it yet, I wouldn't hesitate to give it my completely unqualified -- in both senses of the word -- recommendation. [back]
Monday, April 21, 2008
Continuing with Bertrand Russell's chapter on Descartes in The History of Western Philosophy, and also moving on to the other great rationalists Spinoza and Leibniz...
As the heading says, I've been looking for two-sentence refutations of profoundly influential ideas. This little project stems from my visceral revulsion at the debate trick in which people -- not in everyday conversation, but professors or other public experts -- respond to ideas they disagree with by saying, "Well, the problems with that are well-known, but there's no time to explain all that now." No! If you think that some position that's on the table is seriously mistaken, you should want to convince people of this as efficiently as possible. If you can't explain it right now, you can't expect anyone to believe you based on those mysterious arguments behind the curtain.
Russell is really good at avoiding this problem; here are three of his two-sentence refutations:
The word 'I' is really illegitimate; he ought to state his ultimate premise in the form 'there are thoughts.' The word 'I' is grammatically convenient but does not describe a datum.

(This is a well-worn objection. Russell may have been cribbing from William James, who wrote that we should say, in the third person, "It's thinking," just as we say, "It's raining," so that we don't make Descartes's mistake!)
I cannot accept this; I think that particular events are what they are, and do not become different by absorption into a whole. Each act of cruelty is eternally a part of the universe; nothing that happens later can make that act good rather than bad, or can confer perfection on the whole of which it is a part.
A Manichaean might retort that this is the worst of all possible worlds, in which the good things that exist serve only to heighten the evils. The world, he might say, was created by a wicked demiurge, who allowed free will, which is good, in order to make sure of sin, which is bad, and of which the evil outweighs the good of free will.

Since those two sentences give you the gist of the argument, I don't think it'd be breaking my two-sentence limit to add his next sentence as elaboration:
The demiurge, he might continue, created some virtuous men, in order that they might be punished by the wicked; for the punishment of the virtuous is so great an evil that it makes the world worse than if no good men existed.

Back to refutation #2 (cruel acts aren't transformed into good by being absorbed into the whole universe): I absolutely agree with this, and I hope it shapes my worldview. It's probably a big part of why I'm so indifferent to religion.
I do not take the view, which many secularists take, that cruelty and suffering are just there and don't have any larger meaning in the grand scheme of things. I don't have any more interest in an "It's all meaningless" view than in an "It's all for the best" view. What I think is that even if there's some sort of cosmic significance to everything in the world, the suffering is still there, and you can't rationalize it away. The fact that X hurts someone is, on the face of it, a reason to conclude: X is bad.
This explains the overwhelming instinct, cutting across political lines, that torture is just wrong, period. Even those who argue for an exception to society's general "don't torture people" rule tend to rely on scenarios where the suffering caused by torture is far outweighed by preventing others from suffering. This still implies that suffering itself is the basic unit that we're looking at in making moral assessments: we want as little of it as possible! So people are quibbling over a very narrow exception -- maybe an important exception, but not one that calls into question the fundamental "torture is bad" consensus.
And so, no one takes the position: "Hey, go ahead and torture as much as you like! What, does that make you queasy? Don't worry! It's sure to be a net plus in the end -- it'll be a learning experience, or it will be a ringing affirmation of our own free will, or something." Well ... no one applies this to human beings. But it's regularly applied to God. The fact that God is held to lower moral standards than humans are is ... interesting.
Turning to #3 from the list: It's a commonplace to ridicule Leibniz's theory that God has ensured that we live in "the best of all possible worlds." I mean, Voltaire made fun of it in his novel Candide, so it must be wrong. I have the sense that people will balk at the "best of all possible worlds" idea when phrased like that, but if you phrase it more gently, e.g. "Everything works out for the best," it's still hugely influential.
OK, so the above Leibniz and Spinoza theories are closely related. You could group both of them under "It's all for the best." That's the basic thrust. Well, there's one oddity about this kind of outlook that I don't understand:
If it is true that suffering is justified in the long run by our ability to learn from it, or because this follows from our having free will (since free will, which is a precondition for virtue, entails the freedom to hurt people) ... and if you don't believe that animals operate at such a sophisticated level ... then doesn't this mean that the uniquely human ability to remember and reflect on pain weighs in favor of treating animal pain as more of a cause for concern than human pain?
Sorry to cram so much into one sentence there. But you see what I'm getting at, right?
In just about any debate over the moral status of animals that I've ever seen, a key point is always: "Well, how about the capacity to feel pain? Isn't that morally significant, and don't we share it with animals?"
The response is then going to be: "Hold on, there's a big distinction between humans and animals. We might all -- humans and animals -- be able to feel the initial stabs of pain. But only humans can intellectually reflect on that experience later on."
Well, that really seems to mitigate the suffering of humans. Meanwhile, animals are left merely having suffered without gaining anything from it.
That's all disingenuous for me to say! Because I don't necessarily accept those premises. As I said, I don't believe in justifying human suffering through cosmic mitigating factors. I'm just saying that if you do, you should follow your view to its logical consequences.
At the risk of loading the issue: if Anne Frank's poignant faith in the underlying goodness of humanity can somehow mitigate the horror of the Holocaust, then that should decrease our concern for the mass killing of humans relative to the mass killing of animals. Of course, other factors might still support caring more about humans than animals. But to the extent that you rely on Spinoza/Leibniz-style justifications of human suffering, this weighs on the other side of the scale.
As always, please explain in the comments if I've gone horribly wrong in my thinking here.
* In my "modern philosophy" course in college, we enjoyed the part where we got to Spinoza and all of a sudden it was like we were studying a self-help book. [back]
Saturday, April 19, 2008
Another commonplace blog post, this time on Bertrand Russell's History of Western Philosophy. I think this book is the only instance of a unified, comprehensive overview of Western philosophy (as opposed to, say, lecture notes) in which one of the main chapters has to be in first person. So I figure anyone who's interested in philosophy ought to read it, even though everyone says it's not very good.
I'm starting the book at -- where else? -- page 557. Because that's the first page of the chapter on Descartes, and he was the original modern philosopher.
Since I was a philosophy major, I'm not necessarily interested in rehashing the same theories I studied in college. So I'm going to freely gloss over the serious stuff and just pick out a few tidbits (which is not to say that the tidbits won't be serious).
I love oddball anecdotes about the greats, even if they're obviously apocryphal. Case in point: Descartes crawled into a stove to warm himself up, and "his philosophy was half finished when he came out." Russell helpfully notes that many commentators are skeptical of this story -- but experts in 17th-century Bavarian houses maintain that it's entirely possible. Regardless, Descartes needed to be warmed up, unlike Socrates, who would stay out in the snow all day, thinking.
Descartes had a ridiculous demise, but it wasn't from hanging out in ovens. He put a huge amount of effort into courting Queen Christina of Sweden. (At this point, he was in his early 50s and had never married.) He went out of his way to write treatises on love and "the passions of the soul" (not his usual topics) so he could send them to her. And it seemed to work: in September 1649, she sent a ship to come get him in Holland and bring him to Stockholm so he could give her "daily lessons." The only problem was that she could only fit this into her busy schedule if the lessons were at 5 in the morning, which meant Descartes would need to get up even earlier than 5 to go to her castle. This was very hard on him, especially when the "Scandinavian winter" came around. He apparently felt compelled to go along with it, but the routine made him sick. He died in February 1650, that same winter.
But again, that's all ridiculous. Let's move on to what matters: his theory of knowledge. (Disclaimer: this turned out to be a looong post, and the rest of it might not interest you if you're not a philosophy person.) Russell says the same thing about Descartes's theory that a professor of mine said about Hume's epistemology (theory of knowledge): the destructive part is great, but no one is convinced by the positive part, where he actually attempts to explain how we know things rather than just tearing down people's previous misconceptions. I think that's understandable. It's easier to read books and say to yourself, "This guy doesn't know what he's talking about," than to know what you're talking about. If Descartes and Hume successfully showed that no one before them knew what they were talking about, but didn't themselves know what they were talking about ... well, that's good enough for me.
A common response to Descartes by contemporary readers, though, is that his failure to construct a satisfactory edifice after demolishing the one that came before him is evidence that he didn't really mean what he was saying. After all, "he starts out in his Meditations talking about how we need to be skeptical of all our beliefs and use that skepticism as the basis for rational thought, but then he completely strays from this method by just assuming that God exists and doesn't mislead us!" Actually, no, he doesn't assume God exists; he makes a logical argument that God exists. It's a bad argument, but an argument nonetheless, so you can't say he's abandoned the very idea of rational inquiry.
But he was such a genius that he must have known he was making a bad argument, and that the logical conclusion of his method of "doubt" would be the denial of God and the downfall of religion, right? Well, it's not possible to read his mind, but I'm inclined to agree with Russell that Descartes was probably just "a sincere Catholic, and wished to persuade the Church -- in its own interest as well as in his -- to be less hostile to modern science than it showed itself in the case of Galileo." Whether or not you believe that religious claims pass rational scrutiny, the fact is that a lot of geniuses have been genuinely religious, and Descartes may very well have been one of them. As Charles Taylor points out in Sources of the Self, we liberal elites in the current day tend to associate "rational" with "irreligious," but it's anachronistic to apply that assumption to a 17th-century thinker (see the bottom of page 151). Yes, his theological arguments were contrived and not up to his own intellectual standards -- but smart people can convince themselves of dumb things.
In another painful attempt to lend rational support to entrenched views, Descartes's disciple Geulincx proposed a "two clocks" solution to the mind-body problem. This was supposed to vindicate the Christian idea of free will (bad acts are the fault of humans, not God, and they deserve to be punished) by explaining away the apparent causal connection between mind and body. The idea was: the mind and body are like two clocks that each keep perfect time. It would be mistaken to think that one caused the other to strike at the right times. They're both real things unto themselves that operate independently of each other but just happen to indicate the same thing at the same time, just as your body and mind will each indicate "thirst" at the same time when you're thirsty. I love this: "There were, of course, difficulties with this theory. In the first place, it was very odd." Thanks for the insight, Russell. More substantively, the theory was supposed to reconcile the existence of mind and body with free will, but it suggests that the mind, along with the body, is enslaved to some external set of rules, just as the clocks are enslaved to time. If the mind isn't autonomous but is strictly caused by some outside force, then -- sorry, compatibilists! -- we don't have free will.
So, that's a good lesson in how not to solve the mind-body problem. Hey, I told you it's easier to knock theories down than to build up your own.
Back to Descartes's method of doubting your own mind's ability to give yourself accurate information. It always puzzled me in class when they'd illustrate this idea by invoking a twig in the water that deceptively appears to be bent. This was supposed to be the quintessential example of how unreliable our perception is. But it doesn't seem to work. You don't look at the twig and mistakenly think it has a bent shape. You look at it and think, "Oh, the water is distorting its shape -- it's probably a straight twig, or maybe a twig that's bent at a less severe angle than it looks." I guess babies might think the twig was really bent, but they'll grow out of that phase pretty quickly -- isn't that good enough?! Russell gives the example of paintings that depict things that aren't really there -- same problem.
I know: those specific examples are just bad examples, but we now know that everything is made of microscopic particles floating around in space, and we definitely don't see that with our own unaided eyes. That's true, of course, but it's telling that even that example is still not an instance of "seeing" something that isn't really there; it's just a failure to see things at a certain level. At the level at which we do see things, we do a pretty accurate job of seeing what's really there. (If you really doubt the proposition that what we see really exists, you're free to test it out by banging your head into the wall.) This isn't like a hallucination -- it's more like not being able to read fine print or see objects off in the distance. That doesn't call into question your basic faculties of perception -- it just shows that some things are more immediately available to be perceived than others.
But what about actual hallucinations, huh? Yes, of course, sometimes people trip on acid and believe things are there when they really aren't. But philosophers will make an implausible stretch from saying that some people sometimes hallucinate, to saying there's a problem about whether you're always hallucinating.* Well, in theory there is this problem. But fortunately, it's not a real problem in practice. I might be able to conceive of someone having the exact sensation I'm having now (writing in a Moleskine at Quack's, or typing it up later in my apartment), but I'm sure enough that it's real not to need to worry about this particular "problem." The sure-enough-ness comes from having spent enough time on earth interacting with things and seeing them behave according to reasonably predictable laws. You might have the occasional dream that feels real, so if we choose to focus on that part of your life, then yeah, your perceptions don't seem accurate. But once you wake up, there's not really any doubt about what's real and what's not. The world you live in has simply been around too long, looking and feeling the way it does, for you to seriously contend (as opposed to hypothesize for the sake of a philosophical argument) that it might be just a hallucination or a dream. I can grant the theoretical possibility that even this is mistaken, but file it under "not probable enough for it to dramatically affect how I think about the world."
In other words, I agree with what John Searle says in an interview in What Philosophers Think: that skepticism about the existence of the-real-world-as-we-know-it is like Zeno's Paradox: an intriguing, mind-bending puzzle that smart people will mull over but then quickly move on from, to focus on more important philosophical problems. You don't let Zeno's Paradox reshape your whole view of what philosophers do -- they're not on a mission to explain how there can be motion. But that seems to be roughly what's happened with analytic philosophy,** thanks largely to Descartes. (Thus, my philosophy professor felt the need to qualify the steps of an argument with, "Assuming you believe that tables and chairs really exist ...")
This is one problem with studying philosophy: you're constantly told that you need to see certain things as problems. But they're not "problems" like "How do we fix the health care system?" or "How do we reduce crime?" In other words, they're not things that a normal person who's completely unfamiliar with the field would perceive as problems in need of solutions.
Of course, you could find problems in other fields that wouldn't be understood on their face as problems because they're laden with jargon or esoteric concepts. If these are real problems, though, they can at least be "understood" insofar as an expert can patiently explain the goal to a layperson: "It's important for us to figure out ____ because it could help us find a cure for such-and-such a disease," or whatever it does.
Even after spending hours and hours studying the philosophy of language (to take another example), I'd be hard-pressed to make the case that it's important for anyone to devote their life to explaining how it is that we can mean things through words. If you're like 99+% of humankind, you just accept that we do this, and move on with your life. And it seems pretty clear that if there's an option -- a perfectly feasible, easy option -- of just saying, "Oh well!" and moving on with your life ... and if this isn't a mere luxury enjoyed by some of the people while other people have to worry about it, but in fact the world would be just fine if no one worried about it ... then it's just not much of a "problem" at all.
That's my anti-philosophy philosophy.
* Note: I'm merely saying they argue that there is a problem about how we can have accurate perceptions, not that they argue that "we don't really know anything." Also, I know it's foolish to attribute anything to "philosophers" in general, but I think I'm describing something that's been pretty prevalent among analytic philosophers in the past few centuries. [back]
** Again, I know I'm painting with a broad brush. For instance, Searle himself is confident we've moved on. I don't have a nuanced enough knowledge of the current trends in philosophy to be able to evaluate that. [back]
Thursday, April 17, 2008
Since I'm still in the going-through-old-journal-entries phase of getting this blog started, here are a couple vignettes from old Moleskines:
1. In a cafe in Madison -- Mom’s side of phone conversation with Chris:
I’m at a cafe. But I don’t have any coffee. I have water. I ordered … water. In a bottle that they shipped here all the way from Italy. Do you think that was worth doing?
2. In a cafe/restaurant in Austin --
I just got a free miniature apple. Thanks for brightening my day, waitress!
She told me it was only worth 20 cents, and you can't sell something for 20 cents. Maybe this is a message to seek out the little things ... you know, those ones.
Wednesday, April 16, 2008
Before I got into the journaling that led to this blog, I thought about doing a blog-as-commonplace-book. Like the idea of a typed-up diary, though, I realized that a blog version of the underlining and marginalia in my books would be too cramped and fussy. But I still want the blog to have some of the marginalia concept. So without further ado...
I've been reading Robert Wright's The Moral Animal -- about how evolution shapes human behavior. As with many books, I set it aside when I was midway through it, but I plan to finish it eventually.
And, well, it's changed how I think about people! One thought that especially made an impression on me: Explaining human behavior as the result of natural selection doesn't mean justifying the behavior. This seems so obvious to me now that it almost doesn't even seem worth pointing out, but I don't know if I had realized it before reading this book. And this is what really got me:
we're all puppets, and our best hope for even partial liberation is to try to decipher the logic of the puppeteer.

And he goes even further:
Just because natural selection created us doesn't mean we have to slavishly follow its peculiar agenda. (If anything, we might be tempted to spite it for all the ridiculous baggage it's saddled us with.)*

People tend to assume that if you want to effect social change, you need to somehow show that nature is on your side. Thus, if you're for gay rights, you need to argue that homosexuality is inborn, not a choice.
But the problem is that we don't know that. To my knowledge, we haven't solved the mystery of homosexuality. It doesn't seem to fit very well with natural selection: why haven't gays died out as a result of their distaste for procreation? Wright (a liberal and a supporter of gay rights) raises that question and admits the answer is unclear.**
The reason we respect gays isn't that they have a well-defined place in the natural order of things. We simply respect them because they're not doing anything wrong.
I wish everyone could agree to stop equating "nature" with good, and instead adopt the view that, "Look, of course the world is a terrible place. There are huge problems intrinsic to the world itself. Some of them might be fundamental defects in human nature" -- in this case, aversion to homosexuality, distrust of outside-the-mainstream behavior, etc. -- "and we should try to solve them using human ingenuity. Those solutions might just as well come from rebelling against nature or tradition rather than returning to nature or tradition."
But it's hard to make this kind of argument and win over many people. One problem is religion: if you believe that God is good and is the creator of the natural order, then the natural order must be good. Maybe that's why we're going to keep getting sidetracked by "issues" that shouldn't be issues, like "Is homosexuality a choice?"
Speaking of human tendencies that are natural but evil, I also want to highlight what Wright says about rape on pages 52-53 -- and, relatedly, what he says about tall men -- but that will have to wait till later.
On the negative side, there is one little issue that I was disappointed to see Wright failed to address.*** He makes the familiar point that male animals typically have bright colors or other features designed to attract females. This is because females are "choosier" than males when it comes to sex,**** so their preferences are more influential than males' on which traits get passed down to future generations.
But this is the opposite of what we observe in humans. Women are the ones who wear visible makeup, not men. Women have much more leeway to wear clothes with bright colors and ornate patterns. [UPDATE: This might have been too simplistic if we're talking about human beings in general rather than merely our own culture and era. See the comments. Also, this intro to one of Wright's diavlogs suggests that he himself may be an exception to the rule.] Throughout the book he explains how human traits and behavior parallel those observed in animals, but then there's this one discrepancy that seems to contradict how you'd expect the sexes to behave based on natural selection.
If men are the ones who want to have as much sex as possible (because that will maximize how many of their genes get passed on), then what's the point in women getting all dolled up?
If anyone knows the explanation for this, or has a guess, please let me know in the comments. I can't be the first person to notice this. (I tried Googling for it, but that didn't work.) Maybe it's just one of those "I'm not going to point this out because it would contradict the whole theory of this book" things. That's a big problem with books.
So, apparently this is going to be a blog with footnotes. I didn't plan that -- it just happened. I'll try to cut down on them in the future.
* This used to be a paraphrase, but I've now replaced it with the exact quote, thanks to this complete searchable text. I originally said I would delete this very footnote, but I'm going to leave it in so that I preserve this point: That's how I want to do this blog -- make it a constant work in progress in which I'm allowed to go back and revise old posts as long as I think it improves them, not being bound by the definition of a "blog" as something that's always in reverse-chronological order.
** He wrote the book in 1994, so it's possible that more recent research provides the answer. But something like this 2007 study offering various highly speculative theories suggests that not a lot of progress has been made since then. For example, one theory -- mentioned by both Wright and the linked study -- is that gays contribute to the survival of their own genes by caring for their family members. As Wright points out, it follows from this theory that we should be able to observe gay people being extraordinarily devoted to their nephews and nieces, far more so than heterosexuals. There doesn't seem to be any evidence that that's the case. (384-6)
*** Since I haven't read the whole book yet, I don't know if he addresses it in a section of the book I haven't gotten to yet. But I have reason to think that's not the case given the way the book is structured.
**** This is a huge theme of the book, and he certainly thinks it applies to humans as well as other animals.
Saturday, April 12, 2008
So, I've started a blog.
The backstory: I finally started keeping a regular journal, after years of telling myself I should but never doing it. I found it useful to type up my Moleskine notebooks later on — this made it feel more structured, and it gave me a permanent backup. So then I toyed with the idea of putting it online. (Paper is so transient — you need to put content on the internet if you want it to really endure.)
But that wouldn't work. I'd need to censor so much that's too personal or too boring for other people — the result would be just a desiccated version of a diary. On further reflection, I realized that the thing to do is to cull whatever useable material happens to be in the Moleskines and then freely add to it — links, elaboration (I usually write on my lunch break, so I'm pressed for time), etc. So that's what I'm going to do.
I want the blog to be personal. After all, it's currently invite-only, though that's likely to change as I get more comfortable with the idea of having a blog. [UPDATE: I made the blog public about a week later.] But there are a couple things that "personal" decidedly does not mean here.
Does it mean embarrassing confessions? No — as Summer Anne Burton recently pointed out in her manifesto, you don't need those to be perfectly open and honest with people. Well, then, does it mean invading other people's privacy instead? Of course not. I'm sure I'll occasionally describe interactions with other people (maybe you), but I'll try to keep these anonymous. (If I do write anything that you feel is an invasion of privacy, please let me know so I can fix it.)
Does "personal" mean a blog that's obsessively self-centered and introspective, confirming all those critics of blogs as internet-powered narcissism? I certainly hope so. I'll do my best.
One topic I'm going to try to avoid is the 2008 presidential race. Now, I am planning one post that will be a look back at the primary (there isn't a real primary left anymore), but beyond that, I don't want to be posting about "crucial moments" or "important insights" as they come up in this race. To the extent that I want to share any 2008-related links, I'll do so on Facebook or as comments on other blogs.
I don't plan to stick to any particular schedule. I certainly won't come close to my mom's record of posting every day for over four years. I'm writing this in a big Moleskine notebook over brunch at Mother's, to be typed up later. I want this to be a leisurely, contemplative blog, not a "Here's what's going on this second" blog. There's probably a greater excess of content in the world right now than at any previous point in history. We have a glut of content but a dearth of thought. I'll try to correct the balance.
And as you can see from that statement, I'm also trying to avoid the false modesty that's rampant in the blogosphere. So many blogs pitch themselves as "random babblings" or "incoherent rants." Why all the self-deprecation? Anyway, I'll have none of that here! I either want to do this well or not do it at all. Let's see how it goes . . .