Thursday, November 30, 2006


When Chomsky referred to a generalized explanation of critical theory, as in comparison to quantum mechanics or topology or whatever, I do not think he was in any way attempting to equate the two. There is a significant difference between generalization in the Chomskyan sense and your request to explain quantum mechanics to your grandmother, who, I assume, does not have a deep grasp of physics and would therefore be somewhat immune to detailed or explicit explanations (even if I were qualified to give them). I could perhaps offer a list of intermediate steps whereby anyone (your grandmother included) might obtain the knowledge necessary to understand quantum theory. That list might include such obvious items as a) read an introductory physics textbook, b) take a college class in advanced physics, c) read Einstein's special relativity paper, etc.

I can imagine your objection here taking the form that "All you are saying is that you cannot understand quantum physics until you have attained all of the knowledge necessary to understand it, and that is exactly what I am suggesting vis-a-vis Deleuze, et al." Here is where the key difference lies. I could (at least in theory) read all of the critical texts and the original works by Deleuze, and all of the commentary on them, which would be the equivalent of working through the a), b) and c) of whatever list you provided. Yet there would still be no means by which to prove that I understood it. You see, unlike physics, which relates to observable and testable hypotheses about external reality, Deleuzian philosophy, with its idiosyncratic stylization, can only be grasped from within the space of its own imaginings. There is no yardstick, like reality, against which to test my (mis)understanding of Deleuze. Perhaps some of his metaphors are particularly apt or compelling or helpful in organizing one's thoughts, but how could you ever hope to show that I do or do not understand them, any more than one could refute someone else's interpretation of the Bible or any other (self-referential) text? Ultimately, it would only ever come down to opinion and argument. Yet quantum physics can be tested, and experiments have shown time and again that it has explanatory power. There is a correct understanding and an incorrect one, and the difference is not arbitrary. That emphatically does not mean that there is a dogma or an orthodoxy about it; rather, interpretations that yield predictive results are simply better than interpretations that yield garbage. That is the key difference. And there is no equivalent basis by which to judge Deleuzian or many other postmodern texts.

Firstly, I'm not sure that I see the point of discussing serious topics with someone who, by his own admission, knows very little about them. A Thousand Plateaus is arguably Deleuze and Guattari's best-known and most deeply studied work, but, in your own words: "Actually, I have not studied Deleuze’s concept of the rhizome in depth..." and "I am partially just guessing here..." and "I am not certain what Deleuze is getting at..." and "Not clear what Deleuze’s dualism of models is..." and "I don’t happen to know the precise meaning of those concepts..." I hope you see the pattern. The rhizome is arguably the most important concept advanced by Deleuze and Guattari, and I frankly don't see how you can claim to be so familiar with Deleuze without being familiar with his usage of the rhizome. What all of this admitted guessing and uncertainty indicates to me is that you don't really know what he's talking about either. So, you'll understand if I have some reservations: you have already admitted that you know nothing about Lacan, and you have since admitted to being quite unclear on the very basics of Deleuze. It is therefore difficult to have a discussion, since I must assume that you, being by your own admission fuzzy on Deleuze even in the most fundamental sense, take his ability and so-called genius as a matter of faith rather than reasonable comprehension (and according to certain postmodern paradigms, i.e. "anti-realism", there's no reason why this is bad - I happen to disagree, but what's new about that?).

Secondly, just out of politeness, let me respond regarding this: "Indeed, it appears that I have made my case. Sokhal [sic], et. al. have failed to be scientific in their 'scientific' attack on postmodernism." Yet again, I'm afraid you're being dreadfully unclear. Sokal and friends have not by any means "failed" to be scientific, and you haven't really made your case convincingly, because you haven't established why, except by making the oh-so-tired, craven call of "Golly, you didn't understand it right, guys! (Now, I don't really understand it either, but let me tell you all about it.)" Also, remember that Sokal wasn't just attacking these gentlemen on grounds of pseudoscientific claims and extremely apparent scientific illiteracy, but also on a more "philosophic" level, namely, the extreme lack of clarity of all of the authors mentioned. Now, it has always been the recourse of the disciples of fools and sophists to rely on that aforementioned claim of "you didn't understand because you're stupid", but I'm afraid that doesn't work for very long, because it doesn't really contain any substance at all.

Frankly, I'd be happy to admit that thinkers and writers like Deleuze and Guattari are interesting, to say the least, and certainly present some striking juxtapositions of terms. But they certainly aren't scientific, by any measure, and, as a philosophy student myself, I'd argue against their being classified as philosophers, either. Where they belong is in an obscure branch of literature, as philosophists, if you will. It behooves the philosopher to be as clear as possible, not to obscure the message in a purposeful maze of pedantic imagery and semantic meaninglessness. Philosophy should not be given to flippant garrulity. It should never be given to purposeful obfuscation. Why it has taken this temporary turn is that anti-realist philosophy requires no discipline whatsoever, and that, I think, is part of its great appeal - namely, it can be given over to such meaningless little terms as "post-coherentist", etc. Deleuze and friends are nothing more than updated versions of the Greek sophists, nothing more than French nihilists, or retro-Nietzscheans. I think Sokal and Bricmont's title was especially fitting, in that fashionable nonsense is exactly what gentlemen like Deleuze and Guattari produce.

Wednesday, November 29, 2006


(N)either either (n)or or and (n)either and (n)or both and either both or (n)either are valid forms of (a)logical discourse (in the abnegatory sense, of course).

by Roger Ebert

Michelangelo Antonioni's "Blowup" opened in America two months before I became a film critic, and colored my first years on the job with its lingering influence. It was the opening salvo of the emerging "film generation," which quickly lined up outside "Bonnie and Clyde," "Weekend," "Battle of Algiers," "Easy Rider" and "Five Easy Pieces." It was the highest-grossing art film to date, was picked as the best film of 1967 by the National Society of Film Critics, and got Oscar nominations for screenplay and direction. Today, you rarely hear it mentioned.

Young audiences aren't interested any more in a movie about a "trendy" London photographer who may or may not have witnessed a murder, who lives a life of cynicism and ennui, and who ends up in a park at dawn, watching college kids play tennis with an imaginary ball. The twentysomethings who bought tickets for "Blowup" are now focused on ironic, self-referential slasher movies. Americans flew to "swinging London" in the 1960s; today's Londoners pile onto the charter jets to Orlando.

Over three days recently, I revisited "Blowup" in a shot-by-shot analysis. Freed from the hype and fashion, it emerges as a great film, if not the one we thought we were seeing at the time. This was at the 1998 Virginia Festival of American Film in Charlottesville, which had "Cool" as its theme. The festival began with the emergence of the Beat Generation and advanced through Cassavetes to "Blowup"--after which the virus of Cool leaped from its nurturing subculture into millions of willing new hosts, and has colored our society ever since, right down to and manifestly including "South Park."

Watching "Blowup" once again, I took a few minutes to acclimate myself to the loopy psychedelic colors and the tendency of the hero to use words like "fab" ("Austin Powers" brilliantly lampoons the era). Then I found the spell of the movie settling around me. Antonioni uses the materials of a suspense thriller without the payoff. He places them within a London of heartless fashion photography, groupies, bored rock audiences, languid pot parties, and a hero whose dead soul is roused briefly by a challenge to his craftsmanship.

The movie stars David Hemmings, who became a 1960s icon after this performance as Thomas, a hot young photographer with a Beatles haircut, a Rolls convertible and "birds" hammering on his studio door for a chance to pose and put out for him. The depths of his spiritual hunger are suggested in three brief scenes involving a neighbor (Sarah Miles), who lives with a painter across the way. He looks at her as if she alone could heal his soul (and may have once done so), but she's not available. He spends his days in tightly scheduled photo shoots (the model Verushka plays herself, and there's a group shoot involving grotesque mod fashions), and his nights visiting flophouses to take pictures that might provide a nice contrast in his book of fashion photography.

Thomas wanders into a park and sees, at a distance, a man and a woman. Are they struggling? Playing? Flirting? He snaps a lot of photos. The woman (Vanessa Redgrave) runs after him. She desperately wants the film back. He refuses her. She tracks him to his studio, takes off her shirt, wants to seduce him and steal the film. He sends her away with the wrong roll. Then he blows up his photos, and in the film's brilliantly edited centerpiece, he discovers that he may have photographed a murder.

Antonioni cuts back and forth between the photos and the photographer--using closer shots and larger blowups, until we see arrangements of light and shadow, dots and blurs, that may show--what? He is interrupted by two girls who have been pestering him all day, and engages in wild sex play as they roll around in crumpled backdrop paper. Then his eyes return to his blowups, he curtly sends them away, he makes more prints, and in the grainy, almost abstract blowups it appears that the woman is looking toward some bushes, there is a gunman there, and perhaps in one photo we see the man lying on the ground. Perhaps not.

Thomas returns to the park, and does actually see the man lying dead on the ground. Curiously, many writers say the photographer is not sure if he sees a body, but he is. What's unclear is whether he witnessed a murder. The audience understandably shares his interpretation of the photos, but another scenario is plausible: Redgrave wanted the photos because she was having an adulterous affair, her gray-haired lover dropped dead, she fled the park in a panic, and by the next morning his body had simply been discovered and removed. (The possibility of a scandalous affair plays off the Profumo scandal, in which a cabinet minister was linked to a call girl; the analysis of the photographs recalls the obsession with the Zapruder film.)

Whether there was a murder isn't the point. The film is about a character mired in ennui and distaste, who is roused by his photographs into something approaching passion. As Thomas moves between his darkroom and the blowups, we recognize the bliss of an artist lost in what behaviorists call the Process; he is not thinking now about money, ambition or his own nasty personality defects, but is lost in his craft. His mind, hands and imagination work in rhythmic sync. He is happy.

Later, all his gains are taken back. The body and the photographs disappear. So does Redgrave. (There is an uncanny scene where he sees her standing outside a club, and then she turns and takes a few steps and simply disappears into thin air. At Virginia, we ran the sequence a frame at a time and could not discover the method of her disappearance; presumably she steps into a doorway, but we watched her legs, and they seemed somehow to attach themselves to another body.)

In the famous final sequence, back in the park, Thomas encounters university students who were in the film's first scene. (These figures were described as "white-faced clowns" in Pauline Kael's pan of the film, but a British audience would have known they were participating in the ritual known as "rag," in which students dress up and roar around town raising money for charity.) They play tennis with an imaginary ball. The photographer pretends he can see the ball. We hear the sounds of tennis on the soundtrack. Then the photographer wanders away across the grass and, from one frame to the next, disappears--like the corpse.

Antonioni has described the disappearance of his hero as his "signature." It reminds us too of Shakespeare's Prospero, whose actors "were all spirits, and are melted into air." "Blowup" audaciously involves us in a plot that promises the solution to a mystery, and leaves us lacking even its players.

There were of course obvious reasons for the film's great initial success. It became notorious for the orgy scene involving the groupies; it was whispered that one could actually see pubic hair (this was only seven years after similar breathless rumors about Janet Leigh's breasts in "Psycho"). The decadent milieu was enormously attractive at the time. Parts of the film have flip-flopped in meaning. Much was made of the nudity in 1967, but the photographer's cruelty toward his models was not commented on; today, the sex seems tame, and what makes the audience gasp is the hero's contempt for women.

by Gerald Peary

Make one for them (a genre film with mass appeal), then one for yourself (something small and personal) is Martin Scorsese's famous credo of how to maneuver in Hollywood. His model could be Francis Ford Coppola in the early 1970s, sandwiching The Conversation (1974)--downbeat, moody, eerily atmospheric, neo-European in its sensibility--between his mammoth popular masterpiece, The Godfather (1972) and The Godfather, Part II (1974).

When The Conversation first appeared in theatres, critics assumed that Coppola was making a topical film, as they compared the story of its protagonist, Harry Caul, wire-tapper and surveillance man, to the then-daily headline sagas of the Watergate burglars. Gene Hackman's description of his subterranean character--"Uptight, right-wing, eccentric, secretive..."--was applicable to the E. Howard Hunt/G. Gordon Liddy plumbers. Couldn't Caul easily have been one of them?

In truth, Coppola was as surprised as anybody by the revelations of Woodward-Bernstein and the Watergate hearings. Resemblances to All the President's Men, though uncanny, were totally coincidental. Principal shooting on The Conversation had begun November 26, 1972, and ended in March 1973, months before the full scope of the Watergate scandal became known to the public. Coppola had been developing his story since 1967, completed a screenplay in 1969, and wished to film it, with Marlon Brando in the lead, prior to The Godfather. "I never meant it to be so relevant," he said. "I almost think the picture would have been better received if Watergate had not happened."

The germ of The Conversation was a 1966 conversation with fellow director, Irvin Kershner. Coppola recalled, "We were talking about eavesdropping and bugging, and he told me about some long-distance microphones that could overhear what people were saying." Kershner sent Coppola an article about a sound-surveillance expert named Hal Lipset. Coppola was smitten. "I was fascinated to learn that bugging was a profession, not just some private cop going out and eavesdropping with primitive equipment."

A bugger's tale resonated, more importantly, with the private Coppola, who, as a techno-obsessed child growing up on Long Island, had discovered, he said, "a tremendous sense of power in putting microphones around to hear other people." He wrote quickly, and autobiographically. Harry, like his creator a Catholic New York transplant, relocated to San Francisco, reveals in a nightmare what was Coppola's own forlorn boyhood: bed-ridden with polio. Coppola: "Somewhere along the way he must have been one of those kids who's sort of a weirdo in high school. You know, the kind of technical freak who's president of the radio club. When I was a kid I was one of those guys like I was describing."

The Conversation is a story that can aptly be described as "Kafkaesque," and guilt-ridden Harry being summoned before The Director (Robert Duvall, uncredited) feels much like a weird day in the life of The Trial's Joseph K. Also, Harry was based on the alienated, misanthropic hero of Hermann Hesse's Steppenwolf, the cult novel that was a favorite literary work of Coppola's. The last name Caul? Seemingly, it resulted from a typing error by a secretary--"Caul" for "Call"--and the screenwriter/filmmaker kept it that way.

Fittingly, Coppola cast Gene Hackman, after The French Connection, as the disappearing-in-a-crowd Harry. "Hackman is ideal for the part," Coppola waxed enthusiastic, "because he's so ordinary, so unexceptional in appearance."

During the actual filming, in downtown San Francisco, the relationship cooled between director and star. Coppola couldn't understand why Hackman seemed aloof, why (according to one source) he didn't wish to participate in lunch-time volleyball games. In turn, Hackman complained about the acute problems of becoming Harry Caul. "It's a depressing and difficult part to play because it's low-key. The minute you start having fun with it, you know you're out of character."

It was an immensely difficult on-location shoot, including the unceremonious firing at one point of cinematographer Haskell Wexler. As filming dragged on, weeks longer than originally budgeted for, an exhausted Coppola came close to a breakdown. He'd barely looked up to notice that a deadline was fast approaching from Paramount to finish the contracted script of The Godfather, Part II; and he was obliged to commit full-time to its production. What to do about The Conversation?

Bring in Walter Murch, the brilliant picture-and-sound editor. "Since I was working on The Godfather, Part II, I asked Walter to edit the film," Coppola said.

Murch was hired to make a movie out of what had been shot, much of it open-ended and confusing. With Coppola advising on some weekends, Murch went to work, in a famously creative year in the editing room. "The material wasn't paced out, it wasn't itemized in the script," Murch has said, succinctly. "Shots were shot, and I structured them."

Murch chose to foreground the obvious parallels between The Conversation and Antonioni's Blowup (1966), in which a photographer, rather than Coppola's surveillance expert, accidentally records evidence of a possible murder. In choosing to open with a camera coming slowly down on San Francisco, and with paranoia-inducing shots of blood, a toilet, a shower curtain, Murch created a mini-homage to Psycho. He reversed the order of the narrative in some scenes and, in his most controversial move (Coppola-approved), Murch re-recorded the key sentence in which two characters discuss a potential killing so that, when the scene is shown a final time, a tellingly different word is emphasized.

A cheat? "It's tricky, but we were desperate men," said Murch. "I am eighty-five percent sure that it was the right thing to do."

Coppola blamed a faulty advertising campaign, but The Conversation was a box-office disappointment. However, it won the Grand Prix at the 1974 Cannes Film Festival and received Academy Award nominations for Best Picture and Best Screenplay. It remains a strong cult favorite, and Coppola's biographer Peter Cowie asserts, "No more intense film exists in the Coppola canon."

(from American Movie Classics Magazine, Fall 2000)

Tuesday, November 28, 2006

by Brenda Austin-Smith

"I don't have anything personal," says Harry Caul (Gene Hackman), protagonist of The Conversation to his landlady, ".nothing of value, except my keys." The comment, made over the telephone rather than face-to-face, confirms Harry Caul as a character pathologically obsessed with his own privacy, even as he spends his days as a wiretapping expert invading the sonic privacy of others. The immediate cultural context of The Conversation was Watergate, the release of the Nixon tapes, and growing social anxiety over surveillance. The film's release in the wake of the most significant U.S. political scandal of the late 20th century touched a nerve with viewers and critics, who read this densely plotted tale of corporate intrigue, murder, and paranoia as a dissertation on American society in the mid-'70s. Nominated for three Academy Awards, The Conversation lost out to another Coppola film, The Godfather II, though it won the Golden Palm at Cannes.

The Conversation has been described as an "Orwellian morality play" in which the spy becomes the spied upon, and technology is used against the user. (1) In generic terms, the film is a psychological thriller that pays stylistic homage to Antonioni's Blow-Up (1966) in its use of repetition and its parsing of sounds rather than images to create ambiguity, and to Hitchcock's Psycho (1960) in its depiction of a hotel murder. It is also a political/corporate conspiracy film with a convoluted story line involving secrets, responsibility, and betrayal. In fact, the obscurity of the film's plot and illogicality of its story (how far back, for example, has the film's final betrayal been planned, and by whom?) have garnered criticism from many viewers otherwise positively disposed to its accomplishments.

Despite its structural flaws, its derivative techniques, and its rather hackneyed conspiracy theme, The Conversation transcends these limitations in its provision of a character study of haunting, if disturbing, power. Harry Caul, a character Coppola himself feared would be impossible for viewers to sympathize with, is the film's central figure, a man so obsessed with making himself unavailable to others that he has almost completely eradicated his own personality. His last name, spelled out carefully over the phone, links Harry to those born with a caul, and indeed, the film is replete with images of Harry wearing an old raincoat, behind plastic curtains, and obscured by a telephone booth. (2) Harry is a surveillance genius for whom other people's privacy is an obstacle to be overcome using equipment he builds himself. He is also a man suffering intensely from guilt: one of his previous assignments resulted in the death of an entire family. This revelation, as well as the film's depiction of Harry's Catholicism (we see him at confession, an analogue of the secular eavesdropping Harry practices), complicates his detachment from others by introducing the one element that functions as the "bug" Harry can neither disable nor escape: his own conscience.

Against his own advice to his assistant, Stanley, which is not to get involved in the lives of the people they spy on, Harry becomes engaged in the conversation he has recorded between Ann (Cindy Williams) and Mark (Frederic Forrest). A man for whom sound is corporeal ("All I want is a nice, fat recording," he says to Stanley), Harry is soon pre-occupied by the attribution of motive and meaning to bits of recorded talk. Soon he is caught in and by the very technologies he has hitherto mastered, which thematize the procedures of both filmmaking and film viewing. As Harry labours in his workshop to edit the conversation for the Director (Robert Duvall), he relies on a photo of the couple to anchor his editing of the tapes. Coaxing clarity from distorted sound (distortions produced by the radio mikes used in the film's production, and brilliantly edited by Walter Murch), Harry mixes a sound track to a series of images of Ann and Mark walking around the square, watching a homeless man on a bench, and kissing good-bye.

But Harry has not actually been witness to all of these scenes, and eventually his own desire leads him to project onto the conversation the nuances of inflection and meaning that he seems merely to uncover. Isolated, lonely, but strangely vulnerable, Harry casts himself as silent, rather than white knight in a rescue drama in which he is not only ineffectual, but also betrayed by his romantic obsession with Ann and his vestigial sense of chivalry. In his creation of a narrative of Ann's oppression, persecution, and possible death, Harry acts as a film editor, marrying image track to sound track to produce a coherent story. And like the film viewer, Harry fills in narrative gaps and ambiguities, supplementing what is visible and audible with what he believes to be the truth.

In the end, Harry's romantic delusions betray him, even as his talents as a wiretapper are challenged by a superior force associated with Martin Stett (Harrison Ford), the Director's assistant. The shock and paranoia unleashed by his betrayal result in Harry's destruction of his apartment, a stripping down of surfaces clearly associated with his own abjection. Searching for a device that remains elusive, that in some sense he embodies, Harry Caul is at the point of madness by the film's conclusion. It is a madness brought on by extreme self-consciousness, with not even his faith to sustain him (he smashes the statue of the Virgin, as if it were Ann, while searching for the bug).

Coppola was correct in his assessment of Harry Caul; he is a chilling as well as pathetic character. But as The Conversation comes to a close, the camera panning like yet another piece of detached security equipment, there may at least be a trace of pity for Harry in the viewer's gaze.

Sunday, November 26, 2006


To begin with, “Intellectual Impostures” is the French/U.K. title of “Fashionable Nonsense.”


Before I address any specific statement in your text, I’d like to say that you yourself, in the act of denying the validity of reading Lacan and others’ texts from a scientific viewpoint, have violated the entire concept of hermeneutics. You have done so by attempting to undermine the validity of an entire interpretive community (even if it be so small as merely Sokal, Bricmont, and Dawkins – and I assure you, it does not). What makes you so privileged as to assume such a dominant and egregiously exclusionary position? Is it acceptable to react against an interpretive community that undermines (or attempts to undermine) the validity of yours by, in turn, undermining it yourself? Doesn’t that violate the whole spirit of things?

“Unfortunately, the truths of the postmodern movement, as obscured by the common trash as they are, have equally been lost on Mr. Dawkins, Mr. Sokal and Mr. Bricmont.”

What exactly do you mean by the word “truth”? Do you mean a universal truth? Or do you mean an objective truth? Or do you simply mean what happens to be true right now (by which I mean, of course, “then”) at the moment of your writing the word “truth”? Perhaps by using the word “truth,” you meant “non-truth.” If so, did you mean the simultaneous absence and presence of truth? Most notably, it would seem that, at the very least, your creation and application of the word “truth” has, by the very action of your writing it, attempted to assume the dominant position in a violent hierarchy of your own creation (namely, “truth” and “non-truth,” or, for the beginner, the presence of said “truth” and/versus the absence of said “truth”). So, already, it seems you miss the “point” of the philosophy you adore.

“What the authors fail to realize is that philosophy is indeed not science, and should not be read as such...” AND “Certainly it is not true that all readings are created equal, as the extreme post-modernists would have us believe, but by this token it is by no means clear in these cases that a failure to make sense of a text is the correct reading either.”

On what basis do you, presumably as the heroic defender of postmodernism, determine the correct reading of a text? Once again, you have created a violent hierarchy: “correct reading of text” in contrast to “incorrect reading of text.” And, interestingly enough, you once again assumed the dominant role over what you perceive to be an incorrect reading of a text: 1) On what basis can you state that an incorrect reading of a text is, indeed, incorrect? 2) On what basis do you determine the correct or incorrect reading of a text other than your own tastes and/or desires?

3) To remove my tongue from my cheek for a mere moment, just saying that it’s so don’t make it so: if you make an assertion (i.e. “You don’t understand ____,” which, by the way, has largely been the extent of the postmodern response to Sokal, Bricmont, and others), please be prepared to substantiate it, not by purposefully complexifying the subject, but by simplifying it, as I apparently am so silly as to comprehend neither whit nor word of the supposed genius(es) in question. 4) Also, it is the height of hilarity to suggest that Lacan’s texts cannot be evaluated according to any objective criteria or standard while Sokal and Bricmont’s critique of Lacan is held to some supposedly “objective” standard, namely, yours. If the critique can be, why can’t the original text? 5) Lastly, if Lacan’s texts are just “poetry” and “art,” then why would he pretend to be a psychoanalyst? And if he’s a psychoanalyst, why was he writing poetry (professionally, I mean)? (And if he is a psychoanalyst, why doesn’t his method of psychoanalysis produce beneficial results in his patients?)

Lastly, before you continue your admittedly spirited defense of Lacan and others on the sole basis that Sokal and others like him “do not understand” and “misinterpret” and “fail to read correctly,” please read the post immediately below this one (it’s a sixteen page essay about Lacan from the viewpoint of a former disciple and contemporary psychoanalyst, “From Lacan to Darwin”). You may find it illuminating.

UBERMEISTER SWEDLOW, I’ll address your comment later, but you may find “From Lacan to Darwin” interesting as well.

Saturday, November 25, 2006


The essay by Dylan Evans.
by Richard Dawkins

Suppose you are an intellectual impostor with nothing to say, but with strong ambitions to succeed in academic life, collect a coterie of reverent disciples and have students around the world anoint your pages with respectful yellow highlighter. What kind of literary style would you cultivate? Not a lucid one, surely, for clarity would expose your lack of content. The chances are that you would produce something like the following:

We can clearly see that there is no bi-univocal correspondence between linear signifying links or archi-writing, depending on the author, and this multireferential, multi-dimensional machinic catalysis. The symmetry of scale, the transversality, the pathic non-discursive character of their expansion: all these dimensions remove us from the logic of the excluded middle and reinforce us in our dismissal of the ontological binarism we criticised previously.

This is a quotation from the psychoanalyst Félix Guattari, one of many fashionable French 'intellectuals' outed by Alan Sokal and Jean Bricmont in their splendid book Intellectual Impostures, previously published in French and now released in a completely rewritten and revised English edition. Guattari goes on indefinitely in this vein and offers, in the opinion of Sokal and Bricmont, "the most brilliant mélange of scientific, pseudo-scientific and philosophical jargon that we have ever encountered". Guattari's close collaborator, the late Gilles Deleuze, had a similar talent for writing:

In the first place, singularities-events correspond to heterogeneous series which are organized into a system which is neither stable nor unstable, but rather 'metastable', endowed with a potential energy wherein the differences between series are distributed... In the second place, singularities possess a process of auto-unification, always mobile and displaced to the extent that a paradoxical element traverses the series and makes them resonate, enveloping the corresponding singular points in a single aleatory point and all the emissions, all dice throws, in a single cast.

This calls to mind Peter Medawar's earlier characterization of a certain type of French intellectual style (note, in passing, the contrast offered by Medawar's own elegant and clear prose):

Style has become an object of first importance, and what a style it is! For me it has a prancing, high-stepping quality, full of self-importance; elevated indeed, but in the balletic manner, and stopping from time to time in studied attitudes, as if awaiting an outburst of applause. It has had a deplorable influence on the quality of modern thought...

Returning to attack the same targets from another angle, Medawar says:

I could quote evidence of the beginnings of a whispering campaign against the virtues of clarity. A writer on structuralism in the Times Literary Supplement has suggested that thoughts which are confused and tortuous by reason of their profundity are most appropriately expressed in prose that is deliberately unclear. What a preposterously silly idea! I am reminded of an air-raid warden in wartime Oxford who, when bright moonlight seemed to be defeating the spirit of the blackout, exhorted us to wear dark glasses. He, however, was being funny on purpose.

This is from Medawar's 1968 lecture on "Science and Literature", reprinted in Pluto's Republic (Oxford University Press, 1982). Since Medawar's time, the whispering campaign has raised its voice.

Deleuze and Guattari have written and collaborated on books described by the celebrated Michel Foucault as "among the greatest of the great... Some day, perhaps, the century will be Deleuzian." Sokal and Bricmont, however, think otherwise: "These texts contain a handful of intelligible sentences -- sometimes banal, sometimes erroneous -- and we have commented on some of them in the footnotes. For the rest, we leave it to the reader to judge."

But it's tough on the reader. No doubt there exist thoughts so profound that most of us will not understand the language in which they are expressed. And no doubt there is also language designed to be unintelligible in order to conceal an absence of honest thought. But how are we to tell the difference? What if it really takes an expert eye to detect whether the emperor has clothes? In particular, how shall we know whether the modish French 'philosophy', whose disciples and exponents have all but taken over large sections of American academic life, is genuinely profound or the vacuous rhetoric of mountebanks and charlatans?

Sokal and Bricmont are professors of physics at, respectively, New York University and the University of Louvain in Belgium. They have limited their critique to those books that have ventured to invoke concepts from physics and mathematics. Here they know what they are talking about, and their verdict is unequivocal. On Jacques Lacan, for example, whose name is revered by many in humanities departments throughout US and British universities, no doubt partly because he simulates a profound understanding of mathematics:

... although Lacan uses quite a few key words from the mathematical theory of compactness, he mixes them up arbitrarily and without the slightest regard for their meaning. His 'definition' of compactness is not just false: it is gibberish.
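For readers who want the yardstick: compactness does have a precise, standard definition in topology, and it is against this that Sokal and Bricmont measure Lacan's usage. (The statement below is the textbook definition, not anything from their book.)

```latex
X \text{ is compact} \iff
\text{for every open cover } X = \bigcup_{i \in I} U_i
\text{ there exist } i_1, \dots, i_n \in I
\text{ with } X = U_{i_1} \cup \cdots \cup U_{i_n}
```

In words: every open cover admits a finite subcover. That is the meaning Sokal and Bricmont say Lacan's 'definition' fails even to gesture at.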

They go on to quote the following remarkable piece of reasoning by Lacan:

Thus, by calculating that signification according to the algebraic method used here, namely:

S (signifier) / s (signified) = s (the statement), with S = (-1), produces: s = √-1

You don't have to be a mathematician to see that this is ridiculous. It recalls the Aldous Huxley character who proved the existence of God by dividing zero into a number, thereby deriving the infinite. In a further piece of reasoning that is entirely typical of the genre, Lacan goes on to conclude that the erectile organ

... is equivalent to the √-1 of the signification produced above, of the jouissance that it restores by the coefficient of its statement to the function of lack of signifier (-1).

We do not need the mathematical expertise of Sokal and Bricmont to assure us that the author of this stuff is a fake. Perhaps he is genuine when he speaks of non-scientific subjects? But a philosopher who is caught equating the erectile organ to the square root of minus one has, for my money, blown his credentials when it comes to things that I don't know anything about.
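As an aside, the mathematics Lacan borrows here is elementary and entirely unambiguous. A few lines (my own illustration, not anything from Sokal and Bricmont's book) show that √-1 is a perfectly well-defined object, and that dividing by zero yields not 'the infinite' but simply an error:

```python
import cmath

# The square root of minus one is well defined: it is the imaginary
# unit i, written 1j in Python.
i = cmath.sqrt(-1)
print(i)        # 1j
print(i * i)    # squares back to -1

# Division by zero, by contrast, does not "derive the infinite";
# it is simply undefined.
try:
    print(1 / 0)
except ZeroDivisionError as err:
    print("undefined:", err)
```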

The feminist 'philosopher' Luce Irigaray is another who gets whole-chapter treatment from Sokal and Bricmont. In a passage reminiscent of a notorious feminist description of Newton's Principia (a "rape manual"), Irigaray argues that E=mc2 is a "sexed equation". Why? Because "it privileges the speed of light over other speeds that are vitally necessary to us" (my emphasis of what I am rapidly coming to learn is an 'in' word). Just as typical of this school of thought is Irigaray's thesis on fluid mechanics. Fluids, you see, have been unfairly neglected. "Masculine physics" privileges rigid, solid things. Her American expositor Katherine Hayles made the mistake of re-expressing Irigaray's thoughts in (comparatively) clear language. For once, we get a reasonably unobstructed look at the emperor and, yes, he has no clothes:

The privileging of solid over fluid mechanics, and indeed the inability of science to deal with turbulent flow at all, she attributes to the association of fluidity with femininity. Whereas men have sex organs that protrude and become rigid, women have openings that leak menstrual blood and vaginal fluids... From this perspective it is no wonder that science has not been able to arrive at a successful model for turbulence. The problem of turbulent flow cannot be solved because the conceptions of fluids (and of women) have been formulated so as necessarily to leave unarticulated remainders.

You do not have to be a physicist to smell out the daffy absurdity of this kind of argument (the tone of it has become all too familiar), but it helps to have Sokal and Bricmont on hand to tell us the real reason why turbulent flow is a hard problem: the Navier-Stokes equations are difficult to solve.
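For the curious, the equations in question are the incompressible Navier-Stokes equations of standard fluid dynamics (stated here for reference, not taken from Sokal and Bricmont's text):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla \cdot \mathbf{u} = 0
```

Here u is the velocity field, p the pressure, ρ the density and ν the viscosity. The nonlinear term (u·∇)u couples motion across scales, and it is this term that makes turbulence analytically intractable; whether smooth solutions even always exist in three dimensions remains an open problem.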

In similar manner, Sokal and Bricmont expose Bruno Latour's confusion of relativity with relativism, Jean-François Lyotard's 'post-modern science', and the widespread and predictable misuses of Gödel's Theorem, quantum theory and chaos theory. The renowned Jean Baudrillard is only one of many to find chaos theory a useful tool for bamboozling readers. Once again, Sokal and Bricmont help us by analysing the tricks being played. The following sentence, "though constructed from scientific terminology, is meaningless from a scientific point of view":

Perhaps history itself has to be regarded as a chaotic formation, in which acceleration puts an end to linearity and the turbulence created by acceleration deflects history definitively from its end, just as such turbulence distances effects from their causes.

I won't quote any more, for, as Sokal and Bricmont say, Baudrillard's text "continues in a gradual crescendo of nonsense". They again call attention to "the high density of scientific and pseudo-scientific terminology -- inserted in sentences that are, as far as we can make out, devoid of meaning". Their summing up of Baudrillard could stand for any of the authors criticized here and lionized throughout America:

In summary, one finds in Baudrillard's works a profusion of scientific terms, used with total disregard for their meaning and, above all, in a context where they are manifestly irrelevant. Whether or not one interprets them as metaphors, it is hard to see what role they could play, except to give an appearance of profundity to trite observations about sociology or history. Moreover, the scientific terminology is mixed up with a non-scientific vocabulary that is employed with equal sloppiness. When all is said and done, one wonders what would be left of Baudrillard's thought if the verbal veneer covering it were stripped away.
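Since chaos theory is the borrowed prop here, it is worth seeing what the term technically means. The toy example below (my own, using the standard logistic map) demonstrates the defining property, sensitivity to initial conditions -- a precise, testable claim, which is exactly what the Baudrillard passage lacks:

```python
# Deterministic chaos in the logistic map x -> r*x*(1-x) at r = 4,
# the textbook example of sensitivity to initial conditions.
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2000001)  # perturbed by one part in two million

print("initial gap:", abs(a[0] - b[0]))
# The tiny perturbation is amplified until the two trajectories bear
# no resemblance to one another.
print("largest gap over later steps:",
      max(abs(x - y) for x, y in zip(a[25:], b[25:])))
```

The point is that "chaos" names a quantitative statement about deterministic systems, not a license to say that history "deflects definitively from its end".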

But don't the postmodernists claim only to be 'playing games'? Isn't the whole point of their philosophy that anything goes, there is no absolute truth, anything written has the same status as anything else, and no point of view is privileged? Given their own standards of relative truth, isn't it rather unfair to take them to task for fooling around with word games, and playing little jokes on readers? Perhaps, but one is then left wondering why their writings are so stupefyingly boring. Shouldn't games at least be entertaining, not po-faced, solemn and pretentious? More tellingly, if they are only joking, why do they react with such shrieks of dismay when somebody plays a joke at their expense? The genesis of Intellectual Impostures was a brilliant hoax perpetrated by Sokal, and the stunning success of his coup was not greeted with the chuckles of delight that one might have hoped for after such a feat of deconstructive game playing. Apparently, when you've become the establishment, it ceases to be funny when someone punctures the established bag of wind.

As is now rather well known, in 1996 Sokal submitted to the US journal Social Text a paper called "Transgressing the boundaries: towards a transformative hermeneutics of quantum gravity". From start to finish the paper was nonsense. It was a carefully crafted parody of postmodern metatwaddle. Sokal was inspired to do this by Paul Gross and Norman Levitt's Higher Superstition: The Academic Left and its Quarrels with Science (Johns Hopkins University Press, 1994), an important book that deserves to become as well known in Britain as it is in the United States. Hardly able to believe what he read in this book, Sokal followed up the references to postmodern literature, and found that Gross and Levitt did not exaggerate. He resolved to do something about it. In the words of the journalist Gary Kamiya:

Anyone who has spent much time wading through the pious, obscurantist, jargon-filled cant that now passes for 'advanced' thought in the humanities knew it was bound to happen sooner or later: some clever academic, armed with the not-so-secret passwords ('hermeneutics,' 'transgressive,' 'Lacanian,' 'hegemony', to name but a few) would write a completely bogus paper, submit it to an au courant journal, and have it accepted... Sokal's piece uses all the right terms. It cites all the best people. It whacks sinners (white men, the 'real world'), applauds the virtuous (women, general metaphysical lunacy)... And it is complete, unadulterated bullshit -- a fact that somehow escaped the attention of the high-powered editors of Social Text, who must now be experiencing that queasy sensation that afflicted the Trojans the morning after they pulled that nice big gift horse into their city.

Sokal's paper must have seemed a gift to the editors because this was a physicist saying all the right-on things they wanted to hear, attacking the 'post-Enlightenment hegemony' and such uncool notions as the existence of the real world. They didn't know that Sokal had also crammed his paper with egregious scientific howlers, of a kind that any referee with an undergraduate degree in physics would instantly have detected. It was sent to no such referee. The editors, Andrew Ross and others, were satisfied that its ideology conformed to their own, and were perhaps flattered by references to their own works. This ignominious piece of editing rightly earned them the 1996 Ig Nobel prize for literature.

Notwithstanding the egg all over their faces, and despite their feminist pretensions, these editors are dominant males in the academic establishment. Ross has the boorish, tenured confidence to say things like, "I am glad to be rid of English departments. I hate literature, for one thing, and English departments tend to be full of people who love literature"; and the yahooish complacency to begin a book on 'science studies' with these words: "This book is dedicated to all of the science teachers I never had. It could only have been written without them."

He and his fellow 'cultural studies' and 'science studies' barons are not harmless eccentrics at third-rate state colleges. Many of them have tenured professorships at some of the best universities in the United States. Men of this kind sit on appointment committees, wielding power over young academics who might secretly aspire to an honest academic career in literary studies or, say, anthropology. I know -- because many of them have told me -- that there are sincere scholars out there who would speak out if they dared, but who are intimidated into silence. To them, Sokal will appear as a hero, and nobody with a sense of humour or a sense of justice will disagree. It helps, by the way, although it is strictly irrelevant, that his own left-wing credentials are impeccable.

In a detailed post-mortem of his famous hoax, submitted to Social Text but predictably rejected by them and published elsewhere, Sokal notes that, in addition to numerous half-truths, falsehoods and non sequiturs, his original article contained some "syntactically correct sentences that have no meaning whatsoever". He regrets that there were not more of these: "I tried hard to produce them, but I found that, save for rare bursts of inspiration, I just didn't have the knack." If he were writing his parody today, he would surely be helped by a virtuoso piece of computer programming by Andrew Bulhak of Melbourne, Australia: the Postmodernism Generator. Every time you visit it, it will spontaneously generate for you, using faultless grammatical principles, a spanking new postmodern discourse, never before seen.
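The trick behind such a generator is simple: recursive expansion of a phrase grammar, so the output is always grammatical while the content is random. The sketch below is my own miniature illustration of the idea (Bulhak's actual program is built on his Dada Engine, with a far larger rule set):

```python
import random

# A toy context-free grammar: each symbol maps to a list of possible
# productions; anything not in the table is a terminal. The rules are
# invented for illustration, not Bulhak's.
GRAMMAR = {
    "SENTENCE": [["If one examines", "NOUN_PHRASE",
                  ", one is faced with a choice: either accept",
                  "NOUN_PHRASE", "or conclude that", "CLAIM", "."]],
    "NOUN_PHRASE": [["ADJECTIVE", "NOUN"]],
    "ADJECTIVE": [["neotextual"], ["subtextual"], ["dialectic"], ["postcapitalist"]],
    "NOUN": [["materialism"], ["discourse"], ["narrativity"], ["desituationism"]],
    "CLAIM": [["society has objective value"],
              ["reality comes from the collective unconscious"],
              ["truth is a legal fiction"]],
}

def expand(symbol):
    """Recursively expand a symbol into a list of word fragments."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(expand(part))
    return words

def generate():
    """Produce one grammatical, contentless sentence."""
    return " ".join(expand("SENTENCE")).replace(" ,", ",").replace(" .", ".")

print(generate())
```

Because the grammar is recursive and the choices are random, every run yields a fresh, syntactically impeccable sentence with nothing behind it -- which is precisely the point.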

I have just been there, and it produced for me a 6,000-word article called "Capitalist theory and the subtextual paradigm of context" by "David I. L. Werther and Rudolf du Garbandier of the Department of English, Cambridge University" (poetic justice there, for it was Cambridge that saw fit to give Jacques Derrida an honorary degree). Here is a typical passage from this impressively erudite work:

If one examines capitalist theory, one is faced with a choice: either reject neotextual materialism or conclude that society has objective value. If dialectic desituationism holds, we have to choose between Habermasian discourse and the subtextual paradigm of context. It could be said that the subject is contextualised into a textual nationalism that includes truth as a reality. In a sense, the premise of the subtextual paradigm of context states that reality comes from the collective unconscious.

Visit the Postmodernism Generator. It is a literally infinite source of randomly generated, syntactically correct nonsense, distinguishable from the real thing only in being more fun to read. You could generate thousands of papers per day, each one unique and ready for publication, complete with numbered endnotes. Manuscripts should be submitted to the 'Editorial Collective' of Social Text, double-spaced and in triplicate.

As for the harder task of reclaiming US literary departments for genuine scholars, Sokal and Bricmont have joined Gross and Levitt in giving a friendly and sympathetic lead from the world of science. We must hope that it will be followed.

...As for the "deconstruction" that is carried out (also mentioned in the debate), I can't comment, because most of it seems to me gibberish. But if this is just another sign of my incapacity to recognize profundities, the course to follow is clear: just restate the results to me in plain words that I can understand, and show why they are different from, or better than, what others had been doing long before and and have continued to do since without three-syllable words, incoherent sentences, inflated rhetoric that (to me, at least) is largely meaningless, etc. That will cure my deficiencies --- of course, if they are curable; maybe they aren't, a possibility to which I'll return.

These are very easy requests to fulfill, if there is any basis to the claims put forth with such fervor and indignation. But instead of trying to provide an answer to these simple requests, the response is cries of anger: to raise these questions shows "elitism," "anti-intellectualism," and other crimes --- though apparently it is not "elitist" to stay within the self- and mutual-admiration societies of intellectuals who talk only to one another and (to my knowledge) don't enter into the kind of world in which I'd prefer to live. As for that world, I can reel off my speaking and writing schedule to illustrate what I mean, though I presume that most people in this discussion know, or can easily find out; and somehow I never find the "theoreticians" there, nor do I go to their conferences and parties. In short, we seem to inhabit quite different worlds, and I find it hard to see why mine is "elitist," not theirs. The opposite seems to be transparently the case, though I won't amplify.

To add another facet, I am absolutely deluged with requests to speak and can't possibly accept a fraction of the invitations I'd like to, so I suggest other people. But oddly, I never suggest those who propound "theories" and "philosophy," nor do I come across them, or for that matter rarely even their names, in my own (fairly extensive) experience with popular and activist groups and organizations, general community, college, church, union, etc., audiences here and abroad, third world women, refugees, etc.; I can easily give examples. Why, I wonder.

The whole debate, then, is an odd one. On one side, angry charges and denunciations, on the other, the request for some evidence and argument to support them, to which the response is more angry charges --- but, strikingly, no evidence or argument. Again, one is led to ask why.

It's entirely possible that I'm simply missing something, or that I just lack the intellectual capacity to understand the profundities that have been unearthed in the past 20 years or so by Paris intellectuals and their followers. I'm perfectly open-minded about it, and have been for years, when similar charges have been made -- but without any answer to my questions. Again, they are simple and should be easy to answer, if there is an answer: if I'm missing something, then show me what it is, in terms I can understand. Of course, if it's all beyond my comprehension, which is possible, then I'm just a lost cause, and will be compelled to keep to things I do seem to be able to understand, and keep to association with the kinds of people who also seem to be interested in them and seem to understand them (which I'm perfectly happy to do, having no interest, now or ever, in the sectors of the intellectual culture that engage in these things, but apparently little else).

Since no one has succeeded in showing me what I'm missing, we're left with the second option: I'm just incapable of understanding. I'm certainly willing to grant that it may be true, though I'm afraid I'll have to remain suspicious, for what seem good reasons. There are lots of things I don't understand -- say, the latest debates over whether neutrinos have mass or the way that Fermat's last theorem was (apparently) proven recently. But from 50 years in this game, I have learned two things: (1) I can ask friends who work in these areas to explain it to me at a level that I can understand, and they can do so, without particular difficulty; (2) if I'm interested, I can proceed to learn more so that I will come to understand it. Now Derrida, Lacan, Lyotard, Kristeva, etc. --- even Foucault, whom I knew and liked, and who was somewhat different from the rest --- write things that I also don't understand, but (1) and (2) don't hold: no one who says they do understand can explain it to me and I haven't a clue as to how to proceed to overcome my failures. That leaves one of two possibilities: (a) some new advance in intellectual life has been made, perhaps some sudden genetic mutation, which has created a form of "theory" that is beyond quantum theory, topology, etc., in depth and profundity; or (b) ... I won't spell it out.

Again, I've lived for 50 years in these worlds, have done a fair amount of work of my own in fields called "philosophy" and "science," as well as intellectual history, and have a fair amount of personal acquaintance with the intellectual culture in the sciences, humanities, social sciences, and the arts. That has left me with my own conclusions about intellectual life, which I won't spell out. But for others, I would simply suggest that you ask those who tell you about the wonders of "theory" and "philosophy" to justify their claims --- to do what people in physics, math, biology, linguistics, and other fields are happy to do when someone asks them, seriously, what are the principles of their theories, on what evidence are they based, what do they explain that wasn't already obvious, etc. These are fair requests for anyone to make. If they can't be met, then I'd suggest recourse to Hume's advice in similar circumstances: to the flames.

Specific comment. Phetland asked who I'm referring to when I speak of "Paris school" and "postmodernist cults": the above is a sample.

He then asks, reasonably, why I am "dismissive" of it. Take, say, Derrida. Let me begin by saying that I dislike making the kind of comments that follow without providing evidence, but I doubt that participants want a close analysis of de Saussure, say, in this forum, and I know that I'm not going to undertake it. I wouldn't say this if I hadn't been explicitly asked for my opinion --- and if asked to back it up, I'm going to respond that I don't think it merits the time to do so.

So take Derrida, one of the grand old men. I thought I ought to at least be able to understand his Grammatology, so tried to read it. I could make out some of it, for example, the critical analysis of classical texts that I knew very well and had written about years before. I found the scholarship appalling, based on pathetic misreading; and the argument, such as it was, failed to come close to the kinds of standards I've been familiar with since virtually childhood. Well, maybe I missed something: could be, but suspicions remain, as noted. Again, sorry to make unsupported comments, but I was asked, and therefore am answering.

Some of the people in these cults (which is what they look like to me) I've met: Foucault (we even had a several-hour discussion, which is in print, and spent quite a few hours in very pleasant conversation, on real issues, and using language that was perfectly comprehensible --- he speaking French, me English); Lacan (who I met several times and considered an amusing and perfectly self-conscious charlatan, though his earlier work, pre-cult, was sensible and I've discussed it in print); Kristeva (who I met only briefly during the period when she was a fervent Maoist); and others. Many of them I haven't met, because I am very remote from these circles, by choice, preferring quite different and far broader ones --- the kinds where I give talks, have interviews, take part in activities, write dozens of long letters every week, etc. I've dipped into what they write out of curiosity, but not very far, for reasons already mentioned: what I find is extremely pretentious, but on examination, a lot of it is simply illiterate, based on extraordinary misreading of texts that I know well (sometimes, that I have written), argument that is appalling in its casual lack of elementary self-criticism, lots of statements that are trivial (though dressed up in complicated verbiage) or false; and a good deal of plain gibberish. When I proceed as I do in other areas where I do not understand, I run into the problems mentioned in connection with (1) and (2) above. So that's who I'm referring to, and why I don't proceed very far. I can list a lot more names if it's not obvious.

For those interested in a literary depiction that reflects pretty much the same perceptions (but from the inside), I'd suggest David Lodge. Pretty much on target, as far as I can judge.

Phetland also found it "particularly puzzling" that I am so "curtly dismissive" of these intellectual circles while I spend a lot of time "exposing the posturing and obfuscation of the New York Times." So "why not give these guys the same treatment." Fair question. There are also simple answers. What appears in the work I do address (NYT, journals of opinion, much of scholarship, etc.) is simply written in intelligible prose and has a great impact on the world, establishing the doctrinal framework within which thought and expression are supposed to be contained, and largely are, in successful doctrinal systems such as ours. That has a huge impact on what happens to suffering people throughout the world, the ones who concern me, as distinct from those who live in the world that Lodge depicts (accurately, I think). So this work should be dealt with seriously, at least if one cares about ordinary people and their problems. The work to which Phetland refers has none of these characteristics, as far as I'm aware. It certainly has none of the impact, since it is addressed only to other intellectuals in the same circles. Furthermore, there is no effort that I am aware of to make it intelligible to the great mass of the population (say, to the people I'm constantly speaking to, meeting with, and writing letters to, and have in mind when I write, and who seem to understand what I say without any particular difficulty, though they generally seem to have the same cognitive disability I do when facing the postmodern cults). And I'm also aware of no effort to show how it applies to anything in the world in the sense I mentioned earlier: grounding conclusions that weren't already obvious. Since I don't happen to be much interested in the ways that intellectuals inflate their reputations, gain privilege and prestige, and disengage themselves from actual participation in popular struggle, I don't spend any time on it.

Phetland suggests starting with Foucault --- who, as I've written repeatedly, is somewhat apart from the others, for two reasons: I find at least some of what he writes intelligible, though generally not very interesting; second, he was not personally disengaged and did not restrict himself to interactions with others within the same highly privileged elite circles. Phetland then does exactly what I requested: he gives some illustrations of why he thinks Foucault's work is important. That's exactly the right way to proceed, and I think it helps understand why I take such a "dismissive" attitude towards all of this --- in fact, pay no attention to it.

What Phetland describes, accurately I'm sure, seems to me unimportant, because everyone always knew it --- apart from details of social and intellectual history, and about these, I'd suggest caution: some of these are areas I happen to have worked on fairly extensively myself, and I know that Foucault's scholarship is just not trustworthy here, so I don't trust it, without independent investigation, in areas that I don't know --- this comes up a bit in the discussion from 1972 that is in print. I think there is much better scholarship on the 17th and 18th century, and I keep to that, and my own research. But let's put aside the other historical work, and turn to the "theoretical constructs" and the explanations: that there has been "a great change from harsh mechanisms of repression to more subtle mechanisms by which people come to do" what the powerful want, even enthusiastically. That's true enough, in fact, utter truism. If that's a "theory," then all the criticisms of me are wrong: I have a "theory" too, since I've been saying exactly that for years, and also giving the reasons and historical background, but without describing it as a theory (because it merits no such term), and without obfuscatory rhetoric (because it's so simple-minded), and without claiming that it is new (because it's a truism). It's been fully recognized for a long time that as the power to control and coerce has declined, it's more necessary to resort to what practitioners in the PR industry early in this century -- who understood all of this well -- called "controlling the public mind." The reasons, as observed by Hume in the 18th century, are that "the implicit submission with which men resign their own sentiments and passions to those of their rulers" relies ultimately on control of opinion and attitudes. Why these truisms should suddenly become "a theory" or "philosophy," others will have to explain; Hume would have laughed.

Some of Foucault's particular examples (say, about 18th century techniques of punishment) look interesting, and worth investigating as to their accuracy. But the "theory" is merely an extremely complex and inflated restatement of what many others have put very simply, and without any pretense that anything deep is involved. There's nothing in what Phetland describes that I haven't been writing about myself for 35 years, also giving plenty of documentation to show that it was always obvious, and indeed hardly departs from truism. What's interesting about these trivialities is not the principle, which is transparent, but the demonstration of how it works itself out in specific detail to cases that are important to people: like intervention and aggression, exploitation and terror, "free market" scams, and so on. That I don't find in Foucault, though I find plenty of it by people who seem to be able to write sentences I can understand and who aren't placed in the intellectual firmament as "theoreticians."

To make myself clear, Phetland is doing exactly the right thing: presenting what he sees as "important insights and theoretical constructs" that he finds in Foucault. My problem is that the "insights" seem to me familiar and there are no "theoretical constructs," except in that simple and familiar ideas have been dressed up in complicated and pretentious rhetoric. Phetland asks whether I think this is "wrong, useless, or posturing." No. The historical parts look interesting sometimes, though they have to be treated with caution and independent verification is even more worth undertaking than it usually is. The parts that restate what has long been obvious and put in much simpler terms are not "useless," but indeed useful, which is why I and others have always made the very same points. As to "posturing," a lot of it is that, in my opinion, though I don't particularly blame Foucault for it: it's such a deeply rooted part of the corrupt intellectual culture of Paris that he fell into it pretty naturally, though to his credit, he distanced himself from it. As for the "corruption" of this culture particularly since World War II, that's another topic, which I've discussed elsewhere and won't go into here. Frankly, I don't see why people in this forum should be much interested, just as I am not. There are more important things to do, in my opinion, than to inquire into the traits of elite intellectuals engaged in various careerist and other pursuits in their narrow and (to me, at least) pretty uninteresting circles. That's a broad brush, and I stress again that it is unfair to make such comments without proving them: but I've been asked, and have answered the only specific point that I find raised. When asked about my general opinion, I can only give it, or if something more specific is posed, address that. I'm not going to undertake an essay on topics that don't interest me.

Unless someone can answer the simple questions that immediately arise in the mind of any reasonable person when claims about "theory" and "philosophy" are raised, I'll keep to work that seems to me sensible and enlightening, and to people who are interested in understanding and changing the world.

Johnb made the point that "plain language is not enough when the frame of reference is not available to the listener"; correct and important. But the right reaction is not to resort to obscure and needlessly complex verbiage and posturing about non-existent "theories." Rather, it is to ask the listener to question the frame of reference that he/she is accepting, and to suggest alternatives that might be considered, all in plain language. I've never found that a problem when I speak to people lacking much or sometimes any formal education, though it's true that it tends to become harder as you move up the educational ladder, so that indoctrination is much deeper, and the self-selection for obedience that is a good part of elite education has taken its toll. Johnb says that outside of circles like this forum, "to the rest of the country, he's incomprehensible" ("he" being me). That's absolutely counter to my rather ample experience, with all sorts of audiences. Rather, my experience is what I just described. The incomprehensibility roughly corresponds to the educational level. Take, say, talk radio. I'm on a fair amount, and it's usually pretty easy to guess from accents, etc., what kind of audience it is. I've repeatedly found that when the audience is mostly poor and less educated, I can skip lots of the background and "frame of reference" issues because it's already obvious and taken for granted by everyone, and can proceed to matters that occupy all of us. With more educated audiences, that's much harder; it's necessary to disentangle lots of ideological constructions.

It's certainly true that lots of people can't read the books I write. That's not because the ideas or language are complicated --- we have no problems in informal discussion on exactly the same points, and even in the same words. The reasons are different, maybe partly the fault of my writing style, partly the result of the need (which I feel, at least) to present pretty heavy documentation, which makes it tough reading. For these reasons, a number of people have taken pretty much the same material, often the very same words, and put them in pamphlet form and the like. No one seems to have much problem --- though again, reviewers in the Times Literary Supplement or professional academic journals don't have a clue as to what it's about, quite commonly; sometimes it's pretty comical.

A final point, something I've written about elsewhere (e.g., in a discussion in Z papers, and the last chapter of Year 501). There has been a striking change in the behavior of the intellectual class in recent years. The left intellectuals who 60 years ago would have been teaching in working class schools, writing books like "mathematics for the millions" (which made mathematics intelligible to millions of people), participating in and speaking for popular organizations, etc., are now largely disengaged from such activities, and although quick to tell us that they are far more radical than thou, are not to be found, it seems, when there is such an obvious and growing need and even explicit request for the work they could do out there in the world of people with live problems and concerns. That's not a small problem. This country, right now, is in a very strange and ominous state. People are frightened, angry, disillusioned, skeptical, confused. That's an organizer's dream, as I once heard Mike say. It's also fertile ground for demagogues and fanatics, who can (and in fact already do) rally substantial popular support with messages that are not unfamiliar from their predecessors in somewhat similar circumstances. We know where it has led in the past; it could again. There's a huge gap that once was at least partially filled by left intellectuals willing to engage with the general public and their problems. It has ominous implications, in my opinion.

End of Reply, and (to be frank) of my personal interest in the matter, unless the obvious questions are answered.

Friday, November 24, 2006


There was a time when scientists looked askance at attempts to make their work widely intelligible. But, in the world of the present day, such an attitude is no longer possible. The discoveries of modern science have put into the hands of governments unprecedented powers both for good and for evil. Unless the statesmen who wield these powers have at least an elementary understanding of their nature, it is scarcely likely that they will use them wisely. And, in democratic countries, it is not only statesmen, but the general public, to whom some degree of scientific understanding is necessary.

To insure wide diffusion of such understanding is by no means easy. Those who can act effectively as liaison officers between technical scientists and the public perform a work which is necessary, not only for human welfare, but even for bare survival of the human race. I think that a great deal more ought to be done in this direction in the education of those who do not intend to become scientific specialists. The Kalinga Prize is doing a great public service in encouraging those who attempt this difficult task.

In my own country, and to a lesser degree in other countries of the West, "culture" is viewed mainly, by an unfortunate impoverishment of the Renaissance tradition, as something concerned primarily with literature, history and art. A man is not considered uneducated if he knows nothing of the contributions of Galileo, Descartes and their successors. I am convinced that all higher education should involve a course in the history of science from the seventeenth century to the present day and a survey of modern scientific knowledge in so far as this can be conveyed without technicalities. While such knowledge remains confined to specialists, it is scarcely possible nowadays for nations to conduct their affairs with wisdom.

There are two very different ways of estimating any human achievement: you may estimate it by what you consider its intrinsic excellence; or you may estimate it by its causal efficiency in transforming human life and human institutions. I am not suggesting that one of these ways of estimating is preferable to the other. I am only concerned to point out that they give very different scales of importance. If Homer and Aeschylus had not existed, if Dante and Shakespeare had not written a line, if Bach and Beethoven had been silent, the daily life of most people in the present day would have been much what it is. But if Pythagoras and Galileo and James Watt had not existed, the daily life, not only of Western Europeans and Americans but of Indian, Russian and Chinese peasants, would be profoundly different from what it is. And these profound changes are still only beginning. They must affect the future even more than they have already affected the present.

At present, scientific technique advances like an army of tanks that have lost their drivers, blindly, ruthlessly, without goal or purpose. This is largely because the men who are concerned with human values and with making life worthy to be lived, are still living in imagination in the old pre-industrial world, the world that has been made familiar and comfortable by the literature of Greece and the pre-industrial achievements of the poets and artists and composers whose work we rightly admire.

The separation of science from "culture" is a modern phenomenon. Plato and Aristotle had a profound respect for what was known of science in their day. The Renaissance was as much concerned with the revival of science as with art and literature. Leonardo da Vinci devoted more of his energies to science than to painting. The Renaissance artists developed the geometrical theory of perspective. Throughout the eighteenth century a very great deal was done to diffuse understanding of the work of Newton and his contemporaries. But, from the early nineteenth century onwards, scientific concepts and scientific methods became increasingly abstruse and the attempt to make them generally intelligible came more and more to be regarded as hopeless. The modern theory and practice of nuclear physicists has made evident with dramatic suddenness that complete ignorance of the world of science is no longer compatible with survival.

The above is the text of an address delivered by Bertrand Russell on January 28, 1958 at UNESCO House, Paris, on receiving the Kalinga Prize for the Popularization of Science, and published in the UNESCO Courier, v.11, no.1 (Feb. 1958), 4.
by Bertrand Russell

The eminent theologian Dr. Thaddeus dreamt that he died and pursued his course toward heaven. His studies had prepared him and he had no difficulty in finding the way. He knocked at the door of heaven, and was met with a closer scrutiny than he expected. "I ask admission," he said, "because I was a good man and devoted my life to the glory of God." "Man?" said the janitor, "What is that? And how could such a funny creature as you do anything to promote the glory of God?" Dr. Thaddeus was astonished. "You surely cannot be ignorant of man. You must be aware that man is the supreme work of the Creator." "As to that," said the janitor, "I am sorry to hurt your feelings, but what you're saying is news to me. I doubt if anybody up here has ever heard of this thing you call 'man.' However, since you seem distressed, you shall have a chance of consulting our librarian."

The librarian, a globular being with a thousand eyes and one mouth, bent some of his eyes upon Dr. Thaddeus. "What is this?" he asked the janitor. "This," replied the janitor, "says that it is a member of a species called 'man,' which lives in a place called 'Earth.' It has some odd notion that the Creator takes a special interest in this place and this species. I thought perhaps you could enlighten it." "Well," said the librarian kindly to the theologian, "perhaps you can tell me where this place is that you call 'Earth.'" "Oh," said the theologian, "it's part of the Solar System." "And what is the Solar System?" asked the librarian. "Oh," said the theologian, somewhat disconcerted, "my province was Sacred Knowledge, but the question that you are asking belongs to profane knowledge. However, I have learnt enough from my astronomical friends to be able to tell you that the Solar System is part of the Milky Way." "And what is the Milky Way?" asked the librarian. "Oh, the Milky Way is one of the Galaxies, of which, I am told, there are some hundred million." "Well, well," said the librarian, "you could hardly expect me to remember one out of so many. But I do remember to have heard the word 'galaxy' before. In fact, I believe that one of our sub-librarians specializes in galaxies. Let us send for him and see whether he can help."

After no very long time, the galactic sub-librarian made his appearance. In shape, he was a dodecahedron. It was clear that at one time his surface had been bright, but the dust of the shelves had rendered him dim and opaque. The librarian explained to him that Dr. Thaddeus, in endeavoring to account for his origin, had mentioned galaxies, and it was hoped that information could be obtained from the galactic section of the library. "Well," said the sub-librarian, "I suppose it might become possible in time, but as there are a hundred million galaxies, and each has a volume to itself, it takes some time to find any particular volume. Which is it that this odd molecule desires?" "It is the one called 'The Milky Way,'" Dr. Thaddeus falteringly replied. "All right," said the sub-librarian, "I will find it if I can."

Some three weeks later, he returned, explaining that the extraordinarily efficient card index in the galactic section of the library had enabled him to locate the galaxy as number QX 321,762. "We have employed," he said, "all the five thousand clerks in the galactic section on this search. Perhaps you would like to see the clerk who is specially concerned with the galaxy in question?" The clerk was sent for and turned out to be an octahedron with an eye in each face and a mouth in one of them. He was surprised and dazed to find himself in such a glittering region, away from the shadowy limbo of his shelves. Pulling himself together, he asked, rather shyly, "What is it you wish to know about my galaxy?" Dr. Thaddeus spoke up: "What I want is to know about the Solar System, a collection of heavenly bodies revolving about one of the stars in your galaxy. The star about which they revolve is called 'the Sun.'" "Humph," said the librarian of the Milky Way, "it was hard enough to hit upon the right galaxy, but to hit upon the right star in the galaxy is far more difficult. I know that there are about three hundred billion stars in the galaxy, but I have no knowledge, myself, that would distinguish one of them from another. I believe, however, that at one time a list of the whole three hundred billion was demanded by the Administration and that it is still stored in the basement. If you think it worth while, I will engage special labor from the Other Place to search for this particular star."

It was agreed that, since the question had arisen and since Dr. Thaddeus was evidently suffering some distress, this might be the wisest course.

Several years later, a very weary and dispirited tetrahedron presented himself before the galactic sub-librarian. "I have," he said, "at last discovered the particular star concerning which inquiries have been made, but I am quite at a loss to imagine why it has aroused any special interest. It closely resembles a great many other stars in the same galaxy. It is of average size and temperature, and is surrounded by very much smaller bodies called 'planets.' After minute investigation, I discovered that some, at least, of these planets have parasites, and I think that this thing which has been making inquiries must be one of them."

At this point, Dr. Thaddeus burst out in a passionate and indignant lament: "Why, oh why, did the Creator conceal from us poor inhabitants of Earth that it was not we who prompted Him to create the Heavens? Throughout my long life, I have served Him diligently, believing that He would notice my service and reward me with Eternal Bliss. And now, it seems that He was not even aware that I existed. You tell me that I am an infinitesimal animalcule on a tiny body revolving round an insignificant member of a collection of three hundred billion stars, which is only one of many millions of such collections. I cannot bear it, and can no longer adore my Creator." "Very well," said the janitor, "then you can go to the Other Place."

Here the theologian awoke. "The power of Satan over our sleeping imagination is terrifying," he muttered.

He looks like something from a prehistoric age or a fantastic creation from Hollywood. But Hercules is very much living flesh and blood - as he proves every time he opens his gigantic mouth to roar. Part lion, part tiger, he is not just a big cat but a huge one, standing 10ft tall on his back legs. Called a liger, in reference to his crossbreed parentage, he is the largest of all the cat species.

He is the accidental result of two enormous big cats living close together at the Institute of Greatly Endangered and Rare Species, in Miami, Florida, and already dwarfs both his parents.

"Ligers are not something we planned on having," said institute owner Dr Bhagavan Antle. "We have lions and tigers living together in large enclosures and at first we had no idea how well one of the lion boys was getting along with a tiger girl, then lo and behold we had a liger."

He can run at 50mph. Not only that, but he likes to swim, a feat unheard of among water-fearing lions. In the wild it is virtually impossible for lions and tigers to mate. Not only are they enemies likely to kill one another, but most lions are in Africa and most tigers in Asia. But incredible though he is, Hercules is not unique. Ligers have been bred in captivity, deliberately and accidentally, since shortly before World War II.

Today there are believed to be a handful of ligers around the world.



Hybrid Big Cats #1
Hybrid Big Cats #2
Ligers in Georgia.

Thursday, November 23, 2006


(from Requiem for a Dream)

Tuesday, November 21, 2006

by H. Allen Orr
from The New Yorker

If you are in ninth grade and live in Dover, Pennsylvania, you are learning things in your biology class that differ considerably from what your peers just a few miles away are learning. In particular, you are learning that Darwin’s theory of evolution provides just one possible explanation of life, and that another is provided by something called intelligent design. You are being taught this not because of a recent breakthrough in some scientist’s laboratory but because the Dover Area School District’s board mandates it. In October, 2004, the board decreed that “students will be made aware of gaps/problems in Darwin’s theory and of other theories of evolution including, but not limited to, intelligent design.”

While the events in Dover have received a good deal of attention as a sign of the political times, there has been surprisingly little discussion of the science that’s said to underlie the theory of intelligent design, often called I.D. Many scientists avoid discussing I.D. for strategic reasons. If a scientific claim can be loosely defined as one that scientists take seriously enough to debate, then engaging the intelligent-design movement on scientific grounds, they worry, cedes what it most desires: recognition that its claims are legitimate scientific ones.

Meanwhile, proposals hostile to evolution are being considered in more than twenty states; earlier this month, a bill was introduced into the New York State Assembly calling for instruction in intelligent design for all public-school students. The Kansas State Board of Education is weighing new standards, drafted by supporters of intelligent design, that would encourage schoolteachers to challenge Darwinism. Senator Rick Santorum, a Pennsylvania Republican, has argued that “intelligent design is a legitimate scientific theory that should be taught in science classes.” An I.D.-friendly amendment that he sponsored to the No Child Left Behind Act—requiring public schools to help students understand why evolution “generates so much continuing controversy”—was overwhelmingly approved in the Senate. (The amendment was not included in the version of the bill that was signed into law, but similar language did appear in a conference report that accompanied it.) In the past few years, college students across the country have formed Intelligent Design and Evolution Awareness chapters. Clearly, a policy of limited scientific engagement has failed. So just what is this movement?

First of all, intelligent design is not what people often assume it is. For one thing, I.D. is not Biblical literalism. Unlike earlier generations of creationists—the so-called Young Earthers and scientific creationists—proponents of intelligent design do not believe that the universe was created in six days, that Earth is ten thousand years old, or that the fossil record was deposited during Noah’s flood. (Indeed, they shun the label “creationism” altogether.) Nor does I.D. flatly reject evolution: adherents freely admit that some evolutionary change occurred during the history of life on Earth. Although the movement is loosely allied with, and heavily funded by, various conservative Christian groups—and although I.D. plainly maintains that life was created—it is generally silent about the identity of the creator.

The movement’s main positive claim is that there are things in the world, most notably life, that cannot be accounted for by known natural causes and show features that, in any other context, we would attribute to intelligence. Living organisms are too complex to be explained by any natural—or, more precisely, by any mindless—process. Instead, the design inherent in organisms can be accounted for only by invoking a designer, and one who is very, very smart.

All of which puts I.D. squarely at odds with Darwin. Darwin’s theory of evolution was meant to show how the fantastically complex features of organisms—eyes, beaks, brains—could arise without the intervention of a designing mind. According to Darwinism, evolution largely reflects the combined action of random mutation and natural selection. A random mutation in an organism, like a random change in any finely tuned machine, is almost always bad. That’s why you don’t, screwdriver in hand, make arbitrary changes to the insides of your television. But, once in a great while, a random mutation in the DNA that makes up an organism’s genes slightly improves the function of some organ and thus the survival of the organism. In a species whose eye amounts to nothing more than a primitive patch of light-sensitive cells, a mutation that causes this patch to fold into a cup shape might have a survival advantage. While the old type of organism can tell only if the lights are on, the new type can detect the direction of any source of light or shadow. Since shadows sometimes mean predators, that can be valuable information. The new, improved type of organism will, therefore, be more common in the next generation. That’s natural selection. Repeated over billions of years, this process of incremental improvement should allow for the gradual emergence of organisms that are exquisitely adapted to their environments and that look for all the world as though they were designed. By 1870, about a decade after “The Origin of Species” was published, nearly all biologists agreed that life had evolved, and by 1940 or so most agreed that natural selection was a key force driving this evolution.

Advocates of intelligent design point to two developments that in their view undermine Darwinism. The first is the molecular revolution in biology. Beginning in the nineteen-fifties, molecular biologists revealed a staggering and unsuspected degree of complexity within the cells that make up all life. This complexity, I.D.’s defenders argue, lies beyond the abilities of Darwinism to explain. Second, they claim that new mathematical findings cast doubt on the power of natural selection. Selection may play a role in evolution, but it cannot accomplish what biologists suppose it can.

These claims have been championed by a tireless group of writers, most of them associated with the Center for Science and Culture at the Discovery Institute, a Seattle-based think tank that sponsors projects in science, religion, and national defense, among other areas. The center's fellows and advisers—including the emeritus law professor Phillip E. Johnson, the philosopher Stephen C. Meyer, and the biologist Jonathan Wells—have published an astonishing number of articles and books that decry the ostensibly sad state of Darwinism and extol the virtues of the design alternative. But Johnson, Meyer, and Wells, while highly visible, are mainly strategists and popularizers. The scientific leaders of the design movement are two scholars, one a biochemist and the other a mathematician. To assess intelligent design is to assess their arguments.

Michael J. Behe, a professor of biological sciences at Lehigh University (and a senior fellow at the Discovery Institute), is a biochemist who writes technical papers on the structure of DNA. He is the most prominent of the small circle of scientists working on intelligent design, and his arguments are by far the best known. His book “Darwin’s Black Box” (1996) was a surprise best-seller and was named by National Review as one of the hundred best nonfiction books of the twentieth century. (A little calibration may be useful here; “The Starr Report” also made the list.)

Not surprisingly, Behe’s doubts about Darwinism begin with biochemistry. Fifty years ago, he says, any biologist could tell stories like the one about the eye’s evolution. But such stories, Behe notes, invariably began with cells, whose own evolutionary origins were essentially left unexplained. This was harmless enough as long as cells weren’t qualitatively more complex than the larger, more visible aspects of the eye. Yet when biochemists began to dissect the inner workings of the cell, what they found floored them. A cell is packed full of exceedingly complex structures—hundreds of microscopic machines, each performing a specific job. The “Give me a cell and I’ll give you an eye” story told by Darwinists, he says, began to seem suspect: starting with a cell was starting ninety per cent of the way to the finish line.

Behe’s main claim is that cells are complex not just in degree but in kind. Cells contain structures that are “irreducibly complex.” This means that if you remove any single part from such a structure, the structure no longer functions. Behe offers a simple, nonbiological example of an irreducibly complex object: the mousetrap. A mousetrap has several parts—platform, spring, catch, hammer, and hold-down bar—and all of them have to be in place for the trap to work. If you remove the spring from a mousetrap, it isn’t slightly worse at killing mice; it doesn’t kill them at all. So, too, with the bacterial flagellum, Behe argues. This flagellum is a tiny propeller attached to the back of some bacteria. Spinning at more than twenty thousand r.p.m.s, it motors the bacterium through its aquatic world. The flagellum comprises roughly thirty different proteins, all precisely arranged, and if any one of them is removed the flagellum stops spinning.

In “Darwin’s Black Box,” Behe maintained that irreducible complexity presents Darwinism with “unbridgeable chasms.” How, after all, could a gradual process of incremental improvement build something like a flagellum, which needs all its parts in order to work? Scientists, he argued, must face up to the fact that “many biochemical systems cannot be built by natural selection working on mutations.” In the end, Behe concluded that irreducibly complex cells arise the same way as irreducibly complex mousetraps—someone designs them. As he put it in a recent Times Op-Ed piece: “If it looks, walks, and quacks like a duck, then, absent compelling evidence to the contrary, we have warrant to conclude it’s a duck. Design should not be overlooked simply because it’s so obvious.” In “Darwin’s Black Box,” Behe speculated that the designer might have assembled the first cell, essentially solving the problem of irreducible complexity, after which evolution might well have proceeded by more or less conventional means. Under Behe’s brand of creationism, you might still be an ape that evolved on the African savanna; it’s just that your cells harbor micro-machines engineered by an unnamed intelligence some four billion years ago.

But Behe’s principal argument soon ran into trouble. As biologists pointed out, there are several different ways that Darwinian evolution can build irreducibly complex systems. In one, elaborate structures may evolve for one reason and then get co-opted for some entirely different, irreducibly complex function. Who says those thirty flagellar proteins weren’t present in bacteria long before bacteria sported flagella? They may have been performing other jobs in the cell and only later got drafted into flagellum-building. Indeed, there’s now strong evidence that several flagellar proteins once played roles in a type of molecular pump found in the membranes of bacterial cells.

Behe doesn’t consider this sort of “indirect” path to irreducible complexity—in which parts perform one function and then switch to another—terribly plausible. And he essentially rules out the alternative possibility of a direct Darwinian path: a path, that is, in which Darwinism builds an irreducibly complex structure while selecting all along for the same biological function. But biologists have shown that direct paths to irreducible complexity are possible, too. Suppose a part gets added to a system merely because the part improves the system’s performance; the part is not, at this stage, essential for function. But, because subsequent evolution builds on this addition, a part that was at first just advantageous might become essential. As this process is repeated through evolutionary time, more and more parts that were once merely beneficial become necessary. This idea was first set forth by H. J. Muller, the Nobel Prize-winning geneticist, in 1939, but it’s a familiar process in the development of human technologies. We add new parts like global-positioning systems to cars not because they’re necessary but because they’re nice. But no one would be surprised if, in fifty years, computers that rely on G.P.S. actually drove our cars. At that point, G.P.S. would no longer be an attractive option; it would be an essential piece of automotive technology. It’s important to see that this process is thoroughly Darwinian: each change might well be small and each represents an improvement.

Design theorists have made some concessions to these criticisms. Behe has confessed to “sloppy prose” and said he hadn’t meant to imply that irreducibly complex systems “by definition” cannot evolve gradually. “I quite agree that my argument against Darwinism does not add up to a logical proof,” he says—though he continues to believe that Darwinian paths to irreducible complexity are exceedingly unlikely. Behe and his followers now emphasize that, while irreducibly complex systems can in principle evolve, biologists can’t reconstruct in convincing detail just how any such system did evolve.

What counts as a sufficiently detailed historical narrative, though, is altogether subjective. Biologists actually know a great deal about the evolution of biochemical systems, irreducibly complex or not. It’s significant, for instance, that the proteins that typically make up the parts of these systems are often similar to one another. (Blood clotting—another of Behe’s examples of irreducible complexity—involves at least twenty proteins, several of which are similar, and all of which are needed to make clots, to localize or remove clots, or to prevent the runaway clotting of all blood.) And biologists understand why these proteins are so similar. Each gene in an organism’s genome encodes a particular protein. Occasionally, the stretch of DNA that makes up a particular gene will get accidentally copied, yielding a genome that includes two versions of the gene. Over many generations, one version of the gene will often keep its original function while the other one slowly changes by mutation and natural selection, picking up a new, though usually related, function. This process of “gene duplication” has given rise to entire families of proteins that have similar functions; they often act in the same biochemical pathway or sit in the same cellular structure. There’s no doubt that gene duplication plays an extremely important role in the evolution of biological complexity.

It’s true that when you confront biologists with a particular complex structure like the flagellum they sometimes have a hard time saying which part appeared before which other parts. But then it can be hard, with any complex historical process, to reconstruct the exact order in which events occurred, especially when, as in evolution, the addition of new parts encourages the modification of old ones. When you’re looking at a bustling urban street, for example, you probably can’t tell which shop went into business first. This is partly because many businesses now depend on each other and partly because new shops trigger changes in old ones (the new sushi place draws twenty-somethings who demand wireless Internet at the café next door). But it would be a little rash to conclude that all the shops must have begun business on the same day or that some Unseen Urban Planner had carefully determined just which business went where.

The other leading theorist of the new creationism, William A. Dembski, holds a Ph.D. in mathematics, another in philosophy, and a master of divinity in theology. He has been a research professor in the conceptual foundations of science at Baylor University, and was recently appointed to the new Center for Science and Theology at Southern Baptist Theological Seminary. (He is a longtime senior fellow at the Discovery Institute as well.) Dembski publishes at a staggering pace. His books—including “The Design Inference,” “Intelligent Design,” “No Free Lunch,” and “The Design Revolution”—are generally well written and packed with provocative ideas.

According to Dembski, a complex object must be the result of intelligence if it was the product neither of chance nor of necessity. The novel “Moby Dick,” for example, didn’t arise by chance (Melville didn’t scribble random letters), and it wasn’t the necessary consequence of a physical law (unlike, say, the fall of an apple). It was, instead, the result of Melville’s intelligence. Dembski argues that there is a reliable way to recognize such products of intelligence in the natural world. We can conclude that an object was intelligently designed, he says, if it shows “specified complexity”—complexity that matches an “independently given pattern.” The sequence of letters “jkxvcjudoplvm” is certainly complex: if you randomly type thirteen letters, you are very unlikely to arrive at this particular sequence. But it isn’t specified: it doesn’t match any independently given sequence of letters. If, on the other hand, I ask you for the first sentence of “Moby Dick” and you type the letters “callmeishmael,” you have produced something that is both complex and specified. The sequence you typed is unlikely to arise by chance alone, and it matches an independent target sequence (the one written by Melville). Dembski argues that specified complexity, when expressed mathematically, provides an unmistakable signature of intelligence. Things like “callmeishmael,” he points out, just don’t arise in the real world without acts of intelligence. If organisms show specified complexity, therefore, we can conclude that they are the handiwork of an intelligent agent.
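Dembski's notion of "complexity" here is just improbability under random generation. A minimal sketch (my illustration, not Dembski's own formalism) of the arithmetic behind the thirteen-letter example:

```python
# The "complexity" of a thirteen-letter string, measured as the chance of
# producing it by typing uniformly random lowercase letters.
from fractions import Fraction

ALPHABET = 26  # lowercase letters only, as in "callmeishmael"

def chance_of_exact_match(target: str) -> Fraction:
    """Probability that a run of uniformly random letters spells `target`."""
    return Fraction(1, ALPHABET ** len(target))

p = chance_of_exact_match("callmeishmael")
print(p)         # 1 over 26**13
print(float(p))  # roughly 4e-19: "complex" in Dembski's sense
```

Note that this captures only the complexity half of the criterion; the "specified" half depends on the string matching an independently given target, which no probability calculation by itself can supply.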

For Dembski, it’s telling that the sophisticated machines we find in organisms match up in astonishingly precise ways with recognizable human technologies. The eye, for example, has a familiar, cameralike design, with recognizable parts—a pinhole opening for light, a lens, and a surface on which to project an image—all arranged just as a human engineer would arrange them. And the flagellum has a motor design, one that features recognizable O-rings, a rotor, and a drive shaft. Specified complexity, he says, is there for all to see.

Dembski’s second major claim is that certain mathematical results cast doubt on Darwinism at the most basic conceptual level. In 2002, he focussed on so-called No Free Lunch, or N.F.L., theorems, which were derived in the late nineties by the physicists David H. Wolpert and William G. Macready. These theorems relate to the efficiency of different “search algorithms.” Consider a search for high ground on some unfamiliar, hilly terrain. You’re on foot and it’s a moonless night; you’ve got two hours to reach the highest place you can. How to proceed? One sensible search algorithm might say, “Walk uphill in the steepest possible direction; if no direction uphill is available, take a couple of steps to the left and try again.” This algorithm insures that you’re generally moving upward. Another search algorithm—a so-called blind search algorithm—might say, “Walk in a random direction.” This would sometimes take you uphill but sometimes down. Roughly, the N.F.L. theorems prove the surprising fact that, averaged over all possible terrains, no search algorithm is better than any other. In some landscapes, moving uphill gets you to higher ground in the allotted time, while in other landscapes moving randomly does, but on average neither outperforms the other.
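The two strategies in the night-hike analogy can be sketched in a few lines of code. This is my own illustration on a one-dimensional "terrain" (a list of heights), not anything from Wolpert and Macready's paper; the function names and the sample terrain are invented for the example.

```python
# Two search algorithms on a one-dimensional terrain (a list of heights).
import random

def hill_climb(terrain, start, steps):
    """Always move to the higher neighbor; stay put on a local peak."""
    pos = start
    for _ in range(steps):
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(terrain)]
        best = max(neighbors, key=lambda p: terrain[p])
        if terrain[best] > terrain[pos]:
            pos = best
    return terrain[pos]

def blind_search(terrain, start, steps, rng):
    """Step in a random direction each time, uphill or down."""
    pos = start
    for _ in range(steps):
        pos = min(len(terrain) - 1, max(0, pos + rng.choice((-1, 1))))
    return terrain[pos]

# On a smooth hill, climbing wins; on a terrain whose highest peak hides
# behind a valley, the random walker can stumble onto higher ground while
# the climber stalls on a local summit.
smooth = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1]
print(hill_climb(smooth, start=0, steps=5))  # reaches the summit: 5
```

The N.F.L. result says that if you average each strategy's performance over every possible assignment of heights to positions, neither comes out ahead.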

Now, Darwinism can be thought of as a search algorithm. Given a problem—adapting to a new disease, for instance—a population uses the Darwinian algorithm of random mutation plus natural selection to search for a solution (in this case, disease resistance). But, according to Dembski, the N.F.L. theorems prove that this Darwinian algorithm is no better than any other when confronting all possible problems. It follows that, over all, Darwinism is no better than blind search, a process of utterly random change unaided by any guiding force like natural selection. Since we don’t expect blind change to build elaborate machines showing an exquisite coördination of parts, we have no right to expect Darwinism to do so, either. Attempts to sidestep this problem by, say, carefully constraining the class of challenges faced by organisms inevitably involve sneaking in the very kind of order that we’re trying to explain—something Dembski calls the displacement problem. In the end, he argues, the N.F.L. theorems and the displacement problem mean that there’s only one plausible source for the design we find in organisms: intelligence. Although Dembski is somewhat noncommittal, he seems to favor a design theory in which an intelligent agent programmed design into early life, or even into the early universe. This design then unfolded through the long course of evolutionary time, as microbes slowly morphed into man.
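The "Darwinian algorithm" of random mutation plus selection can itself be sketched as code. The toy problem below (matching a target bit-string, with an invented fitness function and parameter values) is my illustrative assumption, chosen only to make the loop concrete; real selection has no target, which is precisely the objection raised later against "specified complexity."

```python
# Random mutation plus selection, searching for a target bit-string.
import random

def evolve(target, pop_size=20, rate=0.05, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    # Start from a population of random genomes.
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        if fitness(best) == n:
            break
        # Selection: the fittest genome seeds the next generation...
        # Mutation: ...with each bit flipped at a small random rate.
        pop = [best] + [[b ^ (rng.random() < rate) for b in best]
                        for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

target = [1, 0, 1, 1, 0, 0, 1, 0]
print(evolve(target) == target)
```

Dembski's N.F.L. argument amounts to the claim that, averaged over all possible fitness functions, this loop does no better than replacing the selection step with pure random drift.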

Dembski’s arguments have been met with tremendous enthusiasm in the I.D. movement. In part, that’s because an innumerate public is easily impressed by a bit of mathematics. Also, when Dembski is wielding his equations, he gets to play the part of the hard scientist busily correcting the errors of those soft-headed biologists. (Evolutionary biology actually features an extraordinarily sophisticated body of mathematical theory, a fact not widely known because neither of evolution’s great popularizers—Richard Dawkins and the late Stephen Jay Gould—did much math.) Despite all the attention, Dembski’s mathematical claims about design and Darwin are almost entirely beside the point.

The most serious problem in Dembski’s account involves specified complexity. Organisms aren’t trying to match any “independently given pattern”: evolution has no goal, and the history of life isn’t trying to get anywhere. If building a sophisticated structure like an eye increases the number of children produced, evolution may well build an eye. But if destroying a sophisticated structure like the eye increases the number of children produced, evolution will just as happily destroy the eye. Species of fish and crustaceans that have moved into the total darkness of caves, where eyes are both unnecessary and costly, often have degenerate eyes, or eyes that begin to form only to be covered by skin—crazy contraptions that no intelligent agent would design. Despite all the loose talk about design and machines, organisms aren’t striving to realize some engineer’s blueprint; they’re striving (if they can be said to strive at all) only to have more offspring than the next fellow.

Another problem with Dembski’s arguments concerns the N.F.L. theorems. Recent work shows that these theorems don’t hold in the case of co-evolution, when two or more species evolve in response to one another. And most evolution is surely co-evolution. Organisms do not spend most of their time adapting to rocks; they are perpetually challenged by, and adapting to, a rapidly changing suite of viruses, parasites, predators, and prey. A theorem that doesn’t apply to these situations is a theorem whose relevance to biology is unclear. As it happens, David Wolpert, one of the authors of the N.F.L. theorems, recently denounced Dembski’s use of those theorems as “fatally informal and imprecise.” Dembski’s apparent response has been a tactical retreat. In 2002, Dembski triumphantly proclaimed, “The No Free Lunch theorems dash any hope of generating specified complexity via evolutionary algorithms.” Now he says, “I certainly never argued that the N.F.L. theorems provide a direct refutation of Darwinism.”

Those of us who have argued with I.D. in the past are used to such shifts of emphasis. But it’s striking that Dembski’s views on the history of life contradict Behe’s. Dembski believes that Darwinism is incapable of building anything interesting; Behe seems to believe that, given a cell, Darwinism might well have built you and me. Although proponents of I.D. routinely inflate the significance of minor squabbles among evolutionary biologists (did the peppered moth evolve dark color as a defense against birds or for other reasons?), they seldom acknowledge their own, often major differences of opinion. In the end, it’s hard to view intelligent design as a coherent movement in any but a political sense.

It’s also hard to view it as a real research program. Though people often picture science as a collection of clever theories, scientists are generally staunch pragmatists: to scientists, a good theory is one that inspires new experiments and provides unexpected insights into familiar phenomena. By this standard, Darwinism is one of the best theories in the history of science: it has produced countless important experiments (let’s re-create a natural species in the lab—yes, that’s been done) and sudden insight into once puzzling patterns (that’s why there are no native land mammals on oceanic islands). In the nearly ten years since the publication of Behe’s book, by contrast, I.D. has inspired no nontrivial experiments and has provided no surprising insights into biology. As the years pass, intelligent design looks less and less like the science it claimed to be and more and more like an extended exercise in polemics.

In 1999, a document from the Discovery Institute was posted, anonymously, on the Internet. This Wedge Document, as it came to be called, described not only the institute’s long-term goals but its strategies for accomplishing them. The document begins by labelling the idea that human beings are created in the image of God “one of the bedrock principles on which Western civilization was built.” It goes on to decry the catastrophic legacy of Darwin, Marx, and Freud—the alleged fathers of a “materialistic conception of reality” that eventually “infected virtually every area of our culture.” The mission of the Discovery Institute’s scientific wing is then spelled out: “nothing less than the overthrow of materialism and its cultural legacies.” It seems fair to conclude that the Discovery Institute has set its sights a bit higher than, say, reconstructing the origins of the bacterial flagellum.

The intelligent-design community is usually far more circumspect in its pronouncements. This is not to say that it eschews discussion of religion; indeed, the intelligent-design literature regularly insists that Darwinism represents a thinly veiled attempt to foist a secular religion—godless materialism—on Western culture. As it happens, the idea that Darwinism is yoked to atheism, though popular, is also wrong. Of the five founding fathers of twentieth-century evolutionary biology—Ronald Fisher, Sewall Wright, J. B. S. Haldane, Ernst Mayr, and Theodosius Dobzhansky—one was a devout Anglican who preached sermons and published articles in church magazines, one a practicing Unitarian, one a dabbler in Eastern mysticism, one an apparent atheist, and one a member of the Russian Orthodox Church and the author of a book on religion and science. Pope John Paul II himself acknowledged, in a 1996 address to the Pontifical Academy of Sciences, that new research “leads to the recognition of the theory of evolution as more than a hypothesis.” Whatever larger conclusions one thinks should follow from Darwinism, the historical fact is that evolution and religion have often coexisted. As the philosopher Michael Ruse observes, “It is simply not the case that people take up evolution in the morning, and become atheists as an encore in the afternoon.”

Biologists aren’t alarmed by intelligent design’s arrival in Dover and elsewhere because they have all sworn allegiance to atheistic materialism; they’re alarmed because intelligent design is junk science. Meanwhile, more than eighty per cent of Americans say that God either created human beings in their present form or guided their development. As a succession of intelligent-design proponents appeared before the Kansas State Board of Education earlier this month, it was possible to wonder whether the movement’s scientific coherence was beside the point. Intelligent design has come this far by faith.