Tuesday, January 30, 2007

by John Lye

One of the positions taken by post-structuralist theorists is that the author is dead. As is the case with most theoretical positions, the first task of the reader should be to understand as fully as possible what the issues really are. It's easy to short-circuit a theoretical position if you don't unfold it, don't see what the terms and the implications are. We can easily say “of course there was an author and she knew what she was doing—look at the multiple drafts, the letters to her friends, and so forth.” But that response to the possibilities of interrogating, or problematizing, the existence of the author is probably not helpful:

1. One may take it as read that the person doing the theorizing has probably already thought through the dismissal of her or his proposition, and has something further in mind. It isn't a difficult thought to have, and it is unlikely that the position ultimately holds that there was not in fact such a person as William Faulkner, say, who sat down and wrote As I Lay Dying.

2. Such quick dismissal overlooks the fact that critics tend to operate as if the author did not know what she was doing, even if she says that she does, and for very good reasons which need to be explored -- for instance because she could not be aware of her social or cultural ideological environment, or fully aware of how profoundly she was influenced by her own personal or cultural experiences, or of how her subconscious was seeing and constructing relationships, or of what implications the genre she was writing in had for the eventual meaning of what she had to say.

3. Further, this dismissal begs the question of interpretation and meaning pretty entirely. How do we know the author meant to mean what we think we know she meant? How can we guarantee (and should we?) that we are reading the text 'properly' as the author would have had us read it? The issues of interpretation and meaning are pretty big issues to beg. If it were clear what Shakespeare 'meant' by Hamlet, we wouldn't have hundreds of articles and books disagreeing with each other about it. As we do have multiple, and differing, interpretations, it should be clear that we do not, in fact, know what he 'meant', nor have we agreed on how we could find out.

4. We get caught in a fruitless circle: we construct an author out of our reading of her (usually we don't know her personally, and it's pretty tough to really know anyone in any case), and then we say we know she knew what she was doing, because she did exactly what we imagine our reconstruction of her 'predicted'. The author is 'in' the text only insofar as we try to read her 'out' of it. This is not to say that a knowledge of an author's life cannot illumine a text, but at the very same time that illumination forecloses the text, cuts off possible meanings which lie inherent in (or, implicit in) the structure of language, images, ideas in the text, and critics have been quite free to decide when an author's life 'matters' and when it doesn't. One of the strikes against autobiographies and biographies as guides to an author's thought and meanings is that they themselves are writing, conforming to certain conventions, constructing a plot-line from the intricate and intermingled complexities of an inner and outer life.

5. In simple historical terms, we have empirical evidence that authors read their own works differently at different times in their lives, and that there are authorial readings which strike all readers as just plain dumb, or as missing the boat in some way -- as Lawrence said, trust the tale, not the teller.

6. In fact we don't fully know what it means to be 'an author' -- that is, what creativity really is or where it comes from, and whether it is many things or one thing. We do know that the creative process seems often to 'take over' the original intentions and meanings of the author, and in past days this phenomenon has been put down to inspiration by divine forces and so forth -- the author is 'possessed' by a muse, for instance.

7. And lastly, even if we knew something of what creativity was, we would still need to know what the relation is between an 'individual's' meaning and the social meanings which have constructed her life: how much of someone's meaning is their culture's meaning? Where do you mark a difference?

The idea of the author's disappearance has a long history in the twentieth century -- it isn't a newfangled concept. Among the people who advocated the disappearance of the author from the text was James Joyce, but modernism in general has stressed that the text stands apart from and is different from the author, and modernism has endorsed the idea that literature is an intertextual phenomenon, that texts mean in relation to other texts, not in relation to the lives of their authors. One of the chief theorizations of modernism, New Criticism, calls the attempt to find the author in the work, or the work through the author, the 'Intentional Fallacy'. It is not a long step from the modernist position of the retreat or disappearance of the author to the idea that the author, as a concept through which to read and understand literature, has lost its salience and validity and is more likely to mislead than to illumine.

A quite different tradition, that of phenomenological hermeneutics, suggested that the author is radically disengaged from the interpretive process, that "the book divides the act of writing and the act of reading into two sides, between which there is no communication" (Paul Ricoeur, "What Is a Text?"). This tradition is a main support of one of the most influential of the Reader-Response theories. As Ricoeur, commenting on the fact that writing separates the writer from the reader, remarked, "Sometimes I like to say that to read a book is to consider its author as already dead and the book as posthumous."

Contemporary theorists have a number of reasons, further to those above, for thinking that the concept of 'the author' is not a profitable one. Here are some of those reasons, based loosely, in part, on Roland Barthes' essay "The Death of the Author" in his collection of essays Image, Music, Text, and on other sources.

1. There are a number of theories of language and grammar which militate against a text (note the shift from 'a work') being written by an intentional individual in possession of his meanings.

* Following the work of Émile Benveniste, the idea that the "I" of a sentence such as "I went to the store" is a different entity from the subject who speaks the sentence. The "I" within the sentence is an 'empty' marker -- anyone could say the sentence, and it would 'mean' them. The subject who enunciates isn't the same as the subject of enunciation (the "I" in the sentence), and doesn't, as it were, get into the sentence except as an accident of propinquity.

* Following the work of Ferdinand de Saussure, the idea that meaning does not belong to words themselves but i) to the field of meaning in which a word occurs and its differences from the other words in that meaning-field, so that meaning is by difference, not by identity: this is the 'paradigmatic' placement of language; and ii) to the placement of the word in the grammar of a sentence: the 'syntagmatic' placement of language. iii) A third placement was added, for instance by Umberto Eco: the placement of context, in that words change according to their context: this is the 'pragmatic' placement. For the purposes of the death of the author, the most important functionality of language is the first, the idea that meaning is created through difference, not through identity. The effect is to place any use of language within a broad frame of language-use, in which language is an independent system.

* This has led to the idea, articulated by the philosopher Heidegger and others, that humans do not speak language, language speaks us. As we acquire language we enter a flow of meaning which has at least two broad configurations: i) language as an independent system of differentiations; ii) language as a storehouse of cultural meaning, so that Foucault can speak of stepping into the flow of meaning, Lacan of our entering, through language, into the Law of the Father, the rule of the governing conceptions of our culture. The 'intended' meanings of an 'author' are subsumed under language's real ways of meaning, and under the centrality of pre-existing fields of meaning to our very being as (inevitably, culturally-formed) 'individuals'. It can be argued that there is no such thing as 'personal' meaning (there can be personal experiences, but when we assign meaning to those experiences, that meaning is only shared, only cultural); it can be argued that any subject who enunciates is only a creation of language itself; it can be argued that meaning belongs to the play of language itself and is far beyond our control. All these things militate against the privileging of an 'author' in reference to a text.

2. Following on the ideas of language and meaning, contemporary theorists suggest that any piece of writing is in fact a complex web of cultural meanings, a texture of them, a text. A text only means because there are strands of meaning leading to all sorts of areas of experience and language use: particularly to the conventions of writing (e.g. how things are expressed in writing, what is expressed, how different topics are written about), to previous writing, to the archive of cultural meanings and instances of their use, to the way we speak about various aspects of our lives and experience. Any text is necessarily intertextual; it does not have boundaries but filiations, connections, instead. An 'author' exists as a cultural process, what Barthes calls a scriptor, Foucault an author-function. Foucault writes that "[a text] indicates itself, constructs itself, only on the basis of a complex field of discourse," and, as Barthes writes,

"In the multiplicity of writing, everything is disentangled, nothing deciphered; the structure can be followed, 'run' (like the thread of a stocking*) at every point and at every level, but there is nothing beneath: the space of writing is to be ranged over, not pierced...." -- see also Barthes' "From Work to Text", in the same volume.

*[Women's transparent silk or synthetic stockings used to 'run', a line of thread unthreaded: why women's dis/appareled legs should appear in this text is an interesting question, and an illustration of the unruliness of language -- "ranged over," indeed!]

3. Contemporary theories of narrative suggest that a narrative is an 'intransitive' function, that is, it does not set out to do anything; as such it, like the "I" as subject of enunciation, is separated from the particular circumstances of its articulation, exists as meaning-potential, the potential to be actualized within the meaning-realm of the reader. Insofar as any meaning is to be made, it is made by the reader, not by the 'author'.

4. The very concept of the stable ego has itself been challenged -- it has been suggested that our 'selves' as entities, as discrete and identifiable beings, are a cultural concept, that 'we' are in fact processes of symbolization, not stable beings. We 'occupy' different realms of meaning (this is known as the de-centered ego); we are produced by language, or by symbolization in various forms (the focus of much post-structural thought on language as the sole means and process of symbolization may be a limitation), and as such exist as unstable vortices of meaningfulness; much of what we may mean or be is generated by forces which have been repressed and are experienced in displaced ways, so that Lacan has suggested that who we really are is in our unconscious, not in our highly modulated and culturally controlled conscious selves.

5. The very idea of, and the centrality to our culture of, 'the individual', has been seen as an ideological conception, a product of the capitalist revolution in the late seventeenth century. In support of this idea one might note that the "rights of the individual" were not theorized until the eighteenth century, and wonder whether humankind just hadn't been bright enough to think of them until then, or if the centrality of the concept 'the individual' is an historical phenomenon. There are various claims that the idea of "the author" was not a significant concept before this time. In partial support of this it might be noted that the use of the word "original" in a positive sense to refer to 'authored' texts, paintings, etc., did not occur until the late eighteenth century, not long after, for instance, the emergence of the idea that individual actors might give their own interpretations of roles.

It is still possible for many people to write off all these questions, to dismiss them. It isn't very helpful to do so. 'Common sense' may say that of course there is a unitary author who arrived at his own meanings, put them into highly communicable form in a poem, and sent them to us. But anyone who has thought for a minute about what common sense has told people throughout human civilization should be very wary of it, because it is probably your learned, unexamined patterns of thought talking to you. And in any case, in a university we are responsible for more than common sense. We are responsible for the charted and uncharted implications of human thought and action, and so we must proceed with openness and care. No one who has thought very carefully about the nature of authorship, the construction of meaning, creativity, the nature of art, the relation between meaning and interpretation, or the problems of hermeneutics -- that is, of interpreting across time, class, and region -- should be satisfied with that 'common sense' response. Ultimately it does not explain enough of what happens and, most importantly, does not open up (indeed it tends to close off) the problems that the concept of authorship raises.

I hope by now it is clear that I do not mean, and the theorists do not mean, that there was not a person who thought, felt, planned and wrote. What is in question is what is meant by authorship, and the assumption that the meaning of a work is the product of a single self-determining author, in control of his meanings, who fulfills his intentions and only his intentions. In the jargon of contemporary theory the concept of the author is a 'contested' concept, and our first task is to 'problematize' it, to see what the issues are which have led it to be contested. To contest something is to question its common usage and to scrutinize it for its full but often obscured meanings and implications.

by Roland Barthes

In his story "Sarrasine" Balzac, describing a castrato disguised as a woman, writes the following sentence: "This was woman herself, with her sudden fears, her irrational whims, her instinctive worries, her impetuous boldness, her fussings, and her delicious sensibility." Who is speaking thus? Is it the hero of the story bent on remaining ignorant of the castrato hidden beneath the woman? Is it Balzac the individual, furnished by his personal experience with a philosophy of Woman? Is it Balzac the author professing "literary" ideas on femininity? Is it universal wisdom? Romantic psychology? We shall never know, for the good reason that writing is the destruction of every voice, of every point of origin. Writing is the neutral, composite, oblique space where our subject slips away, the negative where all identity is lost, starting with the very identity of the body writing.

No doubt it has always been that way. As soon as a fact is narrated no longer with a view to acting directly on reality but intransitively, that is to say, finally outside of any function other than that of the very practice of the symbol itself, this disconnection occurs, the voice loses its origin, the author enters into his own death, writing begins. The sense of this phenomenon, however, is varied; in ethnographic societies the responsibility for a narrative is never assumed by a person but by a mediator, shaman, or relator whose "performance" - the mastery of the narrative code - may possibly be admired but never his "genius." The author is a modern figure, a product of our society insofar as, emerging from the Middle Ages with English empiricism, French rationalism and the personal faith of the Reformation, it discovered the prestige of the individual, of, as it is more nobly put, the "human person." It is thus logical that in literature it should be this positivism, the epitome and culmination of capitalist ideology, which has attached the greatest importance to the "person" of the author. The author still reigns in histories of literature, biographies of writers, interviews, magazines, as in the very consciousness of men of letters anxious to unite their person and their work through diaries and memoirs. The image of literature to be found in ordinary culture is tyrannically centered on the author, his person, his life, his tastes, his passions, while criticism still consists for the most part in saying that Baudelaire's work is the failure of Baudelaire the man, van Gogh's his madness, Tchaikovsky's his vice. The explanation of a work is always sought in the man or woman who produced it, as if it were always in the end, through the more or less transparent allegory of the fiction, the voice of a single person, the author "confiding" in us.

Though the sway of the Author remains powerful (the New Criticism has often done no more than consolidate it), it goes without saying that certain writers have long since attempted to loosen it. [Barthes here discusses French-language authors, like Mallarmé and Proust, who have insisted that language speaks, not the author. Surrealism also desacralized the image of the Author.] Linguistically, the author is never more than the instance writing, just as I is nothing other than the instance saying I: language knows a "subject," not a "person," and this subject, empty outside of the very enunciation which defines it, suffices to make language "hold together," suffices, that is to say, to exhaust it.

The removal of the Author [. . .] is not merely an historical fact or an act of writing; it utterly transforms the modern text (or - which is the same thing - the text is henceforth made and read in such a way that at all its levels the author is absent). The temporality is different. The Author, when believed in, is always conceived of as the past of his own book: book and author stand automatically on a single line divided into a before and an after. The Author is thought to nourish the book, which is to say that he exists before it, thinks, suffers, lives for it, is in the same relation of antecedence to his work as a father to his child. In complete contrast, the modern scriptor is born simultaneously with the text, is in no way equipped with a being preceding or exceeding the writing, is not the subject with the book as predicate; there is no other time than that of the enunciation and every text is eternally written here and now. The fact is (or, it follows) that writing can no longer designate an operation of recording, notation, representation, "depiction" (as the classics would say); rather, it designates exactly what linguists, referring to Oxford philosophy, call a performative, a rare verbal form (exclusively given in the first person and in the present tense) in which the enunciation has no other content (contains no other proposition) than the act by which it is uttered - something like the I declare of kings or the I sing of very ancient poets. [. . .]

We know now that a text is not a line of words releasing a single "theological" meaning (the "message" of the Author-God) but a multidimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centers of culture. [. . .T]he writer can only imitate a gesture that is always anterior, never original. His only power is to mix writings, to counter the ones with the others, in such a way as never to rest on any one of them. [. . .] Succeeding the Author, the scriptor no longer bears within him passions, humors, feelings, impressions, but rather this immense dictionary from which he draws a writing that can know no halt: life never does more than imitate the book, and the book itself is only a tissue of signs, an imitation that is lost, infinitely deferred.

Once the Author is removed, the claim to decipher a text becomes quite futile. To give a text an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing. Such a conception suits criticism very well, the latter then allotting itself the important task of discovering the Author (or its hypostases: society, history, psyche, liberty) beneath the work: when the Author has been found, the text is "explained" - victory to the critic. Hence there is no surprise in the fact that, historically, the reign of the Author has also been that of the Critic, nor again in the fact that criticism (be it new) is today undermined along with the Author. In the multiplicity of writing, everything is to be disentangled, nothing deciphered; the structure can be followed, "run" (like the thread of a stocking) at every point and at every level, but there is nothing beneath: the space of writing is to be ranged over, not pierced; writing ceaselessly posits meaning ceaselessly to evaporate it, carrying out a systematic exemption of meaning. In precisely this way literature (it would be better from now on to say writing), by refusing to assign a "secret," an ultimate meaning, to the text (and to the world as text), liberates what may be called an antitheological activity, an activity that is truly revolutionary since to refuse to fix meaning is, in the end, to refuse God and his hypostases - reason, science, law.

[Barthes goes back to Balzac, then cryptically refers to Greek tragedy.] Thus is revealed the total existence of writing: a text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation, but there is one place where this multiplicity is focused and that place is the reader, not, as was hitherto said, the author. The reader is the space on which all the quotations that make up a writing are inscribed without any of them being lost; a text's unity lies not in its origin but in its destination. Yet this destination cannot any longer be personal: the reader is without history, biography, psychology; he is simply that someone who holds together in a single field all the traces by which the written text is constituted. Which is why it is derisory to condemn the new writing in the name of a humanism hypocritically turned champion of the reader's rights. Classic criticism has never paid any attention to the reader; for it, the writer is the only person in literature. We are now beginning to let ourselves be fooled no longer by the arrogant antiphrastical recriminations of good society in favor of the very thing it sets aside, ignores, smothers, or destroys; we know that to give writing its future, it is necessary to overthrow the myth: the birth of the reader must be at the cost of the death of the Author.

(The Death of the Author by Roland Barthes (full text), as well as Style and the Representation of Historical Time by George Kubler and The Aesthetics of Silence by Susan Sontag)
by Matt Bailey

Surprising as it may seem, Gaspar Noé may not be the best-known member of his family. That honour belongs to his father, Luis Felipe Noé, a semi-abstract painter. Noé père was born in 1933 in Buenos Aires, Argentina, studied art and law, and worked as a journalist. Noé fils was born in 1963, two years before his family relocated to New York, where his father was awarded a Guggenheim fellowship. Having returned to Buenos Aires in 1970, the family moved again, this time to Paris in 1976. In his teens, Gaspar Noé entered the École Nationale Supérieure Louis Lumière, where he studied a rigorous program of cinema and photography.

Noé made two black-and-white short films upon finishing his studies, Tintarella di luna in 1985, and Pulpe amère in 1987. Tintarella di luna tells the simple story of a woman who leaves her husband for her lover. Pulpe amère shows a man attempting to rape his wife as they listen to a radio program of a man expressing his thoughts of profound love.

Noé's first major film was the 40-minute Carne (1991), produced through Les Cinémas de la zone, his partnership with filmmaker Lucile Hadzihalilovic (La Bouche de Jean-Pierre [1995], L'Ecole [2003]). The film, a tale of a horse butcher who takes revenge on a man he mistakenly believes to have raped his autistic daughter, marks not only the first time Noé incorporated on-screen textual warnings, epigrams, and notes into the filmic narrative, but also his first collaboration with actor Philippe Nahon, who has appeared in all of Noé's subsequent features. Carne also includes a number of 'shock' elements -- gunshot sound effects, loud martial chords on the soundtrack, and rapid editing -- that would become major characteristics of his style in the later Seul contre tous (1998). Between the shocks and warnings in the film, however, there are veins of delicious dark humour to be found: a montage showing the butcher chopping meat as intertitles mark the passage of year after year, and the mute daughter blankly watching Herschell Gordon Lewis' Blood Feast (1963) on television.

Despite the success of Carne – the film won the Best Short Award at the 1991 Cannes Film Festival – Noé was unable to acquire funding for his first feature. Noé recalls the response: “Everybody said, no, do a normal movie. Carne was too violent, now you have to calm down, you have to grow up. Why don't you do a genre film?”

In 1994, still unable to find funding, Noé created a half-hour experiment in mass hypnosis for Canal+ Television (Une expérience d'hypnose télévisuelle, 1995). Later works for television include a public service announcement against hunting, with voice-over by Alain Delon, called Le Lâcheur d'animaux d'élevage (1995), and a sexually explicit short called Sodomites (1998) for a series promoting condoms called A coups sûrs that also included shorts by Jacques Audiard, Marc Caro, Lucile Hadzihalilovic, and Cédric Klapisch.

Determined to make a feature and to make it his way, Noé eventually procured production money from the clothier Agnès B. and from Canal+ Television, who thought they were funding a short film (1). Over the course of four years, Noé completed his film, Seul contre tous, in which he attempted to depict France as he saw it: a France more akin to that of Victor Hugo, Émile Zola, and Henri Charrière than the bourgeois, urbane France of the films of "more civilized filmmakers" (2).

Before I begin discussion of Noé's features, I would first like to introduce a theoretical framework for considering them based on the writings of Sergei Eisenstein and of several contemporary historians of early film. The term "cinema of attractions", which originates with the writings of Eisenstein, has been in the parlance of film studies for at least a decade to describe a type of early cinema. It has proven useful, but the term, I think, can be redefined in the context of Eisenstein's ideas and applied to films other than those old flickers from a hundred years ago. In fact, there are filmmakers throughout film history (William Castle and Luis Buñuel, to name just a couple) who have made consistent use of a system of attractions in their films. Gaspar Noé is one of these filmmakers. His major films -- the short Carne, and the features Seul contre tous and Irréversible (2002) -- exemplify modern takes on the cinema of attractions. What follows is an attempt to redefine the cinema of attractions to express more clearly the original intent of Eisenstein and how it might apply to the films of Gaspar Noé.

Noé has described Seul contre tous as “the tragedy of a jobless butcher (Philippe Nahon) struggling to survive in the bowels of the country”, but it is quite a bit more than that (3). Noé's goal in making the film was to create a film so confrontational and so in opposition with contemporary French cinema that it would be universally despised – a film to “dishonor France” (4). Noé has been asked if his film, in which the butcher expounds on the evils of women, homosexuals, blacks, Arabs, and the French with equal venom, is racist. His reply was in the affirmative: “Yes, it's an anti-French movie” (5). Noé's film is not just in opposition to mainstream French cinema, but to all French cinema, even the festival-oriented cinema of which he is a part. It is Noé's opinion that the “French film industry is very conservative, like the 19th century salons, a private club where six people decide which movies should and shouldn't be made” (6).

The film has been compared in tone to equally relentlessly downbeat films such as Taxi Driver (Martin Scorsese, 1976), Fox and His Friends (Rainer Werner Fassbinder, 1975), Los Olvidados (Luis Buñuel, 1950, Noé's favourite film), Salò (Pier Paolo Pasolini, 1975), Straw Dogs (Sam Peckinpah, 1971), and to the scathingly misanthropic novels of Louis-Ferdinand Céline. Like Taxi Driver, the film features an unremitting first-person narration that takes us inside the head of the protagonist, yet explains little about his motivations. Like Salò, there is a frankness regarding sex and violence that would border on the pornographic were it not for the fact that both films treat their subjects as a human function as inescapable and base as defecation. To this extremely negative tone, the film adds a series of often randomly placed 'shocks' produced by combining the amplified sound of a gunshot on the soundtrack with abrupt camera movements (accelerated by skipframes) that move the framing into close-up. Noé has described the desired effect of these shocks as "like being electrified, like an epileptic seizure" (7).

The film also features a spare, militaristic score (which prompted one critic to remark that "just hearing the music made him want to call Amnesty International") and recurrent Godardian intertitles that, when analysed in the context of the film, seem to make no sense and appear to be a spoof of those selfsame Godardian intertitles. Both formal elements seem to mock, albeit subtly, a certain style of didactic political filmmaking while taking advantage of the distancing effect of that style's tropes. As one might expect of a film of such extremity (in tone, form, and content), the critical reception was equally extreme.

While Noé's goal may have been to make a universally despised film, he was not successful. What did result, however, was a near complete polarisation of the audience. A few critics responded with bewilderment, but most either loved it or hated it. At the 1998 Cannes Film Festival, the major shock regarding the film came not from the film itself but from the fact that it won the Critic's Week prize despite having gone relatively unseen at the festival (8). The film was also voted the Prix de la Jeunesse by young audiences, but the honour was withheld due to the likelihood that the film would be rated X in France.

Of the positive reviews, most came primarily from critics based in America. J. Hoberman of The Village Voice, who aptly characterised the film as "part circus stunt, part social tract", also called the film "lacerating in its precision" (9). Gavin Smith, editor of Film Comment, wrote pieces for his own magazine and for The Village Voice praising the film's uncompromising vision, and Jonathan Rosenbaum of the Chicago Reader classified the film as a "masterpiece". Of the critics who hated the film, Cahiers du cinéma (who can usually be counted on to champion difficult films) called it "an exercise in hatred against women, homosexuals, and old people, among others," and stated simply, "This is not a good film" (10). A reviewer for The Financial Times deemed the film "very nasty, very ugly, but possibly very educational" (11).

Several viewers have responded to the film considerably less kindly than even the critic from Cahiers. One critic, Jeremy Heilman of the website MovieMartyr.com and Time Out New York, felt the film was morally repugnant and stated that the film has certain visual pleasures, “but they don't redeem the depraved heart that beats at the center of the film.” Another web writer, Nathan Shumate of Cold Fusion Video, so reviled the film that he “hated it to a degree that [he] didn't think possible” and stated that his time watching the film “could more profitably have been used carving random ZIP codes into [his] abdomen with a straight pin.”

I cite these reviews neither out of amusement nor solely out of a desire to provide context for the reception of the film. What is interesting about these reviews is not necessarily their take on the film but instead the extremity of their reaction to the film. What is it about the film that provokes such vitriol? For Noé, perhaps, the reaction the film received did not go quite far enough. In an interview with The Independent, he expressed regret that the film was not banned in France since that would have been an official stamp that he “had made something shocking” (12). This statement from Noé is important because it shows a desire to offend, to provoke, to affect the audience in a primal way. It shows in Noé a desire to traffic in a cinema of attractions.

The concept of a 'cinema of attractions' comes from recent scholarship on early film, most notably that of Tom Gunning. The term is generally used as a replacement for 'primitive cinema', which had come to be considered derogatory. Both terms are meant to provide a category opposite narrative cinema, though current scholarship, particularly that of Lea Jacobs and Ben Brewster, has tended to see early cinema as an evolutionary continuum rather than a system of narrative and opposition to narrative. Whatever its ultimate currency, the term 'cinema of attractions' has given film scholars a way to discuss a category of film that, to quote Gunning, “directly solicits spectator attention, inciting visual curiosity, and supplying pleasure through an exciting spectacle” (13). The term 'pleasure' in Gunning's definition, however, is somewhat problematic in that the desired response of the cinema of attractions is not always pleasure. He goes on to explain that some of the attractions contained in early cinema include “recreations of shocking or curious incidents”, including executions (14). It is debatable whether witnessing an execution actually elicits pleasure; a better way of defining a cinema of attractions may be filmmaking intended to elicit a primal response from the spectator in some way apart from the narrative. This definition is in fact much closer to the original intent of Sergei Eisenstein, who first applied the term 'attractions' to the performing arts.

In his 1923 essay on theatre, “The Montage of Attractions”, Eisenstein proposed a system of 'attractions' – aggressive actions in the presentation of a theatrical work – that subjected the audience “to emotional or psychological influence… calculated to produce specific emotional shocks in the spectator” (15). These shocks were intended to undermine the absorption of the spectator into the narrative and to keep the spectator thinking objectively about what was being performed on the stage. The idea came from the presentational performances of the Grand Guignol and the traditional circus – low forms of entertainment in opposition to the high art of realist representational theatre (16). The concept of attractions in theatre was not motivated merely by a desire to, as Gunning puts it, épater le bourgeois (17). It was motivated by a desire to make the political message of the theatrical piece clearer and more direct, without the trappings of narrative, including melodrama, allegory, and audience identification with the characters or their situation. But how do the attractions of the stage translate into the attractions of the screen?

It is important to note that mere camera tricks and special effects in the cinema do not make for adequate attractions. The definition I have proposed requires a primal response such as fear, shock or laughter. Special effects such as contemporary computer generated effects are designed to expand the illusory capabilities of film, not to break the illusion. To return to Eisenstein, one of the methods he proposed of eliciting response from the spectator was to set off firecrackers underneath the seats (18). For a cinema of attractions to work, the attractions must use the devices and technologies available to the cinema.

Noé is often lumped together with a group of contemporary directors based in France who make thematically and formally aggressive cinema: Claire Denis, Catherine Breillat, Michael Haneke and Jan Kounen, among others. Only Noé, however, consistently and systematically deploys attractions outside of the narrative to elicit primal responses from the spectator. In Seul contre tous, the primary shocks come from the intermittent blasts of soundtrack noise that accompany quick reframings. There is absolutely no narrative reason for the noises; they exist to momentarily alarm the spectator, to jar them out of any desire for identification with the film's contemptible protagonist and narrator and to reinsert them in the narrative flow anew. Noé has called his film a comedy of extremism and has stated that his use at the end of the film of the 'fright break', a device lifted straight from William Castle's film Homicidal (1961), is a joke (19). Indeed, one would hardly expect anyone who has sat through the movie to get up and leave the room at that point before finding out how the film ends. The real joke on the audience is the false ending that follows the 'fright break'. The butcher does not really murder his daughter; he just molests her. Ha ha. The intended comedic effect is evident from the music. Once the false ending has finished and the butcher snaps out of his violent reverie, the plaintive strains of Pachelbel's “Canon”, that old wedding ceremony chestnut, rise on the soundtrack. Everything seems to be all right, but anyone with an ear for irony knows that Noé is playing a joke. As the butcher begins to molest his daughter, the camera tastefully averts its gaze and the film ends.

It is no wonder the film evoked such strong reactions from the audience. After a systematic series of shocks to the system, a Klaxon-accompanied 'fright break', and a horrifying and gruesome false ending followed by an even more horrifying and morally gruesome ending, one would either feel exhilarated or, as critic Hal Hinson did, “wrung out and wasted” (20). Hinson also compared the sensation of viewing the film with being hit in the stomach with a bowling ball.

Irréversible continues Noé's use of attractions to underscore the message of the film while standing apart from and even working against the narrative. Irréversible features a number of scenes that employ aural or visual amplifications or modifications in order to provoke an effect. Perhaps the most infamous of these is a scene in which a man gets his head bashed in with a fire extinguisher, the sound of which is exorbitantly amplified and the sight of which is made more nauseating by the unflinching yet wildly gyrating camera. The pulsing score that accompanies the film contains, for the first sixty minutes, a constant 27-hertz tone specifically designed by Noé to cause nausea in the audience (21). Finally, the film ends with a series of strobing, alternating black and white frames that have a hypnotic yet greatly disorienting and dizzying effect. The attractions Noé employs in his latest film go so far beyond what he used in his debut feature that spectators fainted at the beginning of the film at the Cannes premiere, perhaps from the vertiginously tilting opening titles that run backwards and the grinding, undulating bass tones that accompany them (22).

Because of the busy schedules of its internationally in-demand stars, Irréversible had to be shot very quickly. There was no script, only a basic outline, and nearly all of the dialogue was improvised. Noé shot the film, largely by himself, in Super 16 mm with a compact camera. While his camerawork is the source of some of the film's visceral effects, particularly during its opening and closing scenes, many of the most impactful effects were added during the film's extensive post-production.

Noé had the film transferred from film to high definition digital video, where the obvious special effects (the initial murder sequence) as well as dozens of subtle or invisible effects were added by Noé in association with the digital artists at Mac Guff Ligne. Examples of the latter type include the scene of Marcus' rampage through the sex club, which was composed of thirty separate shots strung together digitally; the delirious zooms and reframings in the scene that follows, where Marcus tries to track down information on Le Tenia; digitally added shakiness to the camera during the police interrogation scene; a visual analogue added to match the restless heartbeat on the soundtrack during the scene of Alex's recovery from the underground tunnel; and blood, wounds, and body parts added to the now notorious rape scene.

While Irréversible in essence recapitulates the rape-revenge scenario of Carne and is similar in tone to Seul contre tous, the film is largely quite different from both. Whereas Seul contre tous depends primarily on the torrent of rage delivered through the butcher's internal monologue, Irréversible is an almost wholly visual experience. In accordance with its improvised nature, there are only a few lines of any importance to understanding the film's events. The visual experience, with all of its dazzling technical proficiency, ability to shock, and storytelling qualities, is the film. Although one can hardly imagine Irréversible as a silent film, it could very well be one with almost no loss of narrative information or, indeed, propensity to stun, so strong is Noé's command of visual communication. Despite the faintings and hyperbolic critical reactions (positive or negative), there is more to Noé's project than just shock.

The goal of attractions, as stated by Eisenstein and as employed by Noé, is not merely to shock. Noé himself has stated that shocking an audience is too easy and that he is interested in inducing a kind of trance state in the audience so that they can receive the ideas of the film more clearly, acting as a catalyst to release reactions good or bad (23).

It is too early to tell whether Noé will continue to expand upon his mastery of the cinema of attractions or whether he will abandon it for experimentation in other areas or perhaps even abandon it in favour of relatively conventional filmmaking. What is evident, however, is that even at this early stage of his career, Noé has taken the tools cinema makes available and used them to elicit very strong, primal reactions from his audiences. The question he seems to leave to his critics, and to history, is: is it enough?

Saturday, January 27, 2007

from here

An upper middle class family — mother, father, young son, and dog — travel to their country home, boat in tow. Soon after they arrive, their household is invaded by a pair of young men in tennis whites who hold them hostage and torture and degrade them, both physically and psychologically. Relentless in its vision of brutality, the film questions the sanity and sensibility of the audience that pays to sit through such a display of human crudity and baseness.

Didn't we hash all this out for ourselves about the time Wes Craven made Last House on the Left? The key difference between Funny Games and Last House is that Craven, who would later become the undisputed king of both multiplex horror and the reflexive slasher film, made a straightforward B-movie that catered to the grindhouse crowd. Haneke's self-consciously bleak film (which is, incidentally, about one-tenth as disturbing as Craven's "straight" version) is an implicit criticism of its own audience.

Essentially, Funny Games plunders the nastiest bits from Straw Dogs, A Clockwork Orange, and I Spit On Your Grave and fashions them into a film that eschews context and any sense of closure in favor of a metacinematic style that, apparently, seeks to identify the audience with the psychopaths. The first time one of the killers (the smart, handsome one) tossed a wink back over his shoulder at me, the allegedly complicit viewer, I thought, "Aw, fuck. It's a violent movie that wants to blame me for liking violent movies."

Let me say unequivocally that I do like many violent movies, sometimes even cruelly violent ones. And so what? Unless I begin confusing brutality in the movies with brutality in the real world (and Haneke seems to suggest that I should), it seems to me that it's a private matter between me, Wes Craven, John Woo, and Sam Peckinpah. However, unlike his forebears in violent cinema, Haneke isn't investigating the expressive potential of violence. What separates Funny Games from its less outwardly intellectual genre counterparts is its obstinate refusal to give the audience what it wants. Funny Games denies its victims even a minimal triumph over their aggressors — an element that's crucial to the horror formula. It's an interesting bit of theory, but it comes at least 20 years too late to qualify as "insight." Truth be told, I think the Scream movies are a more valuable addition to the genre.

That's not to say that Haneke's not onto something interesting. I just wish it cohered better aesthetically. It's hard to become emotionally involved in a picture (and I think Funny Games demands some level of involvement) when you can feel the director forcing every minute of celluloid right down your throat. Funny Games should be an art picture with an exploitation sensibility. Instead, it's an exploitation picture that can't free itself of the rigors of its own pretensions. One static tableau, held in the camera for an uncomfortable length, is reminiscent of Bergman, although it lacks Bergman's sense of the mystical. And even a Bergman film would collapse under the strain of the austerity that blanches every frame of Funny Games.

Eventually, it was all I could do to keep from laughing. This is a very talented cast, particularly Susanne Lothar in the physically demanding role of Anna, the wife. But toward the two-thirds mark, the characters have been beaten and broken in so many different ways that their struggles take on a desperate comic poignance, like something out of Buster Keaton or Chuck Jones. Haneke heaps it on so thick that for a moment I hoped he was making a black comedy.

Am I proving the director's point by failing to be appalled at the proceedings? In my own defense, I must note that this all reeks so baldly of a set-up that it's hard not to approach it clinically. We're given no real stake in the lives of the family, because the characters are drawn in the blandest possible terms. When we first see them, they're cruising through the country, boat in tow, playing banal "name-that-composer" games with the Sony CD player. They're carefree, thinking only of boating and golfing. When the two young psychos come to visit, it's plain that they've got the plum roles, with amusing quirks and a frightening agenda: They're toying with an overprivileged but woefully underequipped family unit. If we take even a grim sort of pleasure in their sick shenanigans, Haneke has put us where he wants us.

Haneke's games are played on a smaller scale, as well. When Anna is forced to undress for her captors, the camera moves in deliberately for a close-up, denying us the presumed pleasure of seeing her naked. Later in the film, in an apparent sop to the audience's lecherous nature, we see her strip down to a completely sheer bra. Elsewhere, Haneke doles out pleasure and then takes it away, as in one key scene that's literally rewound and replayed to excise a moment when the tables are turned on the captors. Early on, he throws in a close-up of a kitchen knife that's been left behind, using a standard trick of the thriller to clue us in that the knife will be an important prop later in the film. When that moment comes, of course, Haneke makes it as anticlimactic as possible, further confounding our expectations.

So Haneke is as skillful a manipulator of an audience's sensibilities as the director of an average B-grade slasher film from the 1980s. So what? Michael Powell's proto-slasher flick, Peeping Tom, implicated its audience way back in 1960, and Haneke doesn't add anything new to the discussion. Later films, such as Last House and John Carpenter's Halloween, would give theorists plenty more to ponder, such as whether the audience's interest in a horror film is necessarily sadistic. (For the record, I nearly always identify with the "victim" when watching horror films, and I think most of the audience does, too — teenaged girls, especially, flock to movies like the Scream series or I Know What You Did Last Summer for three reasons: they like to see strong actresses in the lead roles, they enjoy watching the cute young guys cast opposite the girls, and finally, they want to be scared.)

Viewed as a horror film, Funny Games is little better than mediocre. There are a couple of honestly gruesome moments, and Arno Frisch is credibly maddening as Paul, the smug psychopath. A long stretch at the end of the second act is fairly suspenseful, and represents the film's closest approach to a conventional thriller. But Haneke's agenda precludes thrills. He's intent on denying the audience everything but the basest pleasures imaginable. In my case, suspense quickly gave way to impatience, as I waited out Haneke's grand experiment. The whole piece bears the patina of Art, negating any real horror or discomfort on the part of the audience.

As theory, it's just obvious. If it's really a metacinematic treatise on the audience's complicity in sadistic entertainment that you're looking for, well, let me refer you to the Belgian shocker Man Bites Dog, which covered this ground more rigorously a few years ago. (Beyond that, you may want to just read a book.) In a weird attempt to shore up his thesis, Haneke even sticks in a quick conversation at the end of the movie in which one character alleges that, if we see something happen in a movie, it's fundamentally the same as seeing it happen in real life. Now, Haneke might have a point where real exploitation pictures are concerned. It's pretty obvious that a movie like Last House on the Left put its performers through the wringer in catering to an exploitation audience's thirst for flesh and blood. But I don't think that's what he's getting at. Rather, he wants to make his audience uncomfortable with its presence at his own picture. The gambit doesn't work, however, since Funny Games is so obviously art-with-a-capital-A. Despite some effective moments, Funny Games remains an ersatz nightmare, a pedant's lecture on sadism, and an intellectual's idea of a shock tactic.

Does it sound like I'm criticizing Funny Games for not being cruel enough? My complaint is that it underestimates the horror film, the slasher picture, and even the upper-tier exploitation movie. Relentlessly reductive, Haneke stacks the deck by stripping the genre of its redemptive qualities and transgressive power and then chastises the audience for its presence at such an untoward spectacle. The real cinema of sadism is more horrifying — and more rewarding — than this film imagines.

Friday, January 26, 2007

by Elena Filipovic, from here

Cujus abditis adhuc vitiis congruebat. - Tacitus

Claiming not to be a writer, philosopher, or even an artist, "but first, foremost, and always, a monomaniac," Pierre Klossowski (1905-2001) has long remained a cultish figure, better known for his literary works yet ultimately unclassifiable. He was born in Paris to parents of Polish origin who raised Pierre and his younger brother Balthazar (who would become famous as a painter under the name Balthus) between France, Germany, and Switzerland, immersed in fine art, music, and literature. Their father was a painter, art critic, and specialist in 19th century art history and their mother, a painter and student of Bonnard. The brothers were weaned on culture, nourished by a milieu of artists, poets, and intellectuals, and encouraged by their mother's lover, Rainer Maria Rilke, to pursue the arts - Balthus with a paintbrush and Pierre, at first, with words. Pierre became an accomplished translator (of Hölderlin, Kafka, Nietzsche, and Virgil, among others) and an erudite philosophical essayist, finding intellectual exchange among the Surrealists and, more lastingly, in the dissident faction grouped around Georges Bataille. A bout of religious fervor struck him during the years of the Occupation, and Klossowski turned to theology, spending several years living as a seminarian with the Dominican order. Having decided against monastic life, he met and married Denise Marie Roberte Morin-Sinclair in 1947. She and Catholicism would everywhere haunt his future work. A book on the Marquis de Sade read through a theological lens, Sade, mon prochain (1947), and his first novel, La Vocation suspendue (1950), no less imbued with religious questioning, followed. For Roberte ce soir (1954), his second novel and the first of an erotic trilogy introducing the character "Roberte" inspired by his wife, Klossowski's publisher advised him to produce a deluxe, illustrated edition to be sold by subscription so as to avoid French censorship.
Balthus proposed a set of illustrations, but they didn't satisfy his brother, who ended up drawing his own clumsy lead pencil sketches to accompany the publication. Thus began a slippery imbrication of textual and visual narrativities.

Klossowski continued to draw and, at the encouragement of friends such as Bataille, Alberto Giacometti, and André Masson, he showed his drawings in a small private exhibition in 1956. It was more than a decade later, in 1967, that he first exhibited his works publicly. By 1972, the critically acclaimed author had given up novel writing altogether in order to devote himself fully to drawing. This "absolute rupture with writing," as he called it, also accompanied a move from the monochromatic lines of lead pencil to the soft hues of color pencils. Although critics in the period saw them all simply as "bad" drawings, Klossowski's graphic work insisted on a formal vocabulary that refused both belle forme and the shock of the new of avant-gardism. Some 300 works made over 40 years trace Klossowski's engagement with his peculiar brand of drawing - eschewing the technical training, ease of coverage, saturation of colors, and more permanent support of painting. Often working on sheets of paper spread directly on the floor, he insisted on making his pencil drawings nearly life-size - recognizing, of course, that this was a particularly demanding and time-consuming combination. But their size, and thus their engagement with the viewer's own body, was crucial, as was the choice of pencil. Klossowski appreciated the precision of his unskilled lead or color line - the shades didn't spread or soften and there was nothing polished about the result. One can see every shaky, untutored, and incomplete stroke the artist made on any given page. A series of colored lines in the form of "Roberte" would not convince anyone they were actually looking through a window onto a world with a woman in it. And this is exactly how Klossowski wanted it.

Let it be said, Klossowski's drawings are odd. Look at La cuisine de Gilles (1976), in which an effeminate boy is fondled and bitten fireside by the 15th century nobleman Gilles de Rais, infamous for his murderous sexual debauchery, necrophilia, and unshakable Catholic faith. Look at Roberte, naked and sleeping with a fully dressed, wand-waving lilliputian Gulliver on her knees in Roberte et Gulliver I (1980). Look, in Tarquin et Lucrèce (1976), at the flattened space from which Tarquin is supposed to be emerging, the way Lucrecia's leg slides off her improbable bed, or the way, despite the struggle, one of her ankles remains delicately covered. There is something both absurd and strangely disquieting about Klossowski's large-scale erotic renderings, a fact compounded by their lack of perspective, technical sophistication, and finish. In his graphic oeuvre, one will not find a single landscape or still life, no pretty Arcadias and no studies of baskets of fruit. Klossowski made a specialty of overtly theatrical tableaux vivants. In them, figures appear off balance and out of proportion, frequently supplicating, reprimanding or seducing (while gesturing against it with the other hand), and often in some state of undress. They look as if they are suspended - in action and in another time.

In their subject matter as in their style, the drawings relentlessly evoke long outmoded places, moments, and forms: Classical Greece, Late Antiquity, Early Renaissance frescos, Sadean decadence, sinuous Baroque poses, Catholic ritual. Repetition and citation are central to these artifacts. As if announcing this, the atypical drawing Le grand renfermement II (1988) juxtaposes a pantheon of disparate temporalities, religious and secular figures, and drawn citations from the author-artist's real and fictive worlds, including a rendition of Leonardo da Vinci's Renaissance panel painting The Virgin and Child with St. Anne (1510) in one corner and a reduced-size version of one of Klossowski's early portraits, Georges Bataille (1955), in another. Klossowski drew a number of portraits of friends (Bataille, Roland Barthes, Louis René des Forêts, André Gide…) but, for the most part, his drawings are stages in which his literary characters appear. He draws and redraws Roberte above all, but also Diane, Lucrecia, or Judith (who all appear with the unmistakable likeness of Klossowski's wife, model, and muse, Denise), as well as the young Ogier, Tarquin, Gulliver, the Marquis de Sade, and any number of clerics, saints, and dwarfs. Androgynous women and effeminate boys inhabit the world of mystical visions, moral instruction, theological initiation, and carnal corruption that Klossowski speaks of in his novels. His drawings thus inescapably recall his literary works, which are themselves filled with descriptions of paintings, photographs, and projected images. Depicting characters from Klossowski's final novel Le Baphomet (1965), the drawings La Tour de la Méditation (1976) and Ogier morigénant le frère Damiens (1990) portray scenes of erotic ambiguity in contexts lined with the symbols of religious order (Christian crosses, ecclesiastical dress, Latin church text…).
This mix of nudity, sexual innuendo, and theological references - so characteristic of his oeuvre - should make the images shocking, difficult, or scandalous. But in their strangeness and curious instability, they manage to be endlessly puzzling and captivating instead. Untranslatable from or into words, Klossowski's drawings are never mere illustrations of his novels and, invariably, they resist and retell the written narratives from which they seem to emerge.

Admired and defended by some of France's most renowned thinkers, including Maurice Blanchot, Gilles Deleuze, and Michel Foucault, Klossowski's graphic works have nevertheless remained largely misunderstood, ignored, and invisible in the history of art. Dense in their references and little like anything else of their era, they are hard to categorize. Yet, it is precisely in their troubling of categories and coherence that the full significance of this oeuvre emerges. Repetitious, anachronistic, and internally contradictory in their forms, the drawings also combine fragile, often pieced-together sheets of paper with large-scale forms, keeping in tension precarity and grandeur. They combine an awkward, seemingly amateurish style with extremely cultivated references, bringing together hesitancy and seriousness. They combine fantastic, pornographic scenes with an utterly un-arousing stiffness, orchestrating a sense of seduction and absurdity, animation and immobility. Only the sum of such incongruous combinations could provide a fitting stage for the presentation of Klossowski's monomaniacal preoccupation (and one of modern Western society's most fundamental oppositions): the meeting of the erotic and the sacred.


[Images: La cuisine de Gilles; Ogier morigénant le frère Damiens]

by Jeff Vail, from jeffvail.net

Constitutional theorist Philip Bobbitt, in his seminal work “The Shield of Achilles”, proposed that the 20th century was defined by the ideological conflicts between socialism, fascism and capitalism. These competing ideologies purported to offer the hierarchal control structure most suited to meeting the needs of the people. In the course of this conflict, asymmetric warfare—the use of non-hierarchal structures to successfully confront hierarchy—was refined. The conflicts of the 20th century forged current theories of rhizome—the name for non-hierarchal, asymmetrical and networked patterns of organization. Empowered by a revolution in communication technology and the spread of democratic freedoms, the conflicts of the 21st century will be defined not by past political ideologies, but by a much more fundamental, structural conflict: hierarchy vs. rhizome.

Rhizome structures, swarming media and asymmetric politics will not be a means to support or improve a centralized, hierarchal democracy—they will be an alternative to it.

Many groups that seek change have yet to identify hierarchy itself as the root cause of the problems they confront, but they are already beginning to realize that rhizome is the solution. Movements as diverse as American Progressives, al-Qa’ida and the “New Bolivarians” are already consciously incorporating rhizome elements into their actions. As theoretical knowledge and systems understanding improve, this conflict will become more clearly defined along the lines of hierarchy vs. rhizome.

Rhizome has a long history of application within military theory, but its use as a non-violent political tool is still rapidly developing. Rhizome tactics such as swarming were used successfully at the 1999 WTO protests in Seattle, and less successfully by protesters at the Republican National Convention in 2004. A methodology of decentralized “leaderless resistance” first formalized by white supremacists is now being used with some success by the Earth Liberation Front and the Animal Liberation Front. Rhizome tactics have found notable success in economics as well, with rural communities adopting localization policies, distributed power generation increasing, farmers’ markets spreading, and “slow food” and regional cuisine attracting a growing focus.

But despite recent successes, the value of rhizome structure and strategies continues to be constrained by a failure to frame conflicts in clear “hierarchy vs. rhizome” terms. Political activists seeking to use rhizome concepts to improve a hierarchal structure such as America’s hierarchal democracy will ultimately fail. Similarly, the protesters at the Republican National Convention were effectively controlled by police because they failed to identify their purpose—and frame their tactics—in terms of rhizome pattern and structure. The OODA Loop suggests that the victorious party is the one that can more quickly Observe lessons learned from past conflict, Orient themselves to identify their shortcomings in light of these lessons, Decide on a course of action to address identified shortcomings, and then put those decisions into Action. The failure of the protesters at the RNC was largely their failure to evolve their doctrine as quickly as the NYPD was able to evolve theirs. It appears that neither side explicitly framed their efforts in terms of hierarchy vs. rhizome, but had the protesters done so, they would have been better able to access the existing body of knowledge provided by the rhizome “Doctrine Network”, consisting itself of rhizomatic nodes such as http://www.globalguerrillas.com/ and http://www.jeffvail.net/.

The Rhizome Toolkit: Blogs, Open Source Warfare & the Doctrine Network

Hierarchies exert command and control via a centralized, top-down process. This creates numerous layers through which information must be relayed, resulting in an information-processing burden that significantly slows hierarchy’s execution of the OODA loop. The advantage of rhizome—aside from preventing the abuse of power endemic to hierarchy—is its superior information processing capability. One rhizome example, the network of political blogs, demonstrated its information processing ability during the 2004 election season, regularly trumping hierarchal media establishments on breaking stories.

As global conflict is increasingly framed within the context of hierarchy vs. rhizome, the doctrine and tactics of rhizome action are beginning to coalesce into an effective system. This system, founded upon the information processing capability of rhizome, consists of infrastructure, distributed decision making and general doctrine.

The general doctrine of rhizome action, whether peaceful or violent, is based on the model of Open Source Warfare (Global Guerrillas link). Without the centralized command structure of hierarchy, actions and tactics are proposed by the network and adopted by constituent nodes via a process similar in many ways to a clinical trial. Some node devises a tactic or selects a target and makes this theory publicly available—Open Source. One or several trials of this theory are conducted, and the tactic is then adopted and improved upon by the network as a whole based on its success. This may seem like a contrived and overly mechanistic system, but in fact it functions very much like biological evolution.
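The propose–trial–adopt cycle described above has the same structure as an evolutionary algorithm: variation (a node publishes a new tactic), selection (noisy trials score it), and retention (the network keeps and mutates what works). A toy sketch under that reading; every name and number below is an illustrative assumption, not from the essay:

```python
import random

# Toy model of the "clinical trial" adoption process: nodes publish tactics,
# other nodes run noisy field trials, and the network retains the best
# performer while some node publishes an improved variant of it.
# All tactic names and effectiveness values are hypothetical.

random.seed(42)  # deterministic for illustration

def trial(effectiveness):
    """A single noisy field trial of a published tactic."""
    return effectiveness + random.uniform(-0.1, 0.1)

def evolve_tactics(tactics, generations=20, trials_per_tactic=5):
    """Each generation: trial every tactic, retain the best, publish a mutated copy."""
    pool = dict(tactics)  # name -> true effectiveness (hidden from the nodes)
    for gen in range(generations):
        # Selection: nodes share trial results openly and converge on a winner
        scores = {name: sum(trial(e) for _ in range(trials_per_tactic)) / trials_per_tactic
                  for name, e in pool.items()}
        best = max(scores, key=scores.get)
        # Variation + retention: keep the winner, publish an improved variant
        pool = {best: pool[best],
                f"{best}+v{gen}": min(1.0, pool[best] + random.uniform(0.0, 0.05))}
    return max(pool, key=pool.get), pool

winner, pool = evolve_tactics({"tactic_a": 0.3, "tactic_b": 0.5})
```

No node ever directs another; each generation's winner emerges from shared trial results, which is the sense in which the essay says the system "functions very much like biological evolution."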

Rhizome uses distributed decision making—the Doctrine Network—to evaluate, improve and adopt Open Source Warfare concepts. This distributed decision making is facilitated by some type of non-hierarchal communications infrastructure. Two examples will help to illustrate this process:

Al Qa’ida and the Rhizome Toolkit

Osama bin Laden and other “central” al-Qa’ida figures are increasingly removed from everyday operations and instead function as nodes in the al-Qa’ida Doctrine Network. Bin Laden & crew propose targeting strategies, praise selected actions and generally contribute to the clinical trial of new strategies and procedures. They communicate with their network via largely Open Source methods—tapes sent to Arabic-language satellite TV channels, jihadist websites, etc. Other groups such as that of Abu Musab al-Zarqawi—in no way under hierarchal control from bin Laden—then take the Open Source Warfare outcomes from these clinical trials and put them into action. The train bombings in Madrid and subway bombings in London are examples of this process in action, as are the steadily improving tactics of insurgents in Iraq.

Progressive political bloggers in America, while markedly different in ideology from al-Qa’ida, function in a remarkably similar manner. The network of blogs serves as a Doctrine Network, functions as a Clinical Trial for political criticism, and constitutes an Open Source communication infrastructure all at the same time. One blogger writes a persuasive argument against President Bush, another improves or expands upon it and posts it to a heavily visited site, and feedback and critiques further develop the argument until it is a fully sharpened weapon in the progressive Open Source Warfare arsenal.

While these examples illustrate that rhizome concepts are influencing political processes around the world, the groups involved largely fail to consciously recognize the rhizome system they employ. Their true power—and the course of conflict in the 21st century—will be defined not simply when they realize that they must frame their struggle in terms of rhizome action, but when they realize that rhizome structure IS THEIR STRUGGLE. Widely disparate groups, from al-Qa’ida to American Progressives to the ELF and South American indigenous peoples, are ultimately struggling against hierarchy. Their individual movements have been grossly distorted and perverted by remnant ideologies and by local and historical circumstances, but at their core they are in fact quite similar. If they are able to recognize their unity of purpose, or if they spawn a broader movement with such a unity of purpose, then this coherent rhizome pattern will spread and effectively check and reduce hierarchy. If they remain fragmented and separate they will still be capable of harassing the dominance of hierarchy, but will effect little real change.

While it may seem improbable for Progressives and al-Qa’ida to decide to join forces for the common good, it is certainly within the realm of possibility to expect the various factions within the broad Progressive movement to realize that their pet causes are all derived from a basic conflict with hierarchy, and that the solution lies in consciously adopting a rhizome structure. A conscious focus on rhizome organization will lead to improved functioning of the Doctrine Network, Clinical Trials and communications infrastructure. Individual bloggers will realize that their minor improvement or addition to another’s idea is critical to the functioning of the system. The divide between talk and action will diminish as a better understanding of the rhizome process leads protesters and economic localizers to realize that they must blog, and bloggers to realize that they must protest and purchase wisely. The interconnectivity between anti-globalization, economic localization, human rights, freedoms, environmental concerns, and equal opportunity policies will become clear, and the combined power of these factions, working together, will be far greater than the sum of their parts. Perhaps most importantly, the logic of a unified effort will finally be able to convince the average person that they, too, have a self-interested stake in this struggle, and that they must act on the side of rhizome. A conscious and unified rhizome movement would be powerful indeed.

Wednesday, January 24, 2007

In the middle of a tangled forest lies the deep, dark pond. Its waters are cold, and very still. They are so clear that the pond appears black, so deep are its depths. On this pond's surface the sky is pristinely reflected, and sometimes through this obverse sky float clouds, which one might call thoughts were one so inclined. Each cloud has a unique shape, and the origin of each diversity is as foreign to the pond as its destination. Many days pass before a storm appears. The sky darkens; the clouds are many. Thunder peals. When the rain begins, the surface of the pond is disturbed, and the reflection disappears.

Saturday, January 20, 2007


de·fin·i·en·dum –noun, plural de·fin·i·en·da
1. something that is or is to be defined, esp. the term at the head of a dictionary entry.
2. Logic. an expression to be defined in terms of another expression previously defined.

de·fin·i·ens –noun, plural de·fin·i·en·tia
1. something that defines, esp. the defining part of a dictionary entry.
2. Logic. an expression in terms of which another may be adequately defined.
by Tom Carter
17 April 2006, from here


Søren Kierkegaard: A Biography, by Joachim Garff, translated by Bruce H. Kirmmse. 867 pages, Princeton University Press, $35

Søren Kierkegaard: A Biography, published in 2000 in Danish and translated into English this past year, is an important, historically rigorous, thorough, but in some ways limited biography. Garff provides a detailed exegesis of the Danish philosopher Søren Kierkegaard’s work in parallel with the narrative of his life, and he also creates an especially grim and compelling portrait of life in Copenhagen and Berlin during the first half of the nineteenth century. However, he does not present Kierkegaard’s philosophical work in the broader context of the crisis of bourgeois philosophy in the middle of the nineteenth century.

Kierkegaard, whose major works include Fear and Trembling, Either-Or, and From the Papers of One Still Living, remains a major figure in philosophy. He is one of the principal authors of some of the most prevalent philosophical positions in academia today, which include the rejection of reason, science and the Enlightenment, and, above all, a rejection of the unity of reason and reality, which is a rejection of the possibility of science. Kierkegaard saw no correlation between universal essence and individual existence—between the law-governed processes of the objective world and the perceptive and cognitive faculties of the individual. Moreover, he denied that such a correlation was actually achievable.

While Kierkegaard is by no means the only major figure of this philosophical tendency, which has since spawned existentialism, post-modernism, and various other trends, he is chronologically one of the first. Kierkegaard argued that all systems—including Hegel’s Logic and scientific systems in general—“omit the individual,” and therefore present an ultimately limited view of life, leaving out, in fact, the most basic features of human existence.

The acceptance of his works marked a major turning point in bourgeois philosophy—a turn away from the confidence that the application of science and reason to all facets of human life would lift the cultural and material level of every member of society, and a turn inward to subjectivism and cynicism. Since Kierkegaard, science and reason have officially been designated enemies of humanity—and have been blamed over the years for everything from misogyny to the Holocaust.

Today, one sees Kierkegaard everywhere. For instance, on February 28, 2006, the New York Times ran an op-ed piece by William J. Broad entitled “The Oracle Suggests a Truce Between Science and Religion.” Broad wrote, “The truth is that science and spirituality, rather than addressing similar ground, speak to very different realms of human experience and, at least in theory, have the potential to coexist in peace, complementing rather than constantly battling each other.”

According to Broad, science, at best, can only describe the motion of matter, while other “moral” and “ethical” matters must be left to religion. The idea that the domain of science and reason is unlimited is, he wrote, “more hope than fact...and can exhibit a kind of arrogance.”

Broad’s rancor at science’s “intrusion” into “spiritual” affairs could have been lifted directly from the pages of Kierkegaard.

The influence of Kierkegaard’s thought—his subjectivism, irrationalism, and mysticism—on official thought today is vast. Marxists are obliged to carefully and critically study the philosophy of Kierkegaard and his co-thinkers.

Kierkegaard’s life

Garff has obviously spared no effort in providing as complete a picture of Kierkegaard’s life as possible. The reader is taken through the Kierkegaard family’s financial records, the Church’s documentation of the family’s confession visits, diaries of virtually every person who had contact with Kierkegaard, and Kierkegaard’s own multitudinous and often self-contradictory journals, which are often more fiction than fact.

The book is an excellent record of Kierkegaard’s life, and virtually no details escape the author’s critical eye. In one poignant paragraph, Garff quotes a sentence from Kierkegaard’s journal: “After my death,” Kierkegaard wrote, “this is my consolation: No one will be able to find in my papers one single bit of information about what has really filled my life” (Garff, p. 101). On the contrary, Garff replies to Kierkegaard, “people frequently overlook the fact that mystification, mummery, and fiction are constitutive features in Kierkegaard’s production of himself, and that this is precisely why these things help reveal the ‘real’ Kierkegaard” (p. 101).

What emerges from the biography is a sense of a powerful, perceptive, and articulate genius, trapped and isolated from society at large and tortured incessantly by his own conscience. Kierkegaard’s writings on his own life as a writer are often eerie, sad, and darkly beautiful.

“What is a poet?” Kierkegaard writes. “His lot is like that of the unfortunates who were put in Phalaris’ bull [a hollow copper sculpture outfitted with flutes] and gradually tortured over a slow fire: Their screams could not reach the tyrant’s ears to terrify him; to him they sounded like sweet music. And people crowd around the poet and say to him, ‘sing again soon,’ which means, ‘may new sufferings torment your soul’ ” (p. 431).

Kierkegaard’s life was indeed full of sufferings and torment. He was born in 1813, and by 1838 five of his six siblings had died as a result of disease or childbirth and he had visited the graves of both of his parents. In Kierkegaard’s dramatic memory, his father was a towering, stoic figure of power, terror, and judgment who haunted the younger Kierkegaard for years after his death.

When he was 28 years old, because of some personal affliction (possibly venereal disease) Kierkegaard forced himself to spurn the affections of the most popular woman in Copenhagen—18-year-old Regine Olsen, whom he dearly loved—without explaining to her why. Initially crushed by the rejection, Regine later married Fritz Schlegel, who went on to a successful career as a government official.

Kierkegaard never recovered, and his love for Regine festered into a disturbing lifelong obsession. Hundreds of pages of his journals are filled with fantasies about her, fragments of imagined conversations, cryptic book dedications, and unsent letters.

Kierkegaard’s writings are extraordinarily subtle and complicated. He published his essays under various pseudonyms, each with a somewhat different philosophical outlook, and even arranged for his pseudonyms to engage in public correspondence with one another in the newspapers. In his journals, he takes up lengthy arguments against his own cornucopia of alter egos from various points of view.

In his writing and actions, Kierkegaard expresses a profound disgust with all of official society, its meaningless rituals, its pomp and ceremony, and all its pretensions at cultivation. He sees that religion, which he considers a thoroughly private matter, has become merely an instrument of the state. Human society around him is at once absurd and brutal.

Kierkegaard’s philosophy, however, emerging out of these tortured circumstances, assumes a thoroughly cynical, elitist, and misanthropic character.

Kierkegaard’s Attack on Reason

Kierkegaard’s reaction to the decay and moral bankruptcy of official “cultivated” society was to attack the very foundation of the Enlightenment that had produced it—reason. Reason was the cornerstone of it all—of science, of knowledge, of medicine, of the Church, and of philosophy.

Real knowledge or understanding, Kierkegaard argued, was acquired individually, emotionally and immediately through lucid experiences. Kierkegaard strongly believed, first of all, that the whole idea of Christendom was therefore mistaken. God has no relationship to human society in the abstract, Kierkegaard thought. God has relationships only with individuals, and the individual experience of God—one of terror and awe—is of an intimately personal and mystical nature.

Kierkegaard insisted to his brother, who defended both reason and the Church, on “faith’s independence from compelling proofs” (p. 637). This theme reverberates throughout Kierkegaard’s work, and is probably more popular today than it was in Kierkegaard’s time. The phrase “leap of faith” has become so commonplace that it has been largely forgotten that Kierkegaard was its author. The controversy that originally surrounded this outlook in religious circles has also been forgotten.

Kierkegaard regarded all of the Enlightenment conceptions of scientific objectivity as total nonsense. “Absolutely no benefit can be derived from involving oneself with the natural sciences,” Kierkegaard wrote. “One stands there defenseless, with no control over anything. The researcher immediately begins to distract one with his details: Now one is to go to Australia, now to the moon; now into an underground cave; now, by Satan, up the arse—to look for an intestinal worm; now the telescope must be used; now the microscope: Who in the devil can endure it?” (p. 468)

Kierkegaard viewed science, insofar as it altered a person’s perception of his or her surroundings, as a “corrupting” influence.

The most important feature of Kierkegaard’s philosophy is that each of his categories—irony, repetition, mercy, suffering, anxiety, etc.—is derived from immediate, subjective, emotional experience. Rather than study human thought by observing its relation to the objective course of human history, as Hegel did, Kierkegaard proposes that human thought be studied by individual introspection and reflection on “experience.” In this way, Kierkegaard echoes some of the epistemology of Hegel’s precursor, Kant—anticipating the philosophical movement now referred to as the “Return to Kant.”

Kierkegaard rejected adamantly Hegel’s view, shared by Marx and Engels, that the development of human thought is objective and universal, and that history can be studied scientifically.

“In the end,” Kierkegaard wrote, “all of world history becomes nonsense. Action is completely abolished... The castle in Paris is stormed by an indeterminate number of people, who do not know what they want, with no definite idea. Then the king flees. And then there is a republic. Nonsense” (p. 495).

Whereas Hegel’s philosophical categories were profoundly analytical, joined together by objective historical and logical necessity, Kierkegaard’s categories are not systematically interrelated in any objective sense. They are related only insofar as they interact with one another in the individual psyche.

Kierkegaard also categorically rejected the idea that thought could in any way be shaped by objective reality, because in his view there was nothing outside of consciousness—there was only existence. “One sticks a finger in the ground in order to tell by the smell what country one is in,” Kierkegaard wrote. “I stick my finger into the world, it smells of nothing” (p. 240).

“People generally believe,” wrote Kierkegaard, “that the tendency of a person’s thoughts is determined by external circumstances.... But this is not so. That which determines the tendency of a person’s thoughts is essentially to be found within the person’s own self” (p. 297).

This led him to certain nasty conclusions about mental illness. Depression, or “melancholia,” Kierkegaard wrote, is purely the fault of the afflicted person, who always has “an equal or perhaps greater possibility of the opposite state.” The real problem is that the depressed person lacks “faith,” and fails “to expect the joyous, the happy, the good” (p. 297). There is a degree of self-loathing here, since Kierkegaard himself suffered from depression.

Biographically, Kierkegaard’s mistrust of science and medicine came to the fore when he visited his doctor with unrecorded complaints in 1849. His doctor surmised that many of Kierkegaard’s day-to-day ailments resulted from his hunched back and poor habits, and told Kierkegaard that he “probably drinks too much coffee and walks too little” (p. 435). Kierkegaard himself had an entirely different take on his visit with his doctor.

“I have therefore spoken with my physician,” wrote Kierkegaard in his journal, “about whether he believed this misrelation in my constitution, between the physical and psychical, could be overcome so that I could realize the universal. This he doubted. I asked him whether he believed that the spirit was capable of refashioning or reshaping such a fundamental misrelation by force or will. This he doubted. He would not even advise me to bring the whole of my willpower (of which he has no notion) to bear upon it...” (p. 436).

Kierkegaard believed, as did many people in the medieval period, that sickness was the result of a “misrelation” between the soul and body, and that a person could be cured by summoning the willpower to correct it. “Psychosomatic misrelations,” he insisted, cannot be treated with “powders and pills” or by “pharmacists and doctors” (p. 435). Suffering, Kierkegaard thought, can be cured only by “the God of patience,” who “is truly the One who can absolutely and unconditionally persist in caring for a person” and restoring him or her to health.

Kierkegaard found the entire practice of medicine—one of the great conquests of human civilization—to be nothing more than a farce. “And what does the physician really have to say?” Kierkegaard asks himself, “Nothing.”

One limitation of Garff’s work is its failure to fully explain the links between Kierkegaard’s turn to subjective idealism in the realm of theoretical philosophy and his political philosophy, which is at some basic level apparent in his views on medicine. In fact, Kierkegaard’s theoretical and political philosophies are so thoroughly intertwined that it is truly impossible to disentangle them from each other.

By way of example, in the later stages of his life, Kierkegaard decided that the only correct moral response to the current human condition was religious martyrdom. Here, his mystical attitude toward theoretical questions crossed over into practical philosophy, ethics, and politics.

Kierkegaardian martyrdom takes the form not of death by crucifixion or stoning, but of total self-imposed isolation from society at large. One cannot marry, one must give up friends, family, and country, and one must adopt an attitude of total indifference and contempt for the rest of society. True, a martyr may have mercy for other individuals, but once one has the genuine attitude of mercy in one’s mind, the deed is done—no action is required. The thought of mercy is a good in and of itself. One must be, in essence, Kierkegaard himself!

Kierkegaard’s Political Philosophy

Politically, Kierkegaard was an extraordinarily conservative defender of the aristocracy. A close political ally and acquaintance of the king of Denmark, Kierkegaard expressed a mixture of fear and disdain toward the emerging socialist and democratic movements in Europe. His first published essay was an attack on the women’s suffrage movement.

When in 1848 thousands demonstrated in the streets of Copenhagen to demand labor reforms, constitutional government and equal rights for women, Kierkegaard assured his readers, “Every movement and change that takes place with the help of 100,000 or 10,000 or 1,000 noisy, grumbling, rumbling, and yodeling people...is eo ipso untruth, a fake, a retrogression. For God is present here only in a very confused fashion or perhaps not at all, perhaps it is rather the Devil.... A mediocre ruler is a much better constitution than this abstraction, 100,000 rumbling nonhumans” (p. 494).

By and large, Kierkegaard, a misogynist himself, regarded the masses, or, as he pejoratively called them, “the multitude,” as the inferior “woman” in the struggle between the classes (p. 483). With equal measures of arrogance and fearfulness, Kierkegaard regarded the broad majority of ordinary people as “the most dangerous of all powers and the most insignificant” (p. 488).

When, in Holstein, revolutionaries launched a rebellion, Kierkegaard advised that the government “needs a war in order to stay in power, it needs all possible agitation of nationalistic sentiments” (p. 494).

Kierkegaard argued that democracy, not monarchy, is “the most tyrannical form of government,” and that of all forms of government, government by a single individual is best: “Is it tyranny when one person wants to rule, leaving the rest of us out? No, but it is tyranny when all want to rule” (p. 487).

“A people’s government,” wrote Kierkegaard, “is the true image of Hell” (p. 487). Kierkegaard was unabashedly an apologist and supporter of the monarch, and when democratic revolution swept the country in 1849, Kierkegaard hid in his apartment and hoped it would all blow over.

Kierkegaard absolutely hated the idea of workers thinking for themselves. He once thanked a physician for restoring his carpenter to health: “He is once more what he has had the honor of being for twenty-five years, a worker with life and spirit, a worker who, although he thinks while he is doing his work, does not make the mistake of wanting to make thinking into his work” (p. 540).

However, Kierkegaard does offer a solution to the problem of “leveling”—his pejorative term for democracy—and that solution is religion. “No age,” he wrote, “can halt the skepticism of leveling, nor can the present age.... It can only be halted if the individual, in the separateness of his individuality, acquires the fearlessness of religion” (p. 490). In the end, he suggested, “an apparently political movement [the democratic revolution of 1849] is at root a repressed need for religion” (p. 499).

Kierkegaard regarded the supreme monarch of Denmark, Christian VIII, as the parent and moral superior of every Danish man, woman, and child, and as such he regarded it as the king’s moral duty to lead the country out of crisis by moral example and teaching, even though he thought the masses were largely unworthy of the effort. “Upbringing,” Kierkegaard wrote, “upbringing is what the world needs. This is what I have always spoken of. This is what I said to Christian VIII. And this is what people regard as the most superfluous of things” (p. 495).

Kierkegaard even dedicated some time to attacking socialism, which had gained significant popularity in Denmark during his lifetime. In his attacks, he insisted that it was the right of any individual to “abstain” from human society altogether, and that all forms of socialism—including Christian communalism or “pietism”—force uniformity upon people and therefore restrict their freedom (p. 504).

The obvious irony is that Kierkegaard, who believed that he had nobly chosen for religious reasons to abstain from human society, was afforded that luxury of “abstention” by a small staff of cooks, maids, secretaries, and carpenters who saw to his estate and ran his errands, which he paid for out of his large inheritance.

If Kierkegaard had read even a few of the major works of socialism, including The Communist Manifesto (which, at least according to Garff, he did not), he might have recognized that he had merely accepted uncritically the aristocratic straw man of communism. After all, how can the democratic power of every person to influence all matters of public life, and the emancipation of the toiling masses from exploitation and poverty, possibly be construed as a restriction of personal freedom?

Whatever his intellectual posture as a defender of individual freedom, Kierkegaard defended the censorship of the press when it was invoked against his more liberal opponents (p. 62).

Kierkegaard’s political philosophy is pervaded by racism, misogyny and elitism. When articles that were critical of his books were published in the newspaper The Corsair, Kierkegaard wrote, “The Corsair is, of course, a Jewish rebellion against the Christians,” which had a constituency only among “Jew businessmen, shop clerks, prostitutes, schoolboys, butcher boys, et cetera” (p. 408).

Various scholars and defenders of Kierkegaard over the years have attempted to separate the vileness of his politics from the rest of his work. In the final analysis, this simply cannot be done, for on what basis can one reject elitism and chauvinism if one has dispensed with reason itself? Without a rigorous, scientific understanding of the world situation, and of the multitude of economic, political, and social processes involved, humanity can make no progress towards social equality and democracy, and there will be no end of chauvinism and backwardness.

Kierkegaard’s slide into confusion and reaction was opened up by his indifference to reason, and was a necessary product of it. Without science and reason, and left only with subjectively derived impressions and emotions, Kierkegaard did not have the means to rise above the backward social milieu into which he was born.

To those who suggest that we should overlook Kierkegaard’s racism, elitism, and so on because to do otherwise would be to impose modern standards on Kierkegaard, we simply point to the writings of his antithesis, Karl Marx, whose major works were completed in the same period.

Kierkegaard’s place in the history of philosophy

Practitioners of philosophy in the middle of the nineteenth century faced a serious challenge—how were the internal contradictions of Hegelianism to be resolved? Hegel was the greatest philosophical figure of the Enlightenment, but he was also in many ways the last. He stood with his feet on two irreconcilable shores.

On the one hand, he affirmed that all processes in the universe, including human history, were law-governed, and therefore can be studied scientifically. On the other hand, religion and spirit played the decisive role in his philosophical system.

After Hegel, philosophy resolved itself into two camps, each critical of one half of Hegel.

The philosophers of the first camp maintained the Enlightenment ideal that the application of reason and science to mankind’s objective surroundings, history, and society would facilitate the betterment of human civilization, and they believed that ensuing stages of human civilization would provide the means for each human person to achieve his or her fullest productive, cultural and spiritual potential.

However, they rejected the spiritual component of Hegel’s philosophy, exposing it as the veil behind which real social contradictions of the current period had been hidden. As materialists, they also rejected the idea that spirit or God was the cause and central feature of all human development.

Instead, they asserted, human history was law-governed, but it was the constant revolutionizing of humanity’s own social-productive capacity that made possible each intellectual stage in humanity’s evolution. It was scientific examination of the development of these productive forces that would thereby illuminate the way forward. Central figures in this camp included Karl Marx and Friedrich Engels.

The second great camp in philosophy emerged around the thinkers Arthur Schopenhauer, Søren Kierkegaard and later Friedrich Nietzsche, to whom most major philosophical trends in academia today can trace their lineage. These thinkers took up the inverse critique of Hegel: they rejected the entire project of the Enlightenment—the idea that science and reason could make possible the improvement of human society.

Instead, they affirmed the spiritual element of Hegel’s philosophy—they turned inwardly to subjectivism, individualism, mysticism and religion as a basis for the satisfaction of the single individual. Thoroughly pessimistic about the possibilities for the flourishing of human civilization expressed in the Enlightenment, these men developed a terribly cynical and indifferent attitude toward their fellow humans, towards science, and towards socialism.

It was Lenin who aptly observed that the two camps into which philosophy resolved after Hegel were not only philosophical camps, but ideological and political camps as well—that the two opposing theoretical perspectives reflected the ongoing war between two opposed classes. The rise to prominence of the second camp coincided historically with the rise of the bourgeoisie as a class, as the new ruling class found that the politics that flowed from the philosophical methods of the second camp were well suited to their interests.

It is no accident that Kierkegaard’s philosophy became, through Martin Heidegger, the philosophy of Nazism. (See The Case of Martin Heidegger, Philosopher and Nazi Part 1, Part 2, Part 3.) By the end of the twentieth century, Kierkegaard’s elitism, defense of social inequality, anticommunism, mysticism, and contempt for science and reason had seeped into almost every channel of official thought around the globe.

There is no doubt that Kierkegaard was a man in possession of a sensitive and powerful mind, and that he had a profound, though subjective, sense of the terror of bourgeois society. His life was indeed tragic, and it is easy to see how his story strikes a chord with many today who are likewise disgusted by the circumstances of modern life.

However, Kierkegaard’s thinking, as it emerged in the arena of philosophy, took on a truly reactionary and backward form. For a better understanding of the life and philosophy of this major philosophical figure, Garff’s biography, despite its limitations, is a good place to start.