Thursday, 1 November, 2007
Friday, 2 November, 2007
Saturday, 3 November, 2007
Sunday, 4 November, 2007
Discussions of code are commonplace in our culture, and an important metaphor for how we interpret and evaluate a more-than-human world. While some codes transmit information (e.g. machine code, genetic code), others create meaning—reciprocally constituting and communicating the interpretive traditions, presuppositions and worldviews that frame our outlook on the world. Such coding is accomplished (at least in part) through the discourses that most influence our ideas, actions and social relations with people and other animals. It is this latter use of discourse to code (e.g. generate, enforce, contest and revise) the meaning and ethics of human-animal relations that is the focus of these three papers.
Wolves are one of the most beloved and hated animals the world over. In the United States, one of the most controversial questions in environmental policy is whether, where and how to coexist with wolves. Routinely portrayed as a science-driven matter of natural resource management, the ‘wolf wars’ are at once a highly politicized conflict over the moral value of animals and nature, a struggle among local and national elites jockeying for power, and a cultural conflict over what it means to live sustainably. Because discourse is fluid and interpenetrating, there can be no singular or final classification of how wolves are coded. We can, however, identify discourses from ethics, science, politics and culture that are particularly important in shaping our individual and collective interpretations of wolves. From these discourses come ideas of wolves as biological machines, functional units of ecosystems, and avatars of human virtue and vice, to name a few. The interplay of these discourses has a substantial impact on wolf recovery in regions like New England. The interpretation of discourse as a means of unravelling our coding of wolves also carries important lessons about interdisciplinarity in fields like human-animal studies.
What we call novels that foreground nonhuman animal subjects as talking characters or narrators determines who reads and discusses them, how they are marketed, and whether or not they are considered worthy of attention by scholars and historians of literature. How are the genres to which such novels are assigned regarded? Are such novels categorized as kiddy lit, young adult fare, or adult novels? An increasing number of college English professors now structure their courses around the subject/theme of animals. But how are these courses being categorized? Do they get called Animal Studies courses, Human-Animal Studies courses, or Animals in Literature courses, and what does each naming suggest about how they fit into a larger curricular scheme? While the focus of all of the courses may in fact be animals, each individual course title speaks volumes about the type of course it actually is. Perhaps, instead of accusing novels like Watership Down or Giraffe of anthropomorphism, we should recognize that authors must see other animals through the mirror of self, read such labeling as evidence of the anthropocentrism that characterizes the Western Culture Story, and use such novels, as Daniel Quinn’s Ishmael uses Old Testament stories, to encourage readers to see beyond the human to the animal self.
This paper uncovers both the suppression and expression of animal voices in contemporary graphic novels (comic books). Animals have been treated as puppets in many artistic fields, mostly used as a way to mirror and comment on human issues. Rarely are nonhumans given the freedom to comment on their own status in creative venues. Contemporary writers and artists have begun to resist the constraint of the traditional six-panel super-hero-themed comic book. This provides room for the dynamic exploration of a medium that is no longer limited to a child-like vision of the world. Comics now broach the full range of adult topics, including war, sex, love, poverty, racism, sexism, and more. As a result, animals, too, have become more three-dimensional. While nonhumans continue to be forced into the role of mouthpiece for humans, they are also occasionally afforded communication in their own right as sentient, sapient beings. In some cases, authors even attempt to explore the minds of the animals they depict, placing their characters in a contemporary context in order to comment on the state of animals in our world. Still, there remains a fuzzy line between authors who reflexively rely on speciesist manipulations of their animal-characters as pseudo-humans, and those who let the animals speak truthfully for themselves. By extracting the implicit meanings in their text and drawings, it is possible to glean the author’s cultural coding of animals as both mirrors for humans and as inherently valuable beings.
How is video coded, historically, culturally, aesthetically? How is commentary about race, culture, class, gender expressed within the frames and between them? What is the liminal space of the edit?
Instigators of this interdisciplinary round table include a poet, a new media artist, and a video curating and performance collective. They will begin the roundtable discussion by looking at examples of video as texts. From video poetry and video journals to conversations between images and texts on YouTube, the dialogue will extend to all participants of the roundtable for further conversation.
The AstroDime Transit Authority is a think tank, public service organization, and media art collective that considers issues of transportation, communication, and world and intergalactic citizenship. It is specifically interested in issues of race, class, gender and culture with respect to how human transportation and communication systems are constructed. In addition, the ATA consults and advises on sustainable communication and transportation systems off and on this planet. Its research includes curated video shows, surveys and performances which reveal and explore these issues. Media artists and educators sam smiley and Bebe Beard will represent the AstroDime Transit Authority.
Danielle Georges is a writer and educator. Her teaching and writing interests include contemporary American poetry, Caribbean literature, post-colonial literature, translation, and historiography. She is the author of a book of poems, Maroon (Curbstone Press, 2001), and has had work appear in a number of literary journals and anthologies.
Cynthia Lawson Jaramillo is a new media artist, educator, and technologist. She teaches, works, performs, and publishes in the areas of electronic literature, interactive installation, time-based media, integrated learning, and design education. She most recently co-authored a chapter in the MIT Press book, New Media Poetics: Contexts, Technotexts, and Theories (June 2006). Lawson is currently Assistant Professor and Director of the Integrated Design Curriculum at Parsons The New School for Design in New York.
Popular representations of brains in cartographic art, diagrams, and advertisements camouflage and inflect neuroscientific knowledge. By recoding the trope of brain mapping, contemporary artists challenge locationalist models and neuroscientific practice. Brain diagrams rely on a finite set of rhetorical tools that encode them with cultural curiosities about cognition. Images of chemically imbalanced brains work to create a discourse of depression that serves the pharmaceutical industry. By analyzing art, scientific images, and memoirs, we interrogate the ways nature is seconded by culture, the brain comes to signify as cultural organ, and visual media construct the reception of scientific information.
A recent “Mind/Body” special issue of Time Magazine offers a “User’s Guide” to the human brain. Included among articles about consciousness, deception, stress, and mental time-travel are scores of artist renderings that map the brain and mind onto the head. Reminiscent of ancient cartography, these images code the brain as (un)charted territory. As one Time author comments: “Modern scientists have done a far better job of things, dividing the brain into multiple, discrete regions with satisfyingly technical names . . . and mapping particular functions to particular sites. Here lives abstract thought; here lives creativity; here is emotion; here is speech. But what about here and here and here and here—all the countless places and ways the brain continues to baffle us? Here still be dragons.” In this paper, I argue that these representations are not as uncritical or locationalist as they appear. Many of these renderings are, instead, illustrative of Lury, Franklin and Stacey’s conception of “nature seconded.” They reference mapmaking in very specific ways that redress scientific coding and re-evaluate the realism of brain scans while co-opting and displacing the brain as anatomical organ. The newly coded brains in Time and other popular media remind us that, as Baudrillard notes, “the territory no longer precedes the map, nor survives it.” Through a rhetorical examination of art, scientific image and text, I examine the contemporary and overlapping territories of brain science and brain art.
In Alice Weaver Flaherty’s critical memoir, The Midnight Disease, the excessive and voluminous writing associated with manic hypergraphia is understood as the result of temporal lobe epilepsy. In Flaherty’s text, like most work connecting regional brain activity with particular reactions or understandings, there is reliance on brain diagrams. Such diagrams employ a very particular rhetoric and diagrammatics that has developed in ways particular to how the brain is understood and represented. Brain diagrammatics consist in large part of cross-sections, cut-aways, the portrayal of separate regions, color-coding, transparency, opacity, and 3-dimensional modeling. This array of tropes in the rhetoric of brain representation belies scientific and popular knowledge about brains. Furthermore, in Flaherty’s work on hypergraphia and other crossover projects like it that go to neuroscience to explain the arts, brain diagrammatics significantly inflect how the brain is understood in relation to human expression. Cut-aways, for instance, have become integral to explaining the significance of the cerebral cortex, while the rhetoric of opacity has been used to portray brains as integral to larger human bodies and neuro/circulatory systems. In this paper, by cracking the code of brain diagrams, I will show the significance of each representational strategy in relation to specific claims about cognition and the arts.
The power of the tiny is a recurring spiritual motif played out in parabolic, personal, and phenomenal contexts in the Old and New Testaments.
Christ’s parables dignify the small and the humble with their paradoxical power. A mustard seed grows into a tree in which the fowls lodge. The good seed in good soil yields a hundredfold. The original seeds of every species reappear in countless fresh shoots. A speck of leaven flavors the whole lump of flour. The little flock need fear nothing because the Father has given them the kingdom. Two little mites of a poor widow outweigh the offerings of all the opulent. A little child is the greatest in the kingdom of heaven. Great strength abides in the mouth of suckling infants. The symbolic keys given to Peter withstand the gates of hell.
The minute dust particles become the building blocks of Adam and of his endless generations. The childless patriarch Abraham has his progeny in numbers as great as the dust of the earth. Bethlehem-Ephrata, though “the least of the thousands of cities of Judah,” is the prophetic birth-site of the Messiah.
As a phenomenon, the power of the tiny can be either a natural law or a prophetic principle, which turns human contexts into lived parables. The miracles of Cana’s wine and of the loaves show an abundance that is potentially ceaseless. A little one shall become a thousand and a small one a strong nation, the little nation of Israel is told. More strikingly, Moses reminds his nation that it has become God’s “peculiar treasure” among all world nations precisely because it is the smallest. “Who has despised small beginnings?” asks Zechariah, whose little plumb line signals the leveling of the threatening mountain.
The contest of quantity is resolved in the power of the small.
Every man has a secret in him, many die without finding it and will never find it because they are dead, it no longer exists, nor do they. I am dead and risen again with the jeweled key of my last spiritual casket. It is up to me now to open it in the absence of any borrowed impression, and its mystery will emanate in a sky of great beauty. (Mallarmé. Letter dated July 16, 1866)
Mallarmé’s secret is a code. Its mystery ticks through eternity like an internal clock, not only calibrating but also directing. It is a dynamic speech act and an evolutionary imperative, transcendent and material at the same time, deeply implicated in events but beyond them. The code describes reality; it also makes it. It is not the animating principle, but it animates. It is the machine that connects desire and an outcome that is always contingent, never inevitable. If we can understand our code, we have performed the right political act.
This paper uses techniques of performance to trace the rhizomatic incursion of the code ‘Modernity’ across an alien landscape. It is embodied within Georgiana Molloy and is revealed in observation of her far-flung colonial odyssey. She arrives on the west coast of Australia in 1830, one of the first settlers of the new Swan River Colony. She is 24, pregnant and ready to colonize. In time, she plants out a flower garden with seeds she has brought with her on the ship. Under the encouragement of the distant and mysterious botanist, Dr Mangles, Georgiana collects native specimens, dries, presses and labels them and sends them back to Kew Gardens. As they are placed within the Linnean system and held within the confines of her own proto-Darwinian culture, their previous emplacement in an Indigenous environment is uprooted, deterritorialized and, for the most part, discarded.
Inspired by the popular ethnographic displays of the 19th century, and their insatiable desire to colonize, classify and appropriate, I present the historical figure of Georgiana Molloy as an exhibit; a curiosity of natural science.
For centuries Swedish poetry has been dominated by references to nature and nature’s processes, more so than many other European national literatures. One reason for this is an intertwined relationship between nature and culture that we can trace back to the works of Linnaeus. In Swedish poetry cultured landscapes may be seen as even more “natural” than wild ones. This relation seems to inspire the poets with a new set of metaphors and symbols connected to Swedish national history and the rapid, but comparatively late, urbanization of Swedish society. The intermingling of nature and culture is an essential trait of the regional poetry of the south of Sweden (Scania), a region famous for its vast farmlands and a certain, often oppositional, attitude towards the government in Stockholm. My presentation deals with questions concerning modern poetry (in this context, poetry from the first half of the 20th century) and its relation to the “natural” and the human habitat. Theoretically the paper is grounded in ecocriticism and the works of Terry Gifford, Jonathan Bate, Cheryll Glotfelty, and Greg Garrard. I will thus stress the “ecological” perspective taken by the authors, a perspective giving man access to the necessary code of nature that enables a deeply rooted contact with the soil of his region. For this I will use the term “dwelling” (derived from Heidegger), and the ideas put forth by Simon Schama in his study Landscape and Memory.
Appreciative attention to insects, worms, parasites, and various microscopic organisms has become surprisingly common among nature-oriented poets of the past fifty years. Focusing on A. R. Ammons and Theodore Roethke, with additional discussion of Gary Snyder and Pattiann Rogers, I will argue that the use of lowly animals as poetic subjects is correlated with the rise of the postwar environmental movement and the increasing prominence of ecology as a discrete scientific discipline. Whereas nineteenth-century poets usually depicted conventionally beautiful aspects of the natural world, many contemporary poets reject this traditional understanding of nature as a collection of relatively static large-scale creatures and landscapes. Instead, contemporary ecopoems employ small-scale organisms to represent the natural world’s flux and its ongoing interplay between order and disorder. In such poems, invertebrates and microbes represent the dynamism of natural processes that foster ecological cycles of growth and decay. The inclusion of minute creatures in poetry is based on a fundamentally scientific view of the world, relying on the close observation of nature and, in some cases, a familiarity with species accessible only through the microscope. However, the acceptance of science as a fundamental approach to nature has not resulted in an absolute rationalism. A major thrust of much contemporary ecopoetry has been to integrate scientific and spiritual perceptions of the natural world, and I will show that scientifically-informed attention to small-scale nature has been one way by which poets have reformulated old tropes of nature as a signifier of sacred truths.
If I am invited to participate in “CODE”, I will develop and deliver an oral and visual presentation entitled Survival Principal: the Art of Nurturing Nature which discusses the role of contemporary environmental art in distilling practices and gestures from other disciplines, including literature and science, in the formulation of new activist art strategies. The theme of the Twenty-First Annual Conference of the Society for Literature, Science and the Arts, “CODE”, will be addressed in my paper through efforts to define and explicate historical Western codes and principles concerning humanity’s interactions with nature (for instance the traditional Judeo-Christian ethos of human dominion over all life forms) and the ways in which these precepts have been altered and reshaped by growing public awareness of grave threats to environmental health worldwide. Through my research and writing, I am interested in identifying and exploring individual artists as well as collaborative efforts involving artists, writers and scientists that connect and combine research on biological processes and cycles, environmental awareness and activism, and technological advancements, in projects that truly nurture nature, expressing exceptional artistic merit and effective public outreach. Drawing upon the ideas and thoughts of influential writers, critics and artists, I will explain and contextualize these projects within a framework of ideas about nature and “the wilderness” which have been evolving on the American continent since the arrival of the earliest Europeans.
We live in an alien nation, and—as Walter Benjamin tells us—we must let “no thought pass incognito, and keep your notebook as strictly as the authorities keep their register of aliens.” The study of UFOs—or saucerian culture as Gray Barker put it—addresses inadequacies in other models for explaining experience and culture. As Deborah Battaglia argues: extraterrestrial discourse “cannot be dismissed as pseudoscience before we know precisely what of social and material consequence to a heterogeneous life on Earth we are dismissing,” specifically: “what the extra in extraterrestrial is and what a view of globalization as planetization is doing for and to the creativity of social life.” Saucerian discourse maps and registers networks of perceptions and experiences not otherwise evident. This panel consists of papers and multimedia presentations emerging from research in the Gray Barker archives, one of the largest resources for saucerian culture, and unique for Barker’s limit position as both a leading proponent of extraterrestrial discourse and a hoaxer/prankster at work in the same field.
Poor George Adamski! Originator of the contemporary alien abduction narrative, Adamski felt validated when he received mail about his work from the US State Department. Unfortunately for Adamski, the letter was a hoax written by Gray Barker on stolen letterhead. Barker was a crucial and controversial figure in the field of ufology. Was he a researcher and believer in the truth that was out there, or was he nothing more than a hoaxer? I address this question, situated at the fringe of knowledge and evidence, in terms of Gray Barker’s writing practice. In the Adamski case, and in other paradigmatic encounters—such as the Philadelphia Experiment, the mysterious story of the teleportation of the USS Eldridge from Philadelphia to Newport and back and the subsequent coverup—Barker was the central node in discourse networks (newsletters, correspondence, small press books, but also reports from the Office of Naval Research and elsewhere within the government) where saucerian and official modes of discourse collapse and communicate. Maurice Blanchot referred to the “great hoax” as both the mythifying and self-validating nature of discourse, on the one hand, and the limit of this “hyper-sense” of discourse in literature, on the other; that is, a limit written to/in the non-present and non-absent other. Barker’s writing or hoaxing is a literary practice in this way: a pseudo-engagement in the discourse of ufology; a joking put-on and classic American con; and a limit text that solicits and produces hopes for evidence beyond current discursivities, hopes latent in the interior communication of science and its others.
Gray Barker published the first account of the Men in Black in his 1956 book They Knew Too Much About Flying Saucers. The Men in Black are best known from the two movies that carried their name in which they humorously help humanity. Their initial portrayal, however, was far more menacing. Barker describes three men in black suits walking in and demanding that stories about the recently seen UFOs remain unspoken. The depiction of the Men in Black changes over time, from Gray Barker’s “three men in black suits” to Albert Bender’s supernatural reinvigoration of the phenomenon and finally through the numerous permutations that emerged from this initial literature. Some accounts suggested the Men in Black were sinister government agents while others concluded they were aliens with paranormal abilities such as materializing out of the air. My presentation will trace the development of the Men in Black while exploring their connections with the ends of knowledge. “They knew too much” is a recurring phrase that highlights the Men in Black as a higher level of order: an unknown other that knows the self, an arbiter of knowledge. The correlation between the Men in Black and the advanced technology of UFOs also invites questions regarding their relationship to the aims and practice of science. My research will emerge from the web of books, possessions, and correspondence that constitutes the Gray Barker archive.
Gray Barker adroitly integrated a host of diverse texts into what constitutes an ultimate postmodern novel/anti-novel, the Gray Barker archive: a hodgepodge of correspondence, newsletters, sci-fi stories, photographs, alien seeds, amateur metaphysical musings, folklore, etc., most of which have the alien Other as a central thematic. West Virginia, where he resided and which Barker dubbed “the mini Bermuda triangle,” was indeed a rich resource for Barker’s vivid fictive and myth-making imagination. West Virginia’s location at the margins of American cultural and economic life lent itself to a production of strange folklore texts: mysterious swamp gas light shows, ghost stories, monsters and alien abductions. One of the “texts” from which Barker drew is the Flatwoods Monster encounter of September 12, 1952 in Braxton County WV. In this paper I will look at the way the Flatwoods Monster emerged as a text both at the local level as folklore and at the national level as one of a series of alien encounters during the Cold War. I’m particularly interested in the way Barker folded the Flatwoods Monster myth into his extant archive and the way he helped to develop and define the myth. The Flatwoods Monster emerged as a strange hybrid between monster, alien, and rocket ship. What is most intriguing about the Flatwoods Monster is just how early, like other alien abduction texts, it prognosticated the posthumanist transformation ushered in by the Cold War. The Flatwoods Monster was a kind of cyborg Other developed as folklore before the formal text of the cyborg was produced in the early 1960s.
Gray Barker’s relationship to UFOs and UFOlogy is inherently problematic: he simultaneously collected (and to some extent believed in) and created paradigmatic objects representative of ‘the alien,’ and the photography and rephotography of these objects contributes to an apocalyptic strain in American culture. For this presentation, I will reconstruct Barker’s constructions from aluminum and plaster; I will offer a phenomenology and deconstruction of these objects; and I will present the possibility of deeply alien spaces online in such venues as LambdaMOO and Second Life. I will argue against both cyborg and prosthetic models, instead favoring the anatomical analysis of Vesalius and Gray’s Anatomy. The presentation is multi-media and will utilize research tools from the Virtual Environments Laboratory at West Virginia University.
We propose to consider code systems as infrastructures in a quasi-material way. As such, they relate qualitatively different components into a heterotopical field, building the very milieu for animate beings to inhabit. In order to account for such a proposition, it seems necessary to introduce ways of differentiating dimensionality and self-referentiality within a structural understanding of structure. This panel will discuss tentative and different approaches.
Could it be possible, as the Baron of Muenchhausen recounts in one of his tales, to draw oneself out of a swamp merely by pulling hard enough on one’s own shock of hair? This allegorical tale refers to the issue of self-referentiality, and may also serve to illustrate the state of philosophy in a Deleuzean, non-representational culture of thought, to which the virtual is of crucial importance.
Virtualization as the possibility of determining the logical inertial systems that embed and ground any given entity has become characteristic of today. I will propose to conceive of codes as quasi-material infrastructures of logical inertial systems. As such they provide relative stability, fluid standardization and local common grounds. Viewed as infrastructures, codes are conceived as layered and complexly embedded embodiments of standards, both shaping and being shaped by communities of practice. Code systems thus provide medial milieus which can indeed be inhabited.
Vilém Flusser has developed a perspective on codification hinting at an affirmative theory of abstraction, beyond the totalitarian scope seemingly inherent to generalizations. In his book From Subject to Project, Flusser assigns a crucial role to self-referentiality for a theory of a media culture, in which we are just as much products of our own codifications as we are actors of abstractions.
The essay proposed here, in designing conceptions for future living (or in conceiving design for it), is of a tentative and hypothetical character. How could we furnish our territories in the medial milieu of the binary code?
The pervasiveness and heterogeneity of the now apparently ubiquitous Sprachspiel ‘Code’ encourage us to ask about its structural internals. Beyond any buzzword hypothesis, code may be conceived as a quasi-material infrastructure actualized from the irreducible trinity of semiosis, modeling, and virtualization.
Deleuze emphasizes that the virtual is amongst the real, that there is no real without the virtual. The virtual in this sense gives rise to the advantage conferred by the capacity for anticipation and expectation. We distinguish three conceptual and qualitatively different layers as contexts of anticipation: facts, form and semiosis. Anticipation as an observational term can be further explicated as a modeling relation in a meta-mathematical sense (R. Rosen). In this way, codes provide the possibility for modeling and thus also represent particular models of the world in which they are used. The respective multi-layered infrastructure of codes may be conceived as a medium through which Peircean semiosis, i.e. sign-situations (E. Taborsky), takes place.
We are, of course, well aware that codification is itself a fluidly fixed, individual and socially acknowledged habit, which indicates that there is an ortho-direction in the meta-relations of the Deleuzean differential. It is shown how Gertrude Stein plays with the ortho-dimensionality and meta-differentials of her readers’ habits to encode the decoding, especially in Tender Buttons and other non-representational writing. By arranging a rhythmical landscape of words used as mere pointers, disappointing expectations about facts and form, she provokes the virtualization of representational language as well as an emergent semiosis in situ.
Aaron Kunin, a poet in his thirties, has generated a number of “translations” of extended works—Maurice Maeterlinck’s Pelléas et Mélisande and Ezra Pound’s “Hugh Selwyn Mauberley,” to name two—into an extremely restricted vocabulary of 200 words or fewer, through a means he calls a “binary hand-alphabet,” in which his fingers translate each letter into a binary representation. The catch is that the “translation” derives as much from the compulsive practice of transcribing language (read passages, overheard conversation, etc.) in this manner, a practice he describes as appearing like fidgeting or piano playing, as from the source texts themselves. The practice eventually became a kind of unconscious habit, one in which his hand appeared to spell out, of its own accord, phrases and sentences touched with a melancholic air, like “It won’t be easy and can’t be a pleasure.” A record of this “ambient language,” generated in part through a form of automatic writing, became the basis for the translation as it eventually appears in print.
This paper will examine the nature of this text encoding/translation, rendered through the body and the body’s relation to the unconscious, a kind of sign language resisting external communication through its obscurity, lack of expressivity, explicitly intentional approach towards shallowness, and sense of the mind speaking to itself.
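The mechanics of such a hand-alphabet can be illustrated with a minimal sketch: five fingers suffice to represent any letter of the Latin alphabet, since five binary digits yield 32 distinct patterns. The mapping below (alphabetical index rendered as a 5-bit string, one bit per finger) is purely hypothetical; the abstract does not specify Kunin's actual encoding.

```python
def letter_to_fingers(letter: str) -> str:
    """Encode a single letter (a-z) as a hypothetical 5-bit finger
    pattern, where '1' might mean a raised finger and '0' a lowered one."""
    index = ord(letter.lower()) - ord('a')  # a=0 ... z=25
    if not 0 <= index < 26:
        raise ValueError(f"not a letter: {letter!r}")
    return format(index, '05b')  # 5 bits cover all 26 letters

def spell(phrase: str) -> list[str]:
    """Spell out a phrase letter by letter, skipping non-letters."""
    return [letter_to_fingers(c) for c in phrase if c.isalpha()]

print(spell("code"))  # ['00010', '01110', '00011', '00100']
```

The point of the sketch is only that such an encoding is trivially mechanical; what the paper describes as interesting is the embodied, unconscious practice built on top of it.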
Mathematical and scientific terms and allusions appear frequently in the poetry of Sherman Alexie, including such fields as geometry, cartography, chemistry, and probability. In these poems, the use of mathematical and scientific concepts is part of a broader commentary on knowledge work, and its relation to various cultural and other boundaries, that can be seen throughout Alexie’s poetry (and other work). Karen Barad’s theories of “agential realism” will be especially useful in this discussion, particularly her examination of the processes by which relevance is determined in knowledge work. The paper argues that, in these poems, mathematics is used as both a metaphor for, and an example of, the way that historical, social, and agential relations are implicated in knowledge production.
The “metrical code” refers to the patterns of connotations carried by the meter or meters in poems whose meter varies. For example, Emily Dickinson’s relatively rare lines of iambic pentameter constellate around interrelated feelings and concepts, while Walt Whitman’s constellate around groups of feelings and concepts that are separate from, but overlap with, Dickinson’s. On the other hand, Whitman’s lines in dactylic rhythm invoke an entirely different group of connotations from his iambic pentameters. Certain poets’ attitudes towards meter are part of their style, a wordless language with which poems can talk, resonate, and echo among themselves through the centuries.
Building on the introduction to the metrical code in the first part of the paper, the second part will look at some examples of the metrical code in contemporary poetry and will draw on the author’s personal experience writing poetry in free verse and meter. How have attitudes towards poetic tradition changed over the last century, based on metrical code readings? How do poets of the current generation, and emerging poets, relate to meter? What does the metrical code reveal about shifts in metrical preferences among poets?
Finally, the paper considers some of the larger issues raised by the metrical code: how essential is meter to poetry, based on metrical code readings of a range of poets? What can we learn about structure, pattern, and repetition, and their relation to meaning, from meter, that most ancient of verbal arts whose roots reach back well before writing?
Despite the incredible global popularity of Blizzard Entertainment’s World of Warcraft, with a playership now exceeding eight million worldwide, there is still a dearth of scholarship on and cultural critique of the game, particularly regarding race. This paper attempts to identify and interrogate the “racial logics” of WoW, moving beyond a close reading of fantasy race as allusion or allegory for real-world race, to begin to theorize how race is coded, articulated, and cued. In other words, in a game of fantasy race, how, where, and why might actual race and racism be deployed, negotiated, disguised, and taken for granted? What is the connection, if one can be made, between programmatic, algorithmic, gamic race and real-world race and racial formation? More specifically, this paper tackles the question: in WoW, why does a troll speak with a Jamaican accent? Alexander Galloway in Gaming says, “Video games render social realities into playable form” (17) and “Play is a symbolic action for larger issues in culture. It is the expression of structure” (16). Moreover, Lisa Nakamura, author of Cybertypes, argues, “When users go online, race dwells in the mediating spaces between the virtual and the real, the visible and the invisible” (144). How then can we challenge and explore this playable form, this structure, this mediating space? Looking at character creation, game play, and game narratives, this paper argues for a productive opportunity in the play of, with, and in race to discover “disruptive moments of recognition and misrecognition” (Nakamura 144) that can offer a way to unpack race in WoW, both protocologically and politically.
Erich Loest’s novel Reichsgericht (2001) offers an analysis of the World Wide Web and its impact on what we understand as “historical truth.” Loest’s novel depicts an historian protagonist with access to a magical URL that allows him to interview German historical figures in his quest to detail the history of the German High Court from its inception in 1879 to its dissolution in 1945. Through the use of the fantastic URL, Loest offers a look into the German cultural reception of digital technology and the role it currently plays in Germany’s stridently contested past. In a manner that comes across as wish fulfillment for those who have tried for years to come to terms with Germany’s dark history, Loest portrays a World Wide Web that allows for the virtual reanimation of the dead through a powerful amalgamation of past and present unique to the Internet in an effort to reveal the truth of twentieth-century German history. The central question I address in this paper is: Can the Internet serve as an arbiter between past and present allowing the user to “pull” an accurate understanding of history from its pages, or is the Internet so powerful a medium and the information offered by it so easily manipulated that it can “push” an agenda-driven history even further into German cultural consciousness than more traditional media can, thereby muddling any possibility of historical understanding? More broadly put, do intelligent machines, when linked together, have the capability to reshape what we have come to know as reality? Loest’s work weighs in on this question by effectively juxtaposing the Internet with other, more traditional media.
This paper compares the way codes in the form of spells, riddles, and prophecies shape perceptible reality in popular fantastic fictions like those of Neil Gaiman with the way discrete programs shape reality in virtual worlds like Second Life. In both cases, a correspondence is formed between abstract symbolic systems and “physical” reality. Demi-gods and programmers might seem to represent opposite views both of reality and of abstract symbolic code, but as has already been pointed out by such theorists as Florian Cramer in his work on antique algorithmic patterning, these uses of language can be usefully considered as part of an historical tradition.
Comparing the codes used in different media makes the role of abstract symbols as carriers of secrets and wonder especially clear, and suggests why, contrary to predictions, New Media texts have not replaced older forms. The act of encoding as it occurs in the minds of both the storyteller and the coder, and the consonant act of decoding by the reader, listener, or viewer, enables imagination of what magic must be like, as another sort of translation and transformation.
Computers and the Internet have provided new and innovative ways to challenge traditional gender roles, yet rather than providing alternatives, these environments, research shows, often reinforce gender stereotypes even more strongly. One of the earliest forays into Internet communication was the online journal, a field often heavily dominated by women users. However, when research began to come out about web writing, this was quickly redefined as blogging and relocated to the male news domain. Similarly, while early work in textual MUDs (multi-user domains) seemed to provide an escape from stereotypical gender norms, gamers quickly resorted to hypersexual textual descriptions of characters. Hypertext/hypermedia have furthered this marginalization of women in technological communication by defining themselves as highly theoretical, technologically difficult, and male-dominated. This panel examines the ways in which both textual and visual technological media have at once challenged and reinforced traditional notions of the gendered body.
Software that supports communication over the Internet has become significantly easier to use and has thus become more inviting to women. One could say that women have broken the code to communicating over the Internet, because engaging in this activity is no longer limited to those with technical backgrounds. We will look at one form of computer-mediated communication in particular, web logs or blogs. Blogs allow users to post daily entries on the Internet with little technical skill required.
Blogs started historically as electronic journals and have evolved into a number of different genres, such as political commentary, personal commentary, and informational. Certain estimates suggest that women and girls are creating blogs in numbers equal to those of men and boys. In this paper, we are interested in determining whether equality in participation translates into equal influence on the Internet. Are blogs created by women and girls afforded equal status to those created by men and boys? What type of blogs do teenage girls create in contrast to those created by adult women? Are there gender and age-based differences in how individuals express themselves through blogs? Within a particular blog genre, are there other stylistic differences between women and men? Is there an electronic ghetto for female voices? What determines who will be heard over the Internet? We will survey the state of affairs in blogging to see what women and girls are saying in their blogs and what factors may influence society’s inclination to take note of these expressions.
Since before the development of the World Wide Web, people have been finding communities online. Because of the anonymous nature of these communities, people have been free to “code” themselves, to develop online identities that have little to no relationship to their identities in the real world. Approximately equal numbers of characters in these virtual worlds (called multi-user domains or MUDs) are male, female and gender-neutral. In her seminal work, Life on the Screen (1), Sherry Turkle found, however, that a fair amount of gender-swapping, in which a man controls a female online identity or a woman controls a male online identity, has occurred so that it is difficult to get an accurate estimate of the number of men and women inhabiting these communities.
Until the past few years, MUDs, such as LambdaMOO (2), were text-based. Inhabitants of these communities used an arcane language to build avatars to represent themselves and to navigate these virtual selves through the online world. In newer communities, such as Second Life (3), avatars are built using a graphical user interface and navigate the online world using the mouse and menus. In both types of online communities, however, women tend to be portrayed as vixens, voluptuous and scantily clad.
In this paper, we will examine the portrayals of female avatars in online communities such as LambdaMOO and Second Life. We will also discuss patterns of communication with avatars of various genders.
(1) Sherry Turkle, Life on the Screen: Identity in the Age of the Internet, Touchstone: New York, NY, 1995.
(2) LambdaMOO, telnet://www.lambdamoo.org:8888
(3) Second Life, http://secondlife.org
Perhaps it is no surprise that Sven Birkerts’s famous essay, “Hypertext: Of Mouse and Man,” is considered one of the foundational texts in electronic media studies, for hypertext is often coded as “masculine,” perhaps because of its historical connections to technology and the computer science world. It is perceived as difficult and therefore prestigious, and not immediately identifiable with women’s experiences or knowledge expertise. It should be no surprise, then, that a woman scholar researching hypertext and hypertext theory in 2007 finds few (fewer than ten) peer-reviewed articles published on the genre. However, in his pioneering work, Hypertext: The Convergence of Contemporary Critical Theory and Technology, George P. Landow makes a complex argument that seems to draw a direct connection between hypertext theory and feminism. Landow argues that the concurrent introduction of post-structuralist literary theory and computer hypertext created an important paradigm shift that changed the way we think about textuality and human thought. In his analysis, Landow foregrounds hypertext’s emphasis on intertextuality, multivocality, and decenteredness—three characteristics that can be argued to be central to feminist epistemology and theory. This paper will explore the connections between hypertext theory, literary theory, and feminist epistemology through an analysis of one of the foundational hypertexts, Shelley Jackson’s Patchwork Girl, to illustrate how hypertext, rather than being a “man’s” genre, is in actuality a perfect space for feminist theorization and experimentation with textuality, identity, and gender.
This panel emerges from the intersection of science studies with environmental theory/green cultural studies. The papers on global warming discuss the effects of both the emerging scientific models of climate change and effects of popular representations. The papers on Hurricane Katrina and aerial navigation analyze the interplay between material forces and discursive codes. At stake in all of these papers is the status of scientific “truth”—how it emerges, how it is encoded, how it is embodied or practiced, and how it is popularly represented.
Al Gore’s Oscar-winning documentary film An Inconvenient Truth (2006) effectively shows that global warming due to greenhouse gases is real; but this film has been preceded by Hollywood features also showcasing pollution and climate change. In Soylent Green (1973), set in 2022, war and pollution have devastated the Earth, food production is down, and rising temperatures have eliminated winter. Waterworld (1995) and A. I. (2001) portray post-warming worlds inundated by water from melted icecaps; Chain Reaction (1996) and The Saint (1997) show scientists seeking new, non-polluting energy sources; and most intensely, The Day After Tomorrow (2004) shows devastating global warming through extraordinary special effects. These Hollywood features routinely mistreat the science—for instance, global warming effects would occur over decades, not mere weeks—but they reach millions. An Inconvenient Truth is currently the third highest grossing documentary ever, yet its box office sales are paltry compared to the $540 million for The Day After Tomorrow. Fortunately, research shows that The Day After Tomorrow has significantly influenced its viewers toward a more serious consideration of global warming. The history of global warming on screen suggests that a general cultural awareness of its appearance and effects has long been prevalent, and that although sober documentaries can present the science well, a balance between scientific truth and dramatic need may be the most compelling way to alert people to climate change and similar pressing issues.
The burgeoning scientific literature on paleoclimates in recent years has focused on the extraordinarily complex relationships between biological (including human) evolution and climate change, and in the process has revitalized James Lovelock’s Gaia hypothesis. Rather than the benign, maternal planet of the 1970s pop redactions of Gaia, however, the Earth emerges in recent appropriations of Lovelock’s thesis as a world sliding, probably inevitably, into rapid, slingshot variations in its climate and mass extinctions of many of its life forms. This paper will explore the ways in which Gaia has been transformed by systems theory, notably second-order cybernetics, in the work of Lynn Margulis and Lovelock in The Revenge of Gaia. A sophisticated understanding of climate change in their works resists simplistic techno-political models of “solutions” to global warming and instead forces us to consider the prospects for civilization’s “sustainable retreat” (Lovelock’s term) from fossil fuel economies, high population densities, and unchecked exploitation of the environment.
Hurricanes should present a greater challenge to Western narrative codes than they do. Given the transformation of literature in the wake of World War I, for instance, it would be reasonable to expect not just a distinct (if small) canon of hurricane literature in the U.S., but a fairly well-established set of narrative strategies shaped by hurricanes’ border-crossing geographies, culture-blending histories, and unspeakably powerful ability both to destroy and to nourish life. The truth is that hurricanes have impacted “hurricane discourse” in literature, the humanities, and the popular media far less than in the sciences, reflecting an imaginative poverty that has troubling implications in an era of global warming and rapid population growth. Large hurricanes can cover hundreds of square miles and release an amount of energy comparable to a series of ten-megaton nuclear warheads exploding every twenty minutes, but the dominant Euro-American discourses lose no time in emplotting these vast, ancient, world-altering cyclical storms according to the simplest, most linear, and most anthropocentric of teleologies. What about Hurricane Katrina, though; did it (to paraphrase Bush) change everything? My purpose in this paper is to anatomize hurricane discourse in the U.S. before and after Katrina, examining the storm’s role in generating alternative narratives and counterhegemonic narrative codings of hurricanes. While it would be naïve to claim that Katrina changed an entire society’s way of perceiving hurricanes, along with the racial disparities that Katrina famously “exposed,” I will argue that the birth of a new hurricane discourse may actually be at hand.
This paper will consider the ways in which the physical world has been codified for the purposes of air travel. As part of this study, I would like to briefly discuss how early European navigational practices, which positioned a disembodied body above a grid, both diverged from and intersected with embodied, non-instrument native navigational practices, such as those within the Polynesian voyaging tradition. I will consider how early aerial navigation in the United States was a combination of embodied and disembodied practice, and how radio signals and sophisticated electronic systems or “codes” would come to replace the need for visually specific maps, landmarks and celestial phenomena. By drawing on the work of technology theorists such as Don Ihde, I would like to analyze the ways in which these various codified interfaces both extend and limit human perception of, and experience within, the physical world. Ultimately, I will ask whether the experience of contemporary air travel can only be one of radical separation and alienation, for pilots as well as passengers.
The papers submitted for this panel represent works in progress to appear in “Mummy, Possessed”: Automata and Enchantment in English Renaissance Literature (edited by Wendy Hyman; anticipated manuscript completion date 1/2008), an interdisciplinary collection of essays which explore the automata, self-moving machines, and animated statues that proliferate in 16th and 17th century British Literature. The book project as a whole examines the philosophical, theological, and ontological issues raised by the “living” machines of Renaissance Literature, including the hubristic desire for omnipotence, the meaning of agency or will, the apparent dispensability of the soul, and the perennial question of what it means for a thing to be alive. This panel features the work of two of the book’s contributors and its editor; the papers are united by their interest in considering how animated matter in literature offers new ways to think about authorship, subjectivity, and the relationship between literary poesis and technological forms of making.
This paper examines the ways in which the strange fates of Hermione and Perdita, the lost women of Shakespeare’s The Winter’s Tale, trope the technology of early modern dramatic characters. It argues that the play represents a series of dispersals and gatherings, as Hermione is picked apart by Leontes’ inquisition and Perdita enters the world as a scanty array of objects—baby, jewels and Antigonus’ letter, “thy character” (3.3.47)—on a Bohemian beach. The metamorphosis of Hermione’s statue, in this analysis, represents the culmination of the process of redemptive mediation and reassembly, as a dispersed network of agents and objects—ranging from the Oracle of Apollo to Antigonus’ bones—reassembles the broken house of Sicilia.
Paulina’s careful construction of her alcove, complete with curtains, music, costume, pedestal and the invocation of the craftsmanship of Julio Romano, allows Hermione to be simultaneously “like a statue” (5.3.20) and a living woman again. This scene has been read as Shakespeare’s defense of the dramatic author’s art, a magic “Lawful as eating” (5.3.105), but the very complexity of Paulina’s tableau, and the various human and artificial instruments it requires, undercut such a reading. Paulina the dramatist creates nothing new (even the statue is actually Hermione herself) but rather manages and assembles various objects—her audience positioned as carefully as her props—into an assembly that will allow Hermione to live again. Inside a carefully crafted dramatic machine, Hermione is not resurrected so much as she is rebuilt, and the dramatic author is less a poet than an engineer, combining given materials into ingenious new devices.
The automaton emerges as dehumanizing in Erin Labbie’s “Historical Materialism and Automata in Volpone.” Two minor mentions of automata appear in Jonson’s play (a clock, and a waterworks in perpetual motion), both of which establish a mechanical and technological context for the events of the play, and also reflect the metaphorical status of the characters: Volpone, Lady Politic, and several others are all described as automata. Labbie investigates these associations through the lens of historical materialism and Walter Benjamin’s critique of capitalism, wherein the seventeenth-century courtier emerges as an artificial being who, like a clock, marks time as he seeks to satisfy his own inhuman needs. Volpone’s ingenious attempts to fleece everyone around him reveal his situatedness in this new industrial moment, one which turns him into a machine. The courtier is thereby implicated in a kind of perpetual motion that does not progress, like the waterworks and clockworks that become symbolic of his dehumanization. Both of these images contribute to a prescient critique of capitalism within this early Jacobean play.
In the sole lyrical interlude of Thomas Nashe’s Unfortunate Traveller, the book’s peripatetic narrator and professional con man, Jack Wilton, finds himself visiting an exotic Italian estate. The showpiece of this estate is its elaborately constructed garden, in which everything—from the “beautifullest flowers that ever man’s eye admired” to the “clear overhanging vault of crystal”—is a mechanical simulacrum. Most marvelous of all are the hundreds of tiny singing automata, who, “though they were bodies without souls,” produce dazzling songs; indeed, so perfect are their voices that “every man there present renounced conjectures of art, and said it was done by enchantment.” Art, enchantment, or science? Wilton doesn’t allow the reader to linger long in mystical reveries, instead dwelling for several hundred words on the minutiae of the birds’ pneumatic construction—until, bizarrely, his evocation of this alternate “paradise” begins to read like a technical manual. But the vision of hundreds of ornithological automata, chirping in unison, is not merely a rhetorical showpiece. Instead, in this proto-novel featuring nothing so strongly as its own verbal pyrotechnics, it becomes an emblem for language and its gorgeously deceptive potentialities.
Meta-literary attention to the techne of language is not unfamiliar to scholars of Renaissance poetry. But what we have not taken account of is the regular appearance of the trope of the mechanical bird precisely in these literary sites of figural deception. This paper will explore three separate appearances of the singing automaton: the mechanical birds in Spenser’s Faerie Queene; the hydraulic creatures that appear to perch on the boots of Marlowe’s Hero (in his epyllion Hero and Leander); and those of Nashe’s Italianate garden.
In his work on cinema, Gilles Deleuze wrote that we are now confronted with a crisis in representation. His argument, both theoretical and historical, was that out of the genocidal terror and extreme violence of the Second World War, new forms of images, and also of life, were emerging. “Automata”—both cellular and machinic—had infected our screens, and perhaps recombined with them. But if this recombination and emergence were based in the post-war milieu, they also had their legacies and genealogical inheritances in many other historical strata. In fact, it is precisely those other sites that offer the possibility for recombination, both producing and challenging this condition.
He is, of course, not alone in uttering such statements. It is one of the underpinning conceptions of “new”, “digital”, and “interactive” media, as well as of techno-science and bio-tech, that we are in the midst of unimaginable transformations in the relations between bodies, technologies, and signs. Such a situation demands new concepts that might engage, convolute, and complicate these relations. For as even Deleuze noted, this situation was both an ethical possibility for new forms of being in the world and a potential terminal threat to life itself. The life, or death, of both cinema and thought were contingent on this battle with “information”. A warning and a promise.
Taking up this challenge, at the very site that Deleuze rendered open—the intersection of media and life—this panel seeks to develop both ethical and historical imaginaries of our contemporary condition. Traversing the histories and philosophies of cinema, cybernetics, literature, and evolutionary biology, these papers all concern themselves with correlating life, animation, abstraction, and temporality. Our shared concern is in thinking, over time, about those sites that animate processes, rethink materiality, and critically engage with reformulating ideas of representation. Collectively, we seek to examine the emergent relations, and the sometimes tortured histories, that inform, convolute, and re-make our relationship to screens, machines, and other bodies. Our aim is both to expand theories of “new” media and its relationship to life, and to engage the relationship between ethics, aesthetics, and difference.
“New Media Studies” recently has emerged as a sort of quasi-field or discipline, located at the nexus of film studies, history of science, communications, and art history (and perhaps a few other disciplines). At the same time, though, many recent attempts to define a field of new media studies have focused almost exclusively on digital and visual new media, such as digital film, digital video, digital music formats, internet art, and virtual reality installations. As a consequence, the defining characteristics of new media have been understood primarily through the lens of digitization. Such an approach neglects one of the other key “new media” of the twentieth century—namely, the development of biological cell culture technologies and the linkage of such technologies to digital media. This presentation employs examples of recent “bioart” projects as an occasion to work toward a more general theory of new media: one that is capable of understanding both the newness of recent biological and digital new media in particular, but also the “newness” of media more generally (or, to appropriate and slightly abuse Carolyn Marvin’s felicitous phrase, the newness of “old media when they were new”). Biological media offer such an opportunity for rethinking, I argue, precisely because they emphasize the capacity of media to encourage systemic transformation. Understanding this latter capacity of media requires in turn a new philosophy, one that understands “transmission,” “translation” and “influence”—those concepts by means of which media often have been understood—as special cases of what Gilbert Simondon calls “transduction” and “individuation.” Such an approach, I argue, allows us to account for the specificity of our contemporary digital and biological new media, as well as the “newness” of older media (such as printed texts or telephones).
This paper will revisit the question of temporality and abstraction by looking at Hans Richter’s and Viking Eggeling’s abstract films in the years after WW I and in the context of the Zürich Dada group. The non-representational images of early abstract film challenged dominant ideas of the nature of cinema by separating the latter from photographic recording. Rather than facilitating mimetic, anthropomorphic identification, these films and their geometrical, inorganic forms (squares and rectangles in Richter’s case, and lines in Eggeling’s case) confront the spectator with non-organic temporality and movement, probing alternative ways to affectively relate to the moving image.
Hans Richter’s first abstract film Rhythmus 21 stands at the end of a number of attempts to create a ‘universal language’ along the lines of music and ideas of ‘life.’ In their scroll paintings, Richter and Eggeling had developed an elaborate system of time-based forms, grounded in an active, memory-based reception. Accompanied by intensive studies of Henri Bergson and other vitalist texts, they produced two different models of ‘non-organic life’; in other words, they incorporated cinema’s own ‘life forces’ of movement and rhythm into their films without subjugating them to human(ist) experience and form. This paper will question common notions of abstraction in film by situating Richter and Eggeling within vitalist discourses in science and in art, and by investigating the idea of the non-photographic trace as it also appears, for example, in Etienne-Jules Marey’s chronophotographs.
Most geneticists and their critics evaluate the concept of the gene from the perspective of the gene as a code for the production of phenotypic traits, what has become known as transmission genetics. It is increasingly obvious, however, that this conception of the gene’s role in development is much too limited. This paper will use recent insights from evolutionary and developmental biology, media theory on the techniques and processes of animation, and Paolo Virno’s political economic reflections on the role of virtuosity in informational societies, to promote a counter history of late twentieth century genetics: the gene as animating agent or initiator of specific sequences of events. Important to this argument is a model of temporal change based on animated processes, where time is easily visualized as changes in degrees of organization, or complexity. This position stands in contrast to filmic models of temporal change that rely more heavily on the photographic capture of spatial displacement. This change in how one conceives of the molecular and cellular temporality of organism lends itself to a developmental biology where codes initiate inorganic gradients that in turn create folds, chambers, organs, and even bodies.
Early modern books purporting to teach basic arithmetic frequently depend on narrative worked examples, familiarly known as “story problems” or “word problems.” Presenting domestic scenes, gender, rank and class, romance, courtly love and courts of love, inter-religious encounters, debauchery, and cozening, these narratives are remarkable for their non-mathematical specificity. In this paper I take up the very function of imaginative narrative in mathematical pedagogy, argue for its failure, and explain its abandonment. Such detailed narratives, similar in content and tone to commercially successful collections of novelle and fabliaux, argue for their own success with an audience that wished to teach itself basic arithmetic for use in everyday life and was drawn to a familiar form of printed entertainment. For all their entertainment value and commercial success, however, these texts were incapable of rigorous mathematics, and their narrative specificity seems to be directly to blame. Though frequently called “rules,” the narratives do not present general cases, and are often so mathematically and linguistically inconsistent (many such texts are translations) that there is a sense in which these books don’t teach mathematics at all. They do, however, gesture (though confusedly) towards an increasingly civic-humanist approach to mathematics that also appealed to their primary audience and was consistent with the moralizing strain of popular prose narratives. Eventually, mathematical tales were abandoned in favor of other generic forms, including the humanist dialogue and adaptations of Euclidean proof, as well as new forms of mathematical notation that signaled the algebraicization of arithmetic and its pedagogy.
Invented in the late sixties, string theory has grown to dominate the field of theoretical physics by promising to reconcile Einstein’s general relativity, which describes the realm of the very large, with quantum theory, that of the very small. It posits the string as the basic constituent of both matter and energy: a tiny open or closed filament vibrating in multiple dimensions, whose tension determines the type of subatomic particle it manifests. The scale of the string is 10⁻³³ centimeters, the Planck scale, a realm well beyond the capacity of contemporary particle collider technologies to plumb. Thus lacking in prospects for experimental validation, string theory currently stakes its legitimacy on its mathematical consistency. Simultaneously, since the late eighties, string theory popularizations—nonfiction texts authored primarily by string theorists themselves—have come out with increasing regularity. These popularizations aim to explain the theory to a lay audience, in part by introducing its key concepts stripped free of the constituting mathematics. Paradoxically then, since popularizations omit precisely the content that would grant the theory whatever scientific authority it hopes to claim, popularizations effectively present not physics but metaphysics, an imaginary that must resort to literary techniques to legitimize its objectivity. This paper will examine three examples of string images from popularizations and the strategies the texts employ to substantiate them. Using concepts from Gaston Bachelard and Michèle Le Doeuff, I argue that these string theory popularizations substantiate the string as an object through its contextualization within what I call a ‘domesticated mesocosm’, an imagined space that juxtaposes micro- and macrocosms by analogy through graspable objects on human scales, objects laden with affect. As such, these string theory popularizations transform the utterly alien into something approachably familiar.
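As an aside, the Planck-scale figure cited in the abstract can be checked directly from fundamental constants, since the Planck length is defined as l_P = sqrt(ħG/c³). The quick computation below (constants rounded from standard CODATA values) confirms the 10⁻³³ cm order of magnitude:

```python
import math

# Planck length l_P = sqrt(hbar * G / c^3), converted to centimeters.
hbar = 1.0545718e-34   # reduced Planck constant, J*s
G = 6.67430e-11        # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light in vacuum, m/s

l_planck_m = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
l_planck_cm = l_planck_m * 100            # ~1.6e-33 cm

print(f"Planck length ~ {l_planck_cm:.2e} cm")
```

The result, about 1.6 × 10⁻³³ cm, is roughly twenty orders of magnitude smaller than a proton, which is why the abstract notes that no collider can probe it.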
How does genetic coding shed light on the biological kinship between humans and animals? What is the influence of genetic and/or biological factors on posthuman models of subjectivity? How does a reevaluation of animal languages affect debates about animal rights?
These questions are important because the connections between genetic coding and animal languages narrow the distance between what is human and non-human. We suggest that any confusion and/or pollution of the boundaries between human and animal produced by cyborgs or other manifestations of posthuman theory should thus be seen as productive, for this confusion allows us to critically examine the liberal humanist values that inform the genetic (re)coding of subjectivity.
Using Jeremy Bentham’s notion of fictions and Kenneth Burke’s concepts of hierarchical mystery and perfection, this paper examines how the medieval concept of the Great Chain of Being, albeit in a new form, maintains a political structure used by humans to maintain dominance over animals. Specifically, the concepts of both Cyborgs and Cryptids are used to explore how science and folklore politicize the nature of Being in the contemporary world.
Viewed as discourse, the Great Chain of Being seeks to perfect the world. However, the epistemic rhetoric of the model does not always neatly match the ontological experience of the world. Therefore, humans have created beings, both “real” and fictional, to try to close the gaps, or provide “missing links.” Cryptids, especially quasi-humans or quasi-primates such as wodewoses or yeti, demonstrate how humans have used fictions to fill in the perceived gap between humans and animals. Cyborgs, conversely, use technology to either “complete” a human (through surgery or prosthesis) or elevate a “lower” animal to a more human-like status (through genetics, as in the Onco-mouse; or through computers, as in some ape language studies). The paper concludes by addressing the political implications of such a linear and hierarchical model being used theoretically and ethically to articulate the nature of Being for human and non-human animals.
As the narrator of Samuel Beckett’s Watt indicates, the subject of Western humanism has suffered a “loss of species.” This is a loss keenly felt by Emmanuel Levinas, who, in the wake of modernist antihumanism, reaffirms the intelligibility and ethical centrality of the humanist subject. He locates human uniqueness in the face, the essence of the Other whose meaning “consists in saying: ‘thou shalt not kill.’” For Levinas, the link between language and the face forecloses extending ethical subjectivity beyond the human realm.
Although his philosophy, with its emphasis on the alterity of the Other, would seem particularly well-suited for consideration of the nonhuman Other, Levinas insists on the uniqueness of the human face even when faced with Bobby, the dog who befriended him in a Nazi camp. Bobby, as several critics have noted, problematizes Levinasian humanism, yet Levinas reads his “friendly growling” as silence and thus denies him a face. Given Beckett’s influence on post-war French philosophy, it is significant that Levinas cannot come to terms with the canine face, for dogs and their would-be masters frequently meet face-to-face in the posthuman landscape of Beckett’s fiction. In Watt and Molloy in particular, encounters between individual dogs and humans are overdetermined by an interspecies intimacy which marks even their excrement. In exploring the implications of this intimacy, Beckett’s human goes where Levinas fears to tread. I argue that Beckett’s dogs undermine the centrality of language and expose the need for a posthuman ethics which accounts for them.
In The Call of the Wild (1903) and White Fang (1906), Jack London’s narratives of co-evolution and co-operation for survival between humans and dogs pressure the boundary that separates animals from humans and suggest a shared genetic coding between human and non-human subjects. Using systems theory, I explore the idea of a posthuman subjectivity in London’s dog novels. Here, the subjectivity of dogs is an evolutionary process of “becoming,” rather than a fixed biological type. This process is guided by autopoiesis, defined by Lynn Margulis and Dorion Sagan as “life’s continuous production of itself.” Shaped by environment and heredity, the dogs’ experiences reflect past evolutionary states in a way that parallels the self-reflexivity of autopoietic systems. The perception of dogs is constituted through a field of observation that incorporates them in the act of constructing meaning. By “rewriting” animal thinking into language, London’s narratives thus enable us to reexamine how evolutionary theory affects non-human agents.
Yet London becomes caught between challenging humanist definitions of race, class, and gender and endorsing a picture of evolutionary development that could secure a national identity. The posthumanist rupture in subjectivity that London emphasizes in the biological kinship between humans and dogs can only be resolved in the closing violent fantasies, expressed in both novels, that act out desires for the restoration of a stable social order. The paper concludes by considering the ethical implications of animal subjectivities, examining the idea of animal rights as a politicized (re)formulation of biological kinship.
In her foreword to Marjorie Spiegel’s The Dreaded Comparison (1988, rev. ed. 1996), Alice Walker writes that “the animals of the world exist for their own reasons. They were not made for humans any more than black people were made for whites or women for men.” While Spiegel’s book provides a brief and highly politicized history of the relationship between non-human animals and human slavery, this panel seeks to more fully historicize and analyze the social implications of the complex and contested demands for similarity and/or difference between ‘man’ and ‘beast’ in American history and culture. Drawing upon critical analyses of race, social structure, and power, these papers demonstrate how human ideas about and practices toward non-human animals were implicated in broader racial projects.
This paper examines the centrality of human ideas about animals and animality to the process of racial formation in early republic and antebellum America. It interrogates some of the myriad visual and textual representations of the resemblance between monkeys, apes and humans in an expanding popular and print culture, with a special focus upon advertisements for and audience reactions to animal exhibitions. These broadly popular entertainments included both scientific displays of anthropoid apes in museums and theatrical animal acts featuring performing monkeys. These exhibitions served as sites where, as Jennifer Ham writes, “continuities and discontinuities between man and animal could be dramatized.” Monkeys and apes also figured in newspapers, natural histories, periodicals and children’s literature. Accordingly, I also examine literary accounts of non-human primates, which, like animal exhibitions, prompted observers to reflect upon the boundary between ‘man’ and ‘beast’ and to use the natural order to comment upon a fluid social and political order.
Although one can find many human concerns that accreted around the exhibition of non-human animals, this paper is particularly interested in the ways in which ideas about monkeys and apes were implicated in discourses about and practices of slavery. A persistently negative association of human ‘others’ with non-human animals helped support chattel slavery at the same time that slaves and abolitionists drew upon “the dreaded comparison” in order to oppose the peculiar institution. While the relationships drawn between non-human animals and human slavery were complex and contested, they help illuminate our understanding of racial formation in America.
This paper investigates theories of human and animal labor in the Antebellum South with regard to race, class, and slavery. Specifically, it is an analysis of the metaphorical conflation of slaves and livestock found most frequently in abolitionist propaganda, but also in defenses of slavery based on legal interpretations of goods and chattel. Rather than approach this “dreaded comparison” as a literary construct, however, the paper examines that equation as a constitutive fact of the labor system that dominated the Southern states. In the words of the Jamaican planter, John Pinney, “slaves and stock” were the “sinews of the plantation.” As much as their labor powered the plantation, the capitalized bodies of slaves and stock funded the continuation of that system.
This nexus of slaves and stock is particularly well documented in the development of the thoroughbred, a racialized construct of the Atlantic world, in the United States. As a commodity, thoroughbred horses traveled south and west with slaves and north with raw materials, covering an expanding geography with an increasingly circumscribed gene pool. Unlike the slaves that went with them, these horses carried a record of their lineage. Printed in newspapers and the burgeoning sporting press, a republic of letters for American breeders, and disseminated through the spectacle of racing, the rhetoric of horse breeding provides a significant perspective on theories of labor, as related to slavery and race. By looking to the controlled breeding of horses, this paper examines not only the value of labor, but also the value of the laboring body in a market for bodies. Further, it elucidates a relationship between breeding and slavery embedded in the contested nature of labor and the physical body of the thoroughbred horse.
Ideas of Alaska in the late nineteenth and early twentieth century functioned as a space for Anglo-American men living in the southern United States to fantasize a utopic anti-civilization: a resource-rich, but forbidding landscape in which values of “Strenuous Life” masculinity were to be allowed full scope, and a libertarian democracy centered around a “sourdough code” of frontier law would spring up. Coded in this conception were ideas about the whiteness of the men who would prevail in this space, as prominent scientific and pseudo-scientific racists such as Madison Grant and Louis Agassiz spoke of the advantages of the Alaskan climate for men of Nordic and Anglo ancestry.
An important part of this construction was the relationship between the neonative white Alaskan and the sled dogs which enabled their travel in the North. In my paper, I will examine novels written for popular audiences—both juvenile and adult—and memoirs of men who spent time in Alaska during and after the gold rushes at the end of the nineteenth century. These popular narratives of white Alaskan experience employed the figure of the sled dog, and the relationship between the dog and his white owner/musher, to demonstrate ideal Alaskan domestic and working configurations, dreams that spoke directly to perceptions of growing dehumanization and competition in the working sphere of mainstream America. The settlers also used their relationships with dogs to illustrate key differences between themselves and the native Alaskans they found inhabiting Alaska when they arrived. By employing newly articulated late nineteenth-century anticruelty rhetoric, white neonatives claimed the label of “human” for themselves, while relegating native Alaskans and undesirable “others” to the denigrated realm of the “animal.”
Toni Morrison’s novel Beloved, set in the late nineteenth century, makes graphic the ways that slavery in the U.S. was supported by a discourse of animality which marked Africans and people of African descent as less than human by equating them with animals. Not only Beloved but also Morrison’s other novels show the ways this equation lived on for generations, permeating and infecting the identities of African Americans and the cultural concept of race long after slavery was abolished.
In this paper, I will examine some of the patterns of response to the discourse of animality that emerge in Morrison’s novels. For example, a number of Morrison’s female characters struggle with a sense of alienation from their bodies and sexualities, an alienation linked to the fear of appearing animalistic. Several of her male characters show a tendency to conceptualize women — especially women who will not comply with their wishes — as prey animals. However, Morrison’s novels do not respond to the destructive persistence of the discourse of animality by repudiating any kinship or likeness between her characters and nonhuman animals. In fact, Morrison’s overall vision suggests that a connection with animals can also be a source of strength. Interestingly, a female character in Jazz turns the negative image of women as prey animals on its head, noting that the women she reads about in the newspaper, dominated and abused by men, are actually stronger and more apt to defend themselves than she once thought—qualities she equates with their animality. And a number of characters make important distinctions between equations with domestic animals, which tend to be disempowering, and equations with wild animals, which are sometimes empowering. Ultimately, I will argue that Morrison’s novels question and problematize the equation of African Americans with nonhuman animals in an innovative way that values what humans — all humans — share with other animal species.
Panel members will present documentation and engage in discussion of an ongoing series of mural workshops taking place in Portland, Maine. The workshops make use of open source technologies developed by the Graffiti Research Lab in New York City and incorporate GISci (Geographic Information Science) techniques and methodologies to do aerial drawings inspired by urban forms of writing and street art. The acronym SUBONE stands for Supplying Urban Beautification Offering New Experiences; it serves as a logo for the workshops and a moniker for their founder and director. Panel members will address the ways in which the landscape is encoded with cultural value systems and will describe the process of geo-coding and its relevance to the project.
Jan Piribeck will show examples of two collaborative ventures with SUBONE. The first is an aerial drawing in which the shape of the SUBONE logo was traced in a large green space located on the Portland peninsula. A GPS (Global Positioning System) data logger was used to plot the points of the graphic. Two USM Art students assisted with this project, which was approved by the Portland Percent for Art Committee and supported by Portland Parks and Recreation. The second collaboration is an LED (light-emitting diode) sign that was created for an exhibition called “Lost Sites” in which Piribeck worked with SUBONE on an intervention in an obscure inner city site. Piribeck’s research and creative work is developed around interplay between visual studies and geographic information systems. She will discuss the term geo-coding, a process by which the “real” world is translated into computer readable form, and will describe the ways code is used to analyze, interpret and activate the cultural landscape.
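By way of illustration, here is a minimal sketch (in Python; the function names are invented for this example, not drawn from the project) of one elementary step in geo-coding as Piribeck describes it: translating logged GPS latitude/longitude fixes into planar x/y coordinates that can be plotted as a drawing. The equirectangular approximation used here is one common, simple projection, adequate over an area the size of the Portland peninsula; a production GIS workflow would likely use a standard projected coordinate system instead.

```python
import math

# Mean Earth radius in meters, used by the local projection below.
EARTH_RADIUS_M = 6371000.0

def project(lat, lon, lat0, lon0):
    """Equirectangular projection: convert a GPS point (degrees) into
    meters east (x) and north (y) of a local origin (lat0, lon0).
    Adequate for small areas such as a city green space."""
    x = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_RADIUS_M
    return x, y

def project_track(points, origin):
    """Geo-code a GPS track: map each (lat, lon) fix to a plottable (x, y)."""
    lat0, lon0 = origin
    return [project(lat, lon, lat0, lon0) for lat, lon in points]
```

Fed a data logger’s track, `project_track` yields an ordinary list of drawing coordinates; since one degree of latitude spans roughly 111 km, a 0.001° step in latitude comes out near 111 meters.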
Tim Clorius, founder of SUBONE Urban Murals, will discuss the genesis of the project and its relationship to the “codes” of international graffiti culture and post-graffiti art. He will show examples of and describe the ideas behind community mural projects he has done in cooperation with arts coordinator Andrew Coffin, under the auspices of public agencies such as Portland Parks and Recreation, the Maine Arts Commission and the NAACP (National Association for the Advancement of Colored People). Clorius, who is an accomplished painter, has a long-standing interest in community oriented art projects. In his paper, he will address how his mural workshops cultivate creativity and decode the messages embedded in the language of graffiti, thereby cutting through stereotypical readings of a potent and ubiquitous form of personal and social expression.
Chris Thompson, critic and Assistant Professor of Art History at Maine College of Art, will broaden the historical perspective on the panel topic and will help place the collaborative work of Clorius and Piribeck within the framework of contemporary art and culture. Thompson received his Ph.D. from Goldsmiths College, University of London and teaches a range of courses in modern and contemporary art, cultural history, critical theory and visual culture.
How do we occupy the virtual? Can we treat avatar bodies as displays of embodiment, as inscriptions of corporeality? This panel examines the problematics and practices of embodiment in virtual worlds.
In the contemporary context, artifacts which have been traditionally conceived in terms of a unique deictic presence—here and now—take place differently via forms of technical reproduction, appearing not simply as a plurality of individual instances, or a consecutive seriality, but as something both spatially (and temporally) distributed and mass-like (massenweise). They take place not as a mere collection of unique occurrences, but within a logic of supplementarity that circumscribes and enframes the possibility of origin, which at the same time recedes. Walter Benjamin’s problematics of aura are remapped from the claim to authenticity linked to the materialities of an originary instance to the ubiquity of artifactuality, within which the very claim, itself, takes up the place of the authentic, giving way to a reinscription of the auratic in every instance of reproducibility.
It is in this sense that the notion of the cartographic reappears as a tacit condition of reference to an absent, and sometimes irreal, embodiment. The consequences of mapping, between biological and technological registers, have led to curiously imprecise accounts of embodiment, from prosthetic extension, to the ergonomic extraction of labor, from stumbling robots and cumbersome cyborgs to remote operators, avatars and conversational agents. I will present a series of notations as an initial attempt to chart certain points (areas, territories, states) that might be addressed in remapping or modeling a history or genealogy of biological-technological embodiment.
Since 1978, with the advent of the Multi-User Dungeon, there have been multi-user interactive spaces in which people congregate. Of course, as technologies have created greater verisimilitude of representational embodiment in online spaces (from chat rooms and online gaming to Massively Multiplayer Online Environments), visceral practices have become logical extensions of these social spaces. If we are in an era of “Bodies Without Organs” (Artaud, loosely), then what are the issues of virtual embodiment online? And, taken in the context of the immediacy of the body in Performance Art, why are online spaces, especially those whose dominant demographics are not in adolescent age ranges, so concerned with performing visceral practices?
In this discussion, the author will consider the issues of virtual embodiment, the reiteration of the technosomatic viscera in online worlds, and VR performance art practices. This will include contemporary studies of embodiment, previous works in VR (Davies, et al), and current work in performance art in MMO spaces like the online VR world, Second Life.
Bodies trip and fall over one another in the physical world. Tripping involves nodes: push above the knee and the body falls back; somewhat below, and it falls forward. Nodes are concrete demarcations of flesh within physics. I will present the mappings and remappings of body nodes—the literally inconceivable retopography of the body, the body within or beneath untoward stress, untoward spaces. The presentation is multimedia; it works through ’edge phenomena’ in Second Life in combination with motion capture sequences based on sensors reading out to infinity at high speed. Figures are also remapped onto or within abstracted spaces representing a kind of tensor calculus of the flesh. The implications of these remappings are many, including new ways to represent data of any sort in terms of the unknown, in an attempt to make sense of the world.
In Code and Other Laws of Cyberspace, Lawrence Lessig argues that the net has no nature, that it does not have an essential characteristic with regard to its freedom. Instead, he claims, the internet will have whatever characteristics we give it, limited only by the physical properties of the materials that comprise its physical layer and the ingenuity of the coders who design its parameters. Following Lessig’s claims about digital technologies’ indifference towards concerns of freedom and constraint, I argue that, far from being the domain of freedom that thinkers such as Richard Stallman once envisioned, networks and the technologies of which they are comprised are amenable to capture. Moreover, in networks’ capacity to act as what McKenzie Wark calls a second nature, a “naturalized” set of moral codes that constrain how we operate in the world, their potential capture by those who would limit our abilities when using it is a danger to future cultural production. Like primary Nature, the set of rules described by the codes of networks delimit how we interact with objects within them, namely cultural productions such as texts, music, and images. Whereas Nature, however, acts outside of the direct control of its constituents, the processes of the protocols of the network are by necessity amenable to the control they were once imagined and created to resist. Their capture by institutionalized power (a movement that is very much underway), I argue, would mean the foreclosure of the future and the establishment of an endless present as the relations between cultural objects remain static and cultural production turns to the construction and maintenance of the same. This paper examines not only the means by which this capture is taking place, but also makes suggestions for its avoidance.
In the wake of communication technologies’ global ascendancy, the concept and practice of technological legislation has emerged within legal discourses as a predominant locale for the study of the relation between enabling digital codes and governing legal codes. Overlapping in time, the intensification of technical powers of molecular manipulation in material, computational, medical and manufacturing domains has presented actors in the legal world with parallel challenges to the meaning, significance and relevance of legal codes. Between politico-philosophical literatures (Langdon Winner), media studies (Jean Baudrillard), legal studies (Lawrence Lessig) and speculative engineering (K. Eric Drexler), traffic in the concept and material articulation of “code” maps a potentially useful set of related concepts, problematics and themes germane to codes of law, cyberspace and molecules. Using this field of tools, this paper will present the outline of a theory of governance through code that privileges legal preeminence and revalues scientific knowledges and technological artifacts as objects of a politicized art criticism first, and operational, instrumental means second. In this theory of code-based technological legislation, active political discrimination and heightened powers of choice based in desire trump traditional reactive decision-making based in fear and perceived necessity.
Don DeLillo’s novelistic output is replete with characters who engage in cryptography as a kind of existential fetish. Robert Hopper Softly in Ratner’s Star who heads the Logicon Project devoted to articulating a definitive mathematical meta-language, Lyle Wynant in Players who tunes in to the transcendental aura of the symbols on the stock ticker, the cultists in The Names whose murder victims are chosen according to their initials; all of these characters illustrate variations on the desire to reduce the chaotic polysemy and infinite jestingness of language down to a brutal equation of sign and signified. DeLillo carries out his most explicit treatment of this theme in his most popular and accessible novel, White Noise. The title of the novel has a cybernetic referentiality in its allusion to Noise as the opposite of Code. White noise is the condition of maximum entropy wherein all messages are equally probable; the televisual horizontality of all semiotic values which characterizes the background of the novel’s hyperreal atmosphere. Against this sprawling triumph of incertitude and equivaluation, DeLillo’s characters seek out codes as structures that promise resolution, integration, and totalization. Jack Gladney’s communion with the ATM, his scholarly investigations into the spell of Nazism, and his seduction into patterns of consumerism are symptoms of his desire to become encoded—to be a cipher in the code or to be someone who knows the code—as a way of escaping the contingency and existential imperilment of the post-Babelian, post-Saussurian condition of perpetual deference in which linguistic being is open-ended and poetic rather than authoritarian and codical.
Since the early days of detective fiction in the 1840s, the solving of riddles and puzzles, and the breaking of codes, has characterized the crime genre. Accordingly, the crime fiction detective has often been compared to other professionals specializing in finding patterns in large quantities of information, such as investigative journalists, humanities scholars, cryptologists, and other scientists. One of the most successful crime writers of the 2000s, Dan Brown, has made codes and puzzles fundamental to his stories, and has taken this comparison literally by making academics the detectives of his novels.
In Digital Fortress (1998), the main plot involves NSA’s supposedly invincible code-breaking machine, which encounters a code it cannot break. A cryptographer/mathematician is tasked with breaking the code. In Angels and Demons (2000), a symbologist and a CERN scientist follow an ancient trail of symbols around Rome in order to save the Vatican. In The Da Vinci Code (2003), a symbologist and a cryptologist decipher riddles and puzzles in order to solve a murder and reveal a hidden secret. And in Brown’s forthcoming novel, “The Solomon Key” (the title referring to NSA cryptologist Solomon Kullback), starring the symbologist from the previous novels, more puzzles and code breaking will allegedly be involved.
In this paper, I will explore the function of codes, and of the deciphering of codes, in Brown’s novels, in relation to the crime genre and its conventions. This is a pilot study for the research project “Science in the Crime Genre.”
One Hundred Years of Solitude (OHYS) is frequently acknowledged as the best 20th-century novel written in Spanish. It is also considered the prime example of magical realism, and its author, Gabriel García Márquez, the movement’s foremost representative.
Magical realism is a Latin American literary movement that explores the relationships between the “factual” and the “unreal” worlds. While the “unreal” is portrayed through myths and legends that continually flow throughout Latin American culture, the “factual” is mostly provided by European-centered knowledge, in which science plays an important role.
The fascination that new scientific discoveries create in the imaginations of OHYS’s characters, and the assimilation process such discoveries undergo in Latin American culture and thought, represent an ideal discursive tool for García Márquez. Scientific knowledge becomes the common ground where both worlds collide. Macondo’s eternal ambivalence between the mythological past and the technological future is shown to us through the process of coding and decoding science.
This paper studies such interactions and also explores questions such as: is there any process for digesting European-centered scientific knowledge into Latin American culture in OHYS? Does science play a role in the development and eventual decline of Macondo? Considering Macondo as an idealistic rebirth of Latin America, does science have a place in the construction of a free Macondo?
The SLSA theme for 2007, Code, derives etymologically from the Latin codex—book or tree trunk. This etymology draws attention to the materiality of the book: as paper, as tree. In a similar way, N. Katherine Hayles has proposed the efficacy of “materialized writing”—texts that draw attention to their material conditions as texts: as written, designed, and read according to certain textual norms.
In this paper I use Hayles’ work on “materialized writing” and Donna Haraway’s work on “cyborg writing” to consider Shelley Jackson’s “Skin” Project. Briefly, the idea behind the project is that Jackson’s 2,095-word short story will be published only once, tattooed word by word onto the bodies of individuals—henceforth known as “words”—who choose to participate in the project. Participants are understood not as “carriers” of words but as their “embodiment.” This work evinces a new way of reading and a new way of thinking about print media, visual texts, and their intersections and collisions on a globally accessible digital screen. In doing so, “Skin” queries the relation of bodies to pages, transferring the materiality of the text onto the material body. In the global-digital age, this new form of embodied textuality calls for considering the material relationships between text, code, and bodies.
This paper suggests that biological archiving provokes and reconstitutes subjectivity and experience, contributing to what Joseph Dumit refers to as “objective-self fashioning…the set of acts that concerns our brains and our bodies deriving from received-facts of science and medicine.” Susan Squier expands on such notions of objective-self fashioning by asking us to consider what she terms “liminal lives,” beings “whose new [identities challenge] the accepted time frame of a human life as well as the accepted notion of civil status available to human beings” (3). Such beings now include frozen embryos, stem cells, and human cell lines, what Kaushik Sunder Rajan refers to as “biocapital.” In nineteenth-century America, such biocapital included slaves. These examples, spanning over 100 years, suggest that the boundaries of the cultural and the biological remain fluid and are continually under revision. Crucially, however, such scientific and technological revisions also consistently focus on the codification and reorganization of race, gender and reproduction.
I explore such concerns through two novels, Mark Twain’s Pudd’nhead Wilson (1894) and Octavia Butler’s Dawn (1987). I argue that a narrative bridge exists between the fingerprinting revolution suggested in Pudd’nhead Wilson and the genomics revolution posited in Dawn. Each novel charts a technological surge that attempts to counter the repeated resurgence of the body represented through the hypermediacy between bodies and technologies. Fingerprinting creates a biological archive, a system and medium to classify and encode identity that, by the late twentieth century, shifts into the surge of technology associated with genomics. That scale of interpretation, of gene scanning and physical mapping, is then transformed by Butler back into the body itself—the ultimate dividing and sorting machine. Most importantly, however, technology in Dawn becomes deeply engaged with, in fact inseparable from, sexual desire and reproduction, which informs a new synthesis of Twain’s understanding of biology, technology, and acquisition in Pudd’nhead Wilson.
While aesthetic practices in photography, film and music have undergone massive transformation due to affordances provided by computational tools, the practice of creative and critical writing has remained largely unaffected (word processors aside). This paper explores the potential impact of algorithmic tools on the writing process and consequently on the way in which we must read and analyze computationally inflected texts.
As ‘source’ material for our investigations we use a series of ‘digital texts’ created by Brown University undergraduate and graduate students enrolled in the Spring 2007 ‘Electronic-Writing’ Workshop. Exploring the results of three exercises in generative writing – recombination, context-free grammars and Markov chains – we focus both on our own observations of the output and on students’ descriptions of the use and impact of computational techniques on their own writing, reading and critical practices.
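The Markov-chain exercise, for instance, can be sketched as a simple word-level model: the program records which words follow which, then walks those transitions at random to generate new text. The following is a hypothetical minimal version for illustration only, not the workshop’s actual code; the function names (`build_chain`, `generate`) are our own.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain from a random starting key, picking each next word at random."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)

source = "the code writes the text and the text writes the code"
print(generate(build_chain(source), length=10))
```

Even at this scale the output hovers between sense and noise, which is precisely the quality the student exercises explore: the generated text is statistically plausible yet authored by no one.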
Since the works in question are open in multiple senses (publicly available as web applets, ‘template’ files, and source-code), our analysis proceeds at multiple levels: from the surface text, to the intermediate ‘grammars’ employed, to the program code, to the multiple layers of software and hardware that constitute our experience of the text.
Our paper examines generative literary practice from three perspectives: as affordances and tools for practicing writers; as a pedagogical strategy for teaching procedural practices to humanities students; and as an emergent ‘text’ for critical interpretation of contemporary (digitally-encoded) literary practice.
The emerging field of chaos and complexity studies comprises works in many fields. The universality of what are considered complex phenomena suggests that human systems in general are shaped by seemingly “chaotic” scenarios. Rejecting reductionism and determinism, chaos and complexity theory favours a holistic embrace of complexity and flux.
Just as we cannot define the workings of a human brain by analyzing an isolated cell, we cannot interpret a work of art unless we take into consideration the dynamics between its individual elements. In poetry, these dynamics appear in such techniques as symbol, metaphor, motif, and irony – essentially in any linguistic device that forces us to understand and experience one thing in terms of another.
German doctor-poet Gottfried Benn was especially invested in the invention and application of new techniques during the early 20th century. His avant-garde approach is striking both in the form of his poems (montage) and in their content: in a Nietzschean zest for life, and analogous to nature’s relentless thrust to create life, the self expands in a state of intoxication to embrace chaos.
I argue in my paper that poetic creation can be likened to a natural phenomenon. Since our search for order is based on the chaos we perceive around us, chaos is in fact the prerequisite for order. It is for this reason that solace can be found in it: chaos lies at the very bottom of the creative act and therefore of life, and holds within it an exquisite promise of transcendence.
Invisible ink protects a written message from the eyes of undesired readers; a special code or set of secret characters denies access to anyone unfamiliar with the sign system. Since the 20th century, however, when secret messages came to be transmitted by radio waves and confidential data began circulating through the internet, such information has been accessible to any interceptor, and cryptography faces new challenges.
One of the most prominent cryptographic methods was developed by the engineer Claude E. Shannon. In his seminal paper on the “Communication Theory of Secrecy Systems,” classified “top secret” for many years, he showed that disguising signals as the mere noise of a communication system can protect information. Any interceptor is confronted with the problem of deciding whether a signal is intended or whether it is just noise.
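Shannon’s intuition can be illustrated with the one-time pad, the cipher his paper proves perfectly secret: a message XORed with a truly random key of equal length yields ciphertext that is statistically indistinguishable from random noise. The sketch below is our own minimal illustration of that principle, not drawn from the paper under discussion; the function name `otp_encrypt` is hypothetical.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with a random key byte; the same call decrypts."""
    assert len(key) == len(message), "one-time pad key must match message length"
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # truly random, used only once
cipher = otp_encrypt(msg, key)
# Without the key, `cipher` carries no statistical trace of `msg`:
# every possible plaintext of this length is equally likely.
assert otp_encrypt(cipher, key) == msg
```

To an interceptor, such ciphertext is exactly the “mere noise” Shannon describes: the decision of whether a signal is intended or accidental becomes, in principle, undecidable.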
In my paper I claim that modern poetry constitutes a kind of “cryptopoetics” that relates to such coding strategies of secrecy systems. Authors of concrete poetry like Max Bense or Eugen Gomringer, in particular, develop a form of poetic expression that is based on Shannon’s theory of communication. Concrete poetry does not aim to conceal a hermeneutic meaning; rather, it exposes its own medial character in a way similar to modern secrecy systems. Secrecy systems juxtapose coded messages and the distortions of a communication channel; similarly, concrete poetry presents signs and their noisy environment as equally meaningful, thereby initiating a vicious circle of reading that resembles the work of a cryptanalyst confronted with a perfectly coded message.
Barthes spoke of text as a “galaxy of signifiers” for which our only map is the cultural, visual, intertextual, semic, symbolic, proairetic (action), and an expanding network of other codes that inform and enhance the narrative potential of texts. As scholarship in this area responds to new technologies and new techniques for creating image and text narratives, examples of its potential for creating global cultural codes are presenting new possibilities and new art and narrative forms. This panel seeks to explore examples of text and image objects in order to discover and to map the secret codes of their narrative power and their potential as harbingers of a truly global age.
Australian performance artist Stelarc has been representing the post-evolutionary body for decades now, and his various actions have one thing in common: they present a disturbing new horizon for embodied humanity. In his actions, voluntary muscles are rewired to be controlled not by his own mind and consciousness but by transmissions received from the Internet and connected directly to his muscles. His stomach is transformed from a site of digestion into a forum for a sculpture to blink light and film the insides of his body for the sake of art. In these and other performances, Stelarc’s art means to foreground the obsolescence of the human body, to recode it as just another, increasingly obsolete, site of consciousness. His collaborative designs, sometimes integrating state-of-the-art technology, are intended to code the body as another piece of equipment in the collective apparatus. By contrast, British roboticist Kevin Warwick has been conducting related experiments in which he has inserted microchips beneath his skin to communicate more directly with his computer. However, Warwick’s approach is more focused on systems analysis—how can the computer connection reveal ways in which we are already wired? Warwick means to record and isolate human nerve transmissions so that one day they can be stored and transmitted between individuals. His work is still far from isolating complex patterns in nerve transmissions, but when Warwick ventures to speculate on where his work will go, he presents a code that is far less revolutionary than Stelarc’s. While Stelarc invents and demonstrates new codes for human-machine interface, Warwick focuses on discovering and harnessing the codes that may already exist. My presentation will compare these two approaches and align them with the difference between systems analysis and systems design, the difference between the approach of science and the approach of art.
One of the central paradoxes of the immense popularity of Japanese manga worldwide is that although it has been inflected with codes and practices from both Asia and the West throughout its long history, it remains, at the same time, utterly “Japanese.” Understanding what is Japanese about these objects, which have become readable and desirable to so many other cultures, can be sought through the dense and complex interplay of visual and textual codes found in contemporary manga. Through codes that indicate relationship, emotion and passion, a cultural tradition of reticence and reserve is seemingly usurped and yet preserved through the deferring signs that deliver meaning at the same time as they mask authentic expression. Or is this simply an additional form of Orientalism? Yet it is this compelling practice that has raised questions about the nature of desire throughout the world and created, in essence, a set of global codes that, while undoubtedly not read everywhere for the same meaning, have nevertheless created a worldwide market. This paper will attempt to designate and map the various levels of coding that have condensed in this narrative form and suggest how desire may have developed a decoder ring for a global readership.
This paper traces the implications of a paradoxical interaction among objects (art and everyday objects), new media such as the internet (the site of coded objects), and critical theory. The paradox stems from the contrast between current critical theories that treat mediation through notions of dematerialization and evanescence, and the interest in materiality that characterizes 21st century art forms. While digital technologies and dispersed networks of meaning seem to enable the replacement of material objects with simulations or abstractions, new cultural practices and artistic interventions insist on a material vocabulary that becomes more urgent in the face of current environmental and global concerns. I analyze a number of objects whose trajectory or presence in culture has been radically affected by the internet, as well as objects whose design transforms the internet itself. I use art pieces by Tobias Rehberger that activate an object’s potential for material expression through the use of internet connectivity; handcrafted objects by folk artists and crafters who increasingly use the internet to reach new audiences; antique objects long outside cultural circulation that have become newly relevant through online sites such as eBay and alibris; coded objects in Java and HTML that seek new levels of mediated presence; and new kinds of objects, such as zipcars, whose design inspires new approaches to circulation and ownership. Through an interdisciplinary methodology and combination of art-historical and cultural treatments of objects, I aim to elucidate the complexity of our relationships with art, technology, materiality and mediation.
In Fritz Lang’s 1927 film, Metropolis, the demon-cyborg is a deus ex machina, a motive force for agency in the direction of the plot. Invoking a demon through occult programming, Rotwang solves the problem of vitality for his robot, and only with the help of a demon can Lang solve the problem of reifying the city’s problems in Maria’s image, a dialectic synecdoche. Lang’s demon directs the agency of the film’s characters. Entropy is a mathematical analogy applicable to thermodynamic processes, and the cyborg icon demonstrates the local and global effects of the entropy occurring at the level of the story’s scenarios. The Maria-cyborg doppelgänger is portrayed as a hysterical female, chaos personified, whose eventual destruction mirrors the implosion of meaning in the structurally informatic matrix of the city, where her thermodynamic entropy within this closed system leads to the destruction of the machines by the worker “Hands.”
“Birdsong must be among the most captivating and complex sounds a human ear encounters. It is also the most elusive to describe. Trying to do so stretches both our linguistic and visual descriptive systems, and poses a very unique translation problem.” So begins Nina Katchadourian’s catalogue description of her performance-installation piece entitled “Please, Please, Pleased to Meet’cha” (2006), a work which playfully engages one of the slipperiest slopes in Animal Studies: that between transcription and translation, recording and projecting, “speaking as” and “speaking for.” Reversing the anthropomorphic logic of most traditional animal representation, the work challenged its human performers to creatively attempt a theriomorphic expression that would employ and then surpass familiar modes of interspecies imitation. Beginning with a variety of human codes for representing birdsong (including mnemonic, phonetic, diagrammatic, and poetic codes) the artist used the protocols of “site-specificity” (the birds chosen were native to trees in the grounds of the performance venue, Wave Hill, a New York cultural center)—and, much more unusually, skill-specificity (the human performers were all United Nations translators)—to create an interspecies performance that materialized language as landscape, and characterized the species divide as a space of limitless creativity. This paper will analyze “Please, Please, Pleased to Meet’cha” in relation to Eddo Stern’s video-game-based performance event “Cockfight Arena” (2001), where interspecies imitation took the form of embodiment and screen avatars, dematerializing the space of human-animal interaction and producing a much more disquieting account of the species divide.
Only male whales sing, so biologists would like us to believe it’s a kind of sexual display. Problem is, female whales show no interest in the song! Only other males pay attention, in a noncompetitive way. Plus, the song is constantly changing, and every whale is singing the same new song as it goes. On opposite sides of the same ocean, the change happens in a similar way. How can this be if the whales are too far apart to hear each other change? No one really knows. By playing along with whales and trying to crack their code by considering it as music, I hope to find out.
The presentation could also be done as a musical performance on one evening of the conference, making use of the sounds of birds and insects, in addition to whales, showing how an interactive approach leads to a valuable understanding of the codes at work with these animals.
Thomas Browne, the seventeenth-century physician, held in his library many works by the Renaissance natural philosopher and father of cryptography Giambattista Della Porta. In 1586 Della Porta, best known for his work Natural Magick, published a now-rare text entitled De humana physiognomia, which used woodcuts to illustrate resemblances between animal and human characteristics. While some scholars, and especially historians of science, have recognized the influence of Della Porta on the eighteenth-century Swiss physiognomist Johann Lavater, fewer have explored his texts in relation to Thomas Browne’s representation of animals. This paper will examine significant relationships between Della Porta’s doctrine of signatures and Browne’s argument in Pseudodoxia Epidemica, or Vulgar Errors (1646) that he saw no organic reason why certain “quadrupedes…might not be taught to speak, or become imitators of speech.” Awareness of this esoteric tradition complicates recent animal studies scholarship in the humanities which has been dominated by a Christian paradigm of “difference.”
Geometric diagrams are scattered throughout alchemical imagery, appearing first in the late fourteenth century and lasting well into the seventeenth within the corpus of alchemical engravings produced by the publishing houses of Lucas Jennis and the De Bry family. This paper will illuminate the alchemical interpretations of numbers, geometric forms and diagrams that provide the underlying foundation for alchemy’s dualistic masculine and feminine figural symbols. Circles, triangles and squares are simple forms, yet they carry alchemical references to the unity of matter; to its three-fold composition from Sulphur, Mercury and Salt; and to the four elements of Earth, Water, Air and Fire. The seven ancient planets (the Sun, Moon, Mercury, Venus, Mars, Jupiter and Saturn), and their oversight over the seven metals, are also represented in diagrammatic form. Comparisons will be made between the medieval cosmological diagrams of theology and astrology and some of the earliest alchemical diagrams in the manuscripts of Constantine of Pisa. As alchemical imagery developed, both simple and more complex diagrams evolved within the early printed texts of Basil Valentine’s “twelve keys” published in the Tripus Aureus (1618); Michael Maier’s Atalanta fugiens (1617); and Robert Fludd’s Utriusque cosmi maioris (1617). Further adaptations of alchemical iconography within Rosicrucian diagrams will also be addressed.
The shadowy and elusive Marcel Duchamp, one of the most influential artists of the twentieth century, was a master of self-invention who carefully regulated the image he projected via self-portraiture and through his collaborations with those who portrayed him, recasting accepted modes for assembling and describing identity and indelibly altering the terrain of portraiture.
This paper focuses on two groups of images created around 1919–1924. One is the regularly addressed collection of images of Rose Sélavy/Belle Haleine/Rrose Sélavy, and their connectedness to Duchamp’s construction of his female alter ego. The other is the less often considered, and not comprehensively discussed, group done during the same period showing Duchamp with various haircuts: beginning with the (mislabeled) Shaved Head photograph of 1919, through those chronicling the haircut given by de Zayas in 1921, to the Wanted Poster of 1923 and the 1924 image for Monte Carlo Bond.
Studied together, these photographs suggest that the shadowy figure of the bachelor is operating behind the veil of the bride. The various haircuts can be read as Duchamp’s effort to posture himself as the CELibate. Rose et al., as various authors have discussed on a number of occasions, assume the role of the MARiée. My effort is to draw the two groups of photographs into one discussion, illustrating the presence of the shadowy and veiled bachelor operating behind that of the bride, the two fused into a portrayal of the androgyne via the masquerade of gender exchange.
Henri Focillon is best known as the author of Life of Forms. This work, with its Bergsonian and organicist formalism, too often pigeonholes or obscures the earlier work of this writer, which delved very deeply into notions from mysticism and the occult. Tropes from these two fields appear throughout Focillon’s earlier work, especially his work from 1930, Maîtres de l’estampe. Most often, Focillon uses such tropes to describe prints, which are, like hallucinations, reproduced images of absent objects. My paper will outline the main instances of Focillon’s use of terminology from mysticism and the occult, connecting these usages to the broader period literature on hallucination, in order to establish a counter-tradition of formalism in Focillon’s writings. Focillon’s work grew and mutated like his theory of forms itself. By focusing on texts that precede Life of Forms, I will reveal that his aesthetics of form has roots in theories of visualization that grew up in the midst of séances and trances, in addition to the better known realm of the artist’s studio.
In 1850, Mary Anne Atwood famously published and then immediately retracted her occult classic A Suggestive Inquiry into the Hermetic Mystery. Fearing she had revealed too much about the secrets of alchemy, Atwood and her eccentric country gentleman father withdrew the volume from circulation, burning as many copies as they could retrieve in a bonfire on their lawn. The secret that she had let loose was that alchemy was about the spiritual self-transmutation of the alchemist through a process of mesmerism, not about elemental transmutation. The chemical language served as a code for spiritual processes. On the other side of the Atlantic, in 1857, Major General Ethan Allen Hitchcock, grandson of the patriot Ethan Allen, once commandant at West Point, and a veteran of the Mexican War, published Remarks Upon Alchemy and the Alchemists, making similar claims about the nature and codes of alchemical texts. These works initiated the so-called “spiritual alchemy” tradition in the West that persists to this day. This paper will contextualize spiritual alchemy in 19th-century sciences of the mind, especially that of mesmerism. In particular, it will focus on the material understandings of mind and soul engaged and transformed by spiritual alchemy.
The introduction of phrases such as “I googled it” and emoticon lingo such as “lol” are reshaping the landscape of the English language. This shift has been both praised for its advancement of linguistic expressiveness and denounced for its flat-out disregard for formal structure. While these debates linger, discussion of the visual vernacular developed by new media technologies does not seem to evoke as much passionate discourse. The cyclical nature of visual culture and its ready consumption through television and editorial media may make it less clear that a boundary has been crossed. However, each new medium, and the experience surrounding it, introduces a new set of visual language cues [codes]. This system is often picked up and appropriated in manners beyond its original context, yet it carries with it the residue of its origin. How is it derived and where is it situated within our visual culture? What connotations does it bring with it? This paper will explore the concept of a visual language system generated by new media technologies, and its ensuing experiences, addressing what they afford us as a means of communication.
When frustrated by a lack of verbal adeptness, we often reach for pen and paper to draw a picture, trying to bridge a gap in our words. When attempting to communicate with others, we resort to visual representation not because we claim to be artists, but because we have something to communicate and our words have failed us. In this context, we use images to carry meaning across a divide, and to convey our thoughts with clarity and precision when we cannot find the right words. This stands in sharp contrast to how images are often used in art, where there can be a high tolerance (even a desire) for ambiguity and multiple interpretations.
Visual coding can provide a shared vocabulary across radically divergent communication systems. Some systems, like art, are marked by a high tolerance for ambiguity while others, like statistics, strive for specificity and accuracy. In collaborative situations, a visual format is often chosen for a specific reason: visual information has the capacity to provide a shared vocabulary across disciplines and to uncover novel relationships that would otherwise be hidden.
Drawing on research in information science, cognitive psychology, computational linguistics, art and design, this paper will explore the nature of information exchanged through visualizations, focusing on the phenomenon of visual codes that enable collaborations in multi-disciplinary environments.
Visuo-linguistic liminal spaces result from the tensions that arise between language and perception. Though it is understood that language is a cognitive mediator of perception, there are suspect creatures in everyday life that reveal a reversed hierarchy where perception, specifically visual scaling, mediates language.
I explore the linguistic and perceptual interplay between shade and shadow, using English as the frame of reference, since other languages such as Spanish differentiate them through context rather than through terminology. Whereas most people can identify “shade” and a “shadow,” few can articulate the difference if asked. “Shade” and “shadow” are different, though there is a place where the two concepts overlap perceptually. As a linguistic intervention, I have identified and named this overlap the “janus shade.”
The act of identifying and naming this overlap is a poetic tool for revealing how the particular cognitive behavior of visual scaling against the environment affects the linguistic process of naming what is seen. Identifying the gaps in language, going there and asking how these gaps relate to consciousness, can not only reveal a new understanding of cognitive processing of language, but also new spaces for the poe(trees) of thought.
Dys-embodiment describes a condition of un-reality brought about by experiences that point to the involvement of our bodies in the constitution of the world. What evokes this sense of unreality is not the sudden awareness of our bodies, but the awareness of the imperceptibility of their involvement, as when Massumi (2002) realises that, on his way up through the office building in which he works, his proprioceptive sense of direction gets disconnected and reconnected in a different way to his visual imagination, as a result of which, when he looks out of his window, “my north was everyone else’s east.”
Massumi’s observations point in the direction of what Rob van Kranenburg and I have termed corporeal literacy. With corporeal literacy we argue for an expansion of the notion of literacy, not least in order to question the notion of literacy itself, as well as the way in which it is part of a constellation of other concepts such as Cartesian subjectivity, the mind-body opposition, print culture and western modernity. Just as visual literacy involves not only a change in the object of reading but also in what is involved in reading and what it means to be literate, so corporeal literacy does not simply mean the transposition of language-related concepts to the realm of the body, but rather a rethinking of the notion of literacy from a position beyond oppositions like language and the body, concrete and abstract, conscious and unconscious.
The title of this paper is borrowed from economics—and especially the economics, beginning with the double-entry accounting invented at the dawn of the Enlightenment and, indeed, the dawn of modern science—that focuses upon the reality of intangible assets. “Intangible Materialism” pursues a definition of materialism that allows for its understanding in relation to the physical facts of physiology, the biological processes of evolutionary adaptation, and the semiotics of meaningful apprehension. Specifically, the paper examines the phenomenon of human pain in relation to its physiology, its adaptiveness, and the manner in which pain is apprehended as meaning as well as sensation. Working with recent studies of the physiology—and the neurophysiology—of pain by Patrick Wall, Ronald Melzack, V. S. Ramachandran, and Antonio Damasio, the paper explores the phenomenon of pain. Of particular interest is the phenomenon of “phantom pain”—pain experienced in lost limbs, for instance—that suggests particular neurological mechanisms that “define” pain. This is examined in relation to what Ariel Glucklich has called the “sacred pain” that is part and parcel of religious experience and what David Morris has described as “the culture of pain” in our “postmodern age.” One way to understand the phenomenon of pain is by means of the semiotics of Charles Sanders Peirce and his presentation of “iconic” signs as ones that emphasize the sensate nature of meaning, and “Intangible Materialism” argues for an understanding of a semiotics of sensation as well as meaning.
Biotechnology, genomics, and the use of pharmaceuticals to treat mental and emotional distress, three examples of contemporary science which significantly impact notions of the subject at the beginning of the 21st century, are brought together by current research in psychopharmacogenomics: the attempt to link recently acquired knowledge in psychiatric and behavioral genetics with psychopharmaceuticals. The purported benefits of such an advance are twofold: (1) the ability to measure individual variation in drug response, which would allow a more ‘personalized medicine’ while contributing to the drugs’ safety and efficacy; and (2) the enhanced ability for ‘risk assessment’ through diagnostic tests which locate the presence of disease-susceptible genes, thus creating potential new marketing strategies to targeted consumer groups.
After introducing the general conditions and precepts of this clinical research, I will address the potential impact of this fast-developing medical technology on cultural ideas of the subject; how is our understanding of sadness, depression, and anxiety affected by the knowledge that such emotions and moods are, in part, genetically encoded?
Such explorations of a ‘biomedically-mediated’ subjectivity will take place against a backdrop of contemporary art, specifically photography and video. While I will address artistic ‘content,’ primary attention will be given to a discussion of form, arguing that the techniques and production methods of this art parallel theories of ‘coding’ that currently inform medical research. My paper will thus conclude with the assertion that, rather than ‘abandoning the subject’ (as is often claimed), contemporary art structurally reflects it through formal processes of coding.
This panel looks at the modalities of code as a media form. In new media arts and visual arts, code has traditionally performed as the architecture behind a functional or actual output. The papers on this panel address what significant changes occur, both theoretically and in the production of art and cultural works, when code is engaged as a representational media form. A discussion of the contemporary reworking of information and aesthetic theory is central to the panel. The panel is composed of media theorists and media practitioners (code writers and artists using code), bringing diverse and highly engaged perspectives to the subject. The issues discussed in the various papers include generative aesthetics, networked art works and network culture, and the history of aesthetically oriented code.
My paper addresses the concept and design of the OpenStudio project and other commons-based generative Web works.
In this paper I look at the emergence of 3D and 2D avatar platforms that move beyond traditional video-game parameters. The subject is the potential transition (or the further augmentation) from a text-based to a media-rich Internet, particularly the advent of 3D graphic multi-user worlds. I address the relationship between the designers of these worlds—the first-level creators of code and script that enable the procedural aspects of the world—and the user experience. I argue that the design and use of virtual worlds has grown increasingly symbiotic with the generation of code-based user-created content. I look at Linden Lab’s Second Life, the Austrian platform Avaloop, and other examples of this emergent form.
In its brief history, machinima has shown some of the symptoms of a split personality. Its diverse origins can be found in practices and technology associated with an array of activities: hacking, replay, skills demonstration, or even just taking screenshots. It is reasonable to reduce this complexity by breaking machinima production down into two fundamental modes: demo and screen capture. Recent production practices, and especially post-production, have eroded some aspects of this division. However, it is worth asking now if perhaps it has been replaced by a new one: code-based vs. object-based machinima. If so, what does this new division mean and is it important?
I will argue that, yes, it is. Thinking in terms of code-based vs. object-based machinima is interesting and provides important clues about the motives, communities, cultures, and legalities of machinima production and consumption. The paper will conclude by asking whether the distinction between code and objects is destined to disappear or thrive, with reference especially to recent work in World of Warcraft and Second Life.
At least since his first novel, Three Farmers on Their Way to a Dance, Richard Powers has concerned himself with what has traditionally been figured as the problem of representation (and, as corollary, he addresses the adequacy and efficacy of scientific models in The Gold-Bug Variations). In Galatea 2.2, he turns more directly to the question of the emergence of cognition (and the cognition of emergence). While purportedly about using words to produce worlds, Galatea 2.2 is also a patiently detailed unfolding of the dense networks necessary to produce understanding. It is a DIY on the concrescence of sapience. In his recent discussion of Alfred North Whitehead, Jim Bono has argued that, “‘representations’ are themselves practices that allow ‘things’ and/or agencies to emerge from their entangled networks to become a part of emergent scientific practices and knowledges.” This paper reads Powers through the lens of Whitehead’s notions of concrescence and processual unfolding in an effort to understand the practices of do-it-yourself intelligence.
Bruno Latour has proposed that Richard Powers’ latest, award-winning novel, The Echo Maker (2006), may be regarded as a significant work of science studies: “It is actually more advanced than science studies, because it allows a freedom of movement in the description of entities and words, which you never get in the very poor vocabulary of the social sciences where you have ‘agent’ and ‘collective’ and ten words maybe to describe the world.” This talk examines the various levels at which Powers’ novel functions as just such an exemplar, how it investigates contemporary ways of knowing as well as the extent to which scientific claims for knowledge do or do not accommodate alternate forms of knowledge. In particular I will attend (as Powers does) to the Jamesian contrast between knowledge-about and knowledge of acquaintance and the interrelations between these. The phrase “a total cipher,” for instance, is used by one character (a neuroscientist and science popularizer) to describe another, whom he also characterizes as “unreadable”; yet at the same time this woman, *about* whom virtually nothing is known by him or any of the other characters, is experienced—again, by everyone—as being “completely with you when she talks to you. More present than any person I’ve ever met.” Or as Weber (the neuroscientist) says late in the novel, “I feel I’ve *known* you my whole life.” There’s nothing unusual about this contrast, of course; what is unusual is the extent to which Powers makes it the explicit center of his novel, as he sets forth a compelling argument for the relative ease of developing “a comprehensive [neurological] theory of self” by contrast with the truly hard problem: “knowing what it meant to be another.” How does one decipher a total cipher? What Powers suggests in The Echo Maker is that the proper procedure is not decoding but more algebraic in nature, a matter of continuous completion figured as a form of echo-making.
Richard Powers’ description of Galatea 2.2 as an extended meditation on Emily Dickinson’s “The brain is wider than the sky” conveniently aligns his interest in how the mind works and the representation of that working with Gerald Edelman’s similarly twinned preoccupation, epitomized in his Wider Than the Sky. Both have throughout their careers devoted themselves, with ministerial purpose, to making the invisible visible, decoding/translating for lay readers what had once been confined to a sacred source, the space of knowing, into the grammar of perception. Calling readers to recognize their election to knowing through, in fact, the “sound” of words on their pages, Powers and Edelman illustrate an essential feature of successful decoding/translation—identification with/of familiar and/or repeated elements. William James reminds us in his Principles that even in reading or remembering silently we first “hear” the sounds of words which serve as stimuli activating wave packets carrying a range of possible meanings out of which an appropriate selection for a present context is made. The range of possible meanings for lay readers can only be drawn from common, ordinary, language not from a specialized, technical one. As Powers puts it, “The heart of the code must lie hidden in its grammar, not what a particular string of DNA says, but how it says it; a language sufficiently complex and flexible to speak into existence the inconceivable commodity of self-speaking” (GBV 77). In the reciprocal relation of effecting the translation of Powers’ texts in their own imaginations, readers themselves become, as it were, macro versions of transfer/messenger RNA, recognizing/identifying familiar elements set in new relations, and in the translation incorporating the new sequences into their own perceptual systems. 
This paper will consider the extended structure of Powers’ work as an animated template repeating and recursively varying throughout its unfolding the process Edelman describes as the Theory of Neuronal Group Selection (TNGS), tracing the activity of mind ever spiralling out into new territory.
In this essay, I argue that A. R. Ammons offers a new model of an organic poetics in his long poems by using their rigid “print-out” form to imitate the constraints that the lattice of the genetic code places upon the flow of consciousness. In “Essay on Poetics,” Ammons writes that he is drawn to “the transcendental vegetative analogy” of organic form, the autopoietic notion that “a poem in becoming generates the laws of its / own becoming.” But he takes this notion of autopoiesis to an even more fundamental mechanism of the body—the genetic code: “but actually, a tree / is a print-out: the tree becomes exactly what the locked genetic // code has pre-ordained.” However, the seemingly rigid “code” behind Ammons’s form, like the genetic code, becomes “strictures that release [him] into motion,” allowing for the emergence of a poem or of an embodied consciousness. Yet for Ammons, as for cognitive theorists like Gerald Edelman and Francisco Varela, autopoiesis is not a closed system. The code allows for adjustments to environmental changes, a process Varela calls “enaction.” Ammons explores this concept not only in the content of his poems, but also in their form, as in the subtle seasonal line shifts in “Hibernaculum.” Critics such as Cary Wolfe have argued that Ammons critiques Romantic models of organicism and nature by embracing cybernetic models. At the same time, I argue, Ammons doesn’t fall into the cybernetic trap of erasing the body. The new organicism of his long poems enacts the autopoietic emergence of an embodied consciousness, an emergence that is released into motion within the structures of the genetic code, the body, and the environment.
In 1972, geneticist Susumu Ohno first used the term “junk DNA” to refer to the majority of our DNA, which has no discernible biological function. Studies indicate that as much as 98% of the human genome is “junk,” meaning that a vast portion of our DNA mysteriously persists in our genetic make-up despite contributing nothing of import to its host. While the importance (or unimportance) of junk DNA is still under debate, its sheer prevalence suggests that a substantial amount of noise or dirtiness may be necessary for a genetic code to work properly. But is this true of all systems of code?
The purpose of my paper is to examine the logic of junk and its relation to the meaningful and productive parts of code from which it diverges. This relation, between noncoding and coding, is apparent not only in molecular biology, but also plays out in the field of contemporary poetry. I intend to look at poets who are sensitive to this dyad and who understand the importance of non-productive junk to the successful transmission of productive codes. The poets I will examine include John Cage, himself a noted theorist of noise, and Canadian poet Steve McCaffery. Both these writers had their ears tuned to the non-productive aspects of codes in order to critique the quest for what must always remain a mirage: a clean code unsullied by the looming spectre of noncoding junk.
Our panel investigates the theoretical potential of Norbert Wiener’s concept of the “operative image” as a linking structure in philosophy, material science, and art. Wiener proposes this puzzling category of image in his God and Golem, Inc. (1964) in relation to his broad concerns with machine learning, machine reproduction, and the place of machines in society. His meditations in Golem have religious and ethical sweep and hint at a number of technologically mediated roles for the image in thought and world-making.
Some implications that the panel explores are:
• The impact on visual studies of the arcane tradition of mechanical pictures and an image-type that exceeds pictorialism.
• How languages have propagated in history according to their own array of image-operations (in the Wienerian sense).
• The trajectory for a philosophy of the “operative” word-image, according to a three-part name/noun taxonomy.
• The use of Wiener’s notion of operative image as a basis from which to analyze Aristotle’s visions of human and artificial servants in his Politics.
The use of machines to produce and reproduce both pictures and language is a familiar feature of modern society with a long history in print, photography and recording technologies. What is less familiar, even alien, are pictures that are themselves machines having mechanical functions at the root of their existence. Arcane examples of pictorial automata lie at the fringes of art history with cleverly constructed moving parts that compose animated tableaux. This tradition of concealed gears and springs lying behind pictures leads up to the abstract “operative images” of cybernetics that, while still marginal to the art world, perform as extensive backdrops to the information society. Now as printed circuits and microchips, these images take on the intricate, dynamic forms of woven textiles and multileveled, functional diagrams. All around us, these frameless and double-sided images behave according to the productive logic of information processing and merge with a more general mechanization of symbol-processing and language-use. At this level, they intensify the temporal disorder at the root of our modern inability to integrate time into lived experience.
My visually illustrated presentation analyzes this problematic convergence of the operative image and disordered time and draws implications for the field of visual studies. I conclude by speculating on the continued relevance of visual studies to computer culture on the basis of operative image-language machines.
In tracing the German cultural history of the rebus, Friedrich Kittler focuses on the unconscious semantics of letter forms, and, in invoking Freud’s notion of the language unconscious, he suggests the operative properties of languages as visual signs. To expand on Kittler, one might say that the history of languages and linguistic propagation includes a host of semantically rich figures and formations in the descent of languages from a common ancestor (e.g. Sanskrit). As well, Semitic languages have elaborate systems for denoting vowel sounds amidst only consonants (e.g. Shewa mobile and Dagesh, thereby doubling as gestural signs), and in relatively recent grammatical analysis, they are also understood to have image-like concreteness in the origins of their alphabets. I would like to articulate a few examples of how languages—especially in their evolution over thousands of years—have their own array of image-operations (in the Wienerian sense) that are created out of elaborate and often concrete histories as diverse as the historical agents who generatively learned them, inscribed them, and appropriated them.
This paper examines Norbert Wiener’s concept of the “operative image” in light of his broader work on information theory, cybernetics, and artificial life while at the same time casting a retrospective eye toward a genealogy of its component terms. Taking as its point of departure Plato’s Cratylus, the paper sketches the trajectory for a philosophy of an “operative” word-image, according to a three-part taxonomy: the name/noun as visual image or diagram, the name/noun as gesture or mime, and the name/noun as tool or technology. The paper then briefly considers 16th and 17th century philosophies of language, especially the problem of the poetic “image” and of mimesis in occult writers such as Agrippa and Ficino, in Shakespeare, and in Bacon, arguing that we find in these writers a concept and more importantly a use of the word-image that is partly emblematic, partly iconic or referential, partly logical or discursive, and partly “operative” or technological. The paper offers these examples as precedents for a model of “operativity” or “operationality” that is finally only partially linguistic, comparing them to Deleuze and Guattari’s notion of “pragmatic linguistics” and the “order-word,” on the one hand, and to Latour’s notions of “translation” and “inscription,” on the other. The paper concludes by suggesting that the very notion of an “operative image” requires a movement beyond conventional philosophical thought (and especially philosophical thought on language and mimesis) to the domain of affect, substance, force, and action, glimpses of which we find in Wiener’s work on cybernetics and artificial life and for which the paper offers a new name: “dramatology.”
I use Wiener’s notion of operative image as a basis from which to analyze Aristotle’s visions of human and artificial servants in his Politics. Aristotle sees slaves primarily in teleological and pragmatic terms: they are animate tools that the master uses to achieve an end. The particular end of the servant is action, as opposed to production—for unlike a loom, a servant is not a tool whose result is a material good, such as cloth. Instead, for the householder or the craftsman, a servant is a type of tool whose chief function is to use other tools, which then produce material results. In essence, Aristotle thinks of human slaves as part of a network of tools that allow the master to “live well” or to conduct his business effectively. It is clear that, because of his teleological focus, in which a slave is merely “a tool prior to other tools,” or a tool of higher position in the hierarchy of instruments, function is paramount (1253b33). The ability of a human servant to take orders and translate them into action is at the heart of a slave’s purpose. Therefore, the humanness of the servant’s form is important only as it contributes to function: human hands can move the shuttle on a loom; human understanding (as opposed to rational thought, which Aristotle contends is absent in slaves) allows orders to be followed. In terms of the relationship between the master and the slave, the foregoing reinforces Aristotle’s view of the slave as a “possession” of the master (1254a-b), as a “tool for living,” and it suggests that servants’ bodily forms are unimportant to Aristotle except for the functions they may provide the master’s body, functions that this philosopher would gladly see transferred to non-human, artificial forms, if possible.
Using some of the latest experimental and theoretical findings in neurology, this paper will examine the concepts of information, code, and communication as they apply to the “communication” between neurons. The argument of the paper is that the mathematical concept of information (as encoded in digital bits), developed in and in the wake of Claude Shannon’s work and often adopted by mathematical biologists, is not applicable, or at least not straightforwardly applicable, to the transmission and the very nature of neural information. Instead, I shall argue, first, that the character of neural information, codes, and communication (to the degree that we can apply such concepts) is more akin to that of linguistic communication. Assuming that such is the case, however, there arise the questions of the concepts of linguistic information, coding, and communication, and of the limits of any such concepts, questions that have preoccupied modern, say, post-Saussurean, linguistics, and philosophy for much longer, since its inception in Plato and the pre-Socratics. Secondly, then, I shall argue that a certain concept of language extending Derrida’s concept (or, in his terms, neither a term nor a concept) of language as writing, which might also be seen as a correlative concept of translation, may be particularly suitable for both linguistic and neural information, coding, and communication, in part by exposing the irreducible limits of the applicability of all these concepts to (in Derrida’s argument) the production of meaning and communication. Third, I shall reconsider the possible use and limits of Shannon’s concept of information in this context, in part via certain concepts of computational linguistics, in particular the so-called “ontological semantics.” Overall, my aim in this paper is to explore new conceptual and philosophical possibilities offered by the multidirectional traffic between contemporary neurobiology and poststructuralist theories of language.
John McDowell has argued that “we cannot make use of the notion of an interface between mind (which inhabits the space of concepts) and world, where the world presents the mind with non-conceptual items for it to work into conceptual shape.” In this paper, I review McDowell’s argument for the conceptual character of the content of perceptual experience in light of some objections raised against it and proceed to consider the implication of McDowell’s thesis for the philosophy of mind. If a ‘black box’ or a ‘brain in a vat’ with an interface to the world are misguided metaphors, what is the better conception? I argue that Hegel’s notion of mind as spirit offers just what is demanded.
This presentation concerns itself with meat/code feedback loops in critical art projects as a form of resistance against the dissolution of bodies into code. For more than half a century, the rhetoric of code has pervaded scientific research as the guiding trope for explaining the human condition. This search finds its apotheosis in the human genome project, which views human being as an elaborate game of Sudoku. What this view neglects is the messiness of human instantiation, the role of meat in consciousness, and the possibility that the finitude of the body is, in the end, the essence of being human.
As Katherine Hayles suggests in How We Became Posthuman, “the posthuman view privileges informational patterns over material instantiations, so that embodiment in a biological substrate is seen as an accident of history rather than an inevitability of life.” At SLSA 2007 I will respond to this important distinction by examining physical computing projects that attempt to put the body—in all of its messiness, suffering, and unreliability—back into information. “Dreadmill 2.0,” for example, involves a treadmill hardwired to a laptop so that the runner’s speed and heart rate determine the outcome of a nonlinear graphic novel experienced while running. “Geiger Cancer” uses a Geiger counter to transform the radioactive energy of cancer patients undergoing radiation treatments into a multimedia display designed for a cancer clinic waiting room. While such projects might be seen as examples of bodies-becoming-code, their ultimate end is to create a closed feedback loop that conspicuously underscores the finitude of the human body.
The introduction of animal genes into the human genome, and vice versa, constitutes a burgeoning and controversial area of research, some of whose results have been dramatized in fiction and film. A cluster of relatively recent works, including P. D. James’s The Children of Men (1992), Nancy Kress’s Maximum Light (1998) and Michael Crichton’s Next (2006), fictionally reflect on the possible consequences and ramifications these biotechnological experiments can bring about. In future worlds where fertility has dropped steeply, these biodystopias consider alternative scenarios in which hybrid babies appear, in some cases, to be the only ones available.
Another aspect I wish to address, which is partially related to the concerns expressed above, is the commodification and consumption of animals by humans. This is being fictionally addressed, however, from the opposite perspective, that is, of the animals’ use and abuse of human beings, who in the texts I will engage with are treated as the inferior species, as objects to be exploited and not as subjects. I will thus briefly look at Will Self’s Great Apes (1997) and Michel Faber’s Under the Skin (2000) which, together with the narratives mentioned above, reflect on the future of human nature, the decentering of the human being from its anthropomorphically central perspective, as well as the precariousness of that position and the porosity of the genetically coded boundaries between humans and the great apes. I will explore these scenarios with recourse to recent work on the genetic code and the genome, both from a scientific and a social point of view.
The broad category of hypothetically extant animals known as “cryptids” resists organization. In addition to mythic creatures like the Australian bunyip and the Tibetan Yeti, cryptids include species thought to be extinct though still ardently searched for, like the Tasmanian tiger and the North American ivory-billed woodpecker; and newly discovered, taxonomically uncertain fauna like the dwarf hominid designated “Flores Man” recently excavated in Indonesia. Although the term is current predominantly among cryptozoologists, “cryptid” is also used in the mainstream sciences to describe feral animals, animals outside their usual habitat, mutants, and hybrids.
This slippage of the term across fields of inquiry suggests the code-breaking characteristics of cryptids: cryptids defy taxonomies and disrupt evolutionary explanations, showing up everywhere. Whether monstrous, lost, found, gone wild, stray, or transformed, they elude human government of the natural world, reminding us of our limited power and knowledge. In this way, code-breaking cryptids also function as recognizable outposts of viability. Typified by an iconic association with place, cryptids are thus mappable, their expression in a variety of forms irresistible—though they remain, paradoxically, inscrutable.
Following Stephen Jay Gould’s advice that “we can best understand a natural object or category by probing to and beyond its limits of actual occurrence,” in this presentation I examine several instances of cryptid code-breaking in narrative and visual modes: science journalism’s speculative revisions of human evolution triggered by the Flores Man discovery; Tasmanian folklorist Col Bailey’s photographic presentation of the extinct thylacine; and artist Alexis Rockman’s refiguration of taxonomies to include all manner of cryptids.
In her Companion Species Manifesto, Donna Haraway uses “metaplasm” (defined as a change in a word by adding, omitting, inverting, or transposing its letters, syllables, and sounds) as a trope for relationality between species. According to Haraway, “metaplasm” suggests “the remodeling of flesh, remodeling the codes of life in the history of species.” “Metaplasm” entails the constitutive enactment of ontology and epistemology, materiality and intelligibility, substance and form. “Metaplasm” suggests that species don’t just have relationships; they are relationships. Working from Haraway’s verb-heavy practice of “metaplasm,” I propose that “animal prefixes”—for example octo-genesis, echino-epistemology, and cteno-theory—literally and imaginatively articulate kinds of relationality between various species and technologies. “Animal prefixes” describe the ways that non-human animals are “always already” present (as in commensalism) in language and representation. Vis-à-vis visual culture, “animal prefixes” suggest that animals are not merely “stand-ins” within their own representations. On the contrary, “animal prefixes” suggest that animals are constitutively bound, by metonym and synecdoche, as both objective subjects and subjective objects with their representation. For example, Jayne Hinds Bidaut’s photographs in Animalerie (2004) and Henry Horenstein’s photographs in Aquatics (2001) illustrate how animals exceed their photographs by devouring and inhabiting codes of representation. These represented organisms are not just in the photographs; they are literally of the photographs.
This paper highlights how art, and in this specific case algorithmic art, can reproduce the experience of trauma. Trauma functions as a unique type of transformed consciousness whereby that which is known becomes unfamiliar. Creating an increased awareness of the structure of consciousness distances the viewer from habituated modes of perception and has the power to lead to greater autonomy.
With the use of algorithms, what seems to be non-sense is in fact perfect sense, as the artist’s hand is removed from the final outcome of the work due to preset parameters. These presets act as obstructions that defy predictable outcomes. While the input data changes, the rules never do. For interactive and networked artworks, the data source can be further randomized, allowing for even greater variability of the outcome. For sound and video works, the phase shifting creates a heightened awareness, as conditioned expectations are undone.
Calling attention to the workings of human perception by simulating and augmenting human information processing in a computer algorithm exposes and subverts the ways that subjectivity is constructed. The binary nature of the computer is an apt metaphor for the looping of data. In video art, loops alternate uncannily between heimlich and unheimlich, as data flip-flops between desire and repulsion, simultaneously creating a traumatic vortex and harmonic resonance chamber. In this way, algorithms and loops contribute to common data becoming unknown as it is manipulated, forcing the question of what is real and what is believed.
A work of art represented as a two-dimensional image in digital form is the starting point for this work. A color digital image can be stored in a variety of formats, such as RGB or chrominance/luminance, with a data set representing the image’s pixel information for each of these quantities. Taking these individual data sets, this work looks into the question of what happens when the image data is taken as representing something entirely different from the art image itself, in this case a quantum mechanical system that will evolve in time according to the laws of quantum mechanics. The result is an image that changes completely over time throughout the frame space. While the original image is no longer recognizable in these time-propagated instants, certain characteristics of the image appear to remain. These characteristics are related to the quantum mechanical conservation laws for physical systems. The relationships between color and space in both the usual image and the quantum mechanical image are explored in this paper, with comments on the role of the observer in decoding what the image represents. The paper will include visual imagery of the image-propagation process for particular works of art, a description of the basic quantum mechanical principles involved, and the encoding process for the mapping.
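The kind of encoding and propagation the abstract describes can be illustrated schematically. The sketch below is a hypothetical illustration only (the paper’s actual Hamiltonian and encoding scheme are not specified in the abstract): it treats one image channel as a free-particle wavefunction and evolves it spectrally, and the conserved norm stands in for the kind of conservation law that survives the transformation even as the picture itself becomes unrecognizable.

```python
import numpy as np

def evolve_channel(psi0, t=1.0, hbar=1.0, m=1.0):
    """Evolve a 2-D 'wavefunction' built from one image channel under the
    free-particle Schrodinger equation, via the split spectral method.
    (A sketch under assumed units hbar = m = 1, not the paper's method.)"""
    psi0 = psi0.astype(complex)
    ny, nx = psi0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    # Free-particle kinetic phase: exp(-i * hbar * k^2 * t / (2m))
    phase = np.exp(-1j * hbar * (KX**2 + KY**2) * t / (2 * m))
    return np.fft.ifft2(np.fft.fft2(psi0) * phase)

# A toy stand-in for an image channel, normalized as an amplitude.
rng = np.random.default_rng(0)
channel = rng.random((32, 32))
psi = channel / np.sqrt((channel**2).sum())
psi_t = evolve_channel(psi, t=5.0)

# Unitary evolution conserves the total norm, even though the
# spatial pattern of |psi|^2 (the visible "image") has changed.
print(round(float((np.abs(psi_t)**2).sum()), 6))  # → 1.0
```

In this toy version the "characteristic that remains" is the total probability; a richer encoding would likewise preserve energy and momentum expectation values under the same evolution.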
Maps are metaphors. Through metaphors we connect what we experience to what we remember. We create knowledge by connecting the new (the present) to what we know (the past) and so maybe predict what happens next (the future).
Our desire to predict fuels our desire to live, to survive. Desire is the foundation of narrative. Narrative reduces to desire, action and result—the structure of story. We exist in endless loops of desire—layer upon layer of stories of varying temporalities and shifting priorities—all synchronized to rhythms of breath and heart.
I make maps. I start with raw code—simple numeric models. As all is number in the computer I can map the numbers to the senses—turn numbers into tangible experience?
The maps loop in time and in the moment. There is synchrony in the sensory vertical and the temporal horizontal. Image and audio derive from the same numeric source. Each maps the other in the moment and through time. It’s visual music in a synaesthetic counterpoint.
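A toy illustration of this one-source, two-senses mapping (my own minimal sketch, not the artist’s actual code: the stream, the 8-bit gray scaling, and the 220–880 Hz band are all invented for the example) shows how a single numeric model can drive both image and sound:

```python
import math

# One numeric source drives both mappings.
N = 64
source = [math.sin(2 * math.pi * 3 * i / N) for i in range(N)]

# Visual mapping: scale each number in [-1, 1] to an 8-bit gray value.
gray = [round((x + 1) / 2 * 255) for x in source]

# Audio mapping: scale the same numbers to frequencies over a
# two-octave band above 220 Hz (A3), so image and sound stay in step.
freqs = [220 * 2 ** (x + 1) for x in source]

print(min(gray), max(gray))                      # → 0 255
print(round(min(freqs)), round(max(freqs)))      # → 220 880
```

Because both mappings read the same stream in the same order, any counterpoint between color and pitch is a property of the shared source, not of either medium alone.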
Musical narrative developed over centuries, moving the listener through time with the Pythagorean struggle of harmonic conflict, dissonance seeking consonance. My little loops engage that struggle at various levels. Color shifts. Composition flows. Image and sound agree, complement, disagree and resolve.
Perhaps it’s abstract expressionism, true to its digital materials, founded in musical traditions and Modernist formalism. But it’s loosened a bit. It’s meant to be fun (God forbid). It’s jazz in color, shape, sound and computation. Relax. Hear the colors. Listen with your eyes.
From the automobiles that convey visitors to Alaska’s Denali National Park and Preserve to the advertisements and adventure narratives that first enticed them to the “Last Frontier,” nearly every aspect of the Alaskan wilderness experience is mediated by technology. Those who come to Alaska in search of wilderness find machines, publications, and products all promising to deliver wilderness—or at least a reasonable facsimile.
The goal of this essay is not to argue for or against the reality or authenticity of wilderness. It is, instead, to examine how technologies of mediation shape and perpetuate American attitudes toward wilderness. For instance, widely circulated nature and wildlife photographs have long served dual purposes for wilderness: they kindle popular fascination with the natural world by making wilderness images readily accessible at the same time as they obscure the material condition of wild landscapes by replacing them with technologically simulated images. By influencing public attitudes about wilderness and wildlife, such technologies can influence public land-use policy decisions.
In tracing the impact of technology on the American relationship with wilderness, this paper focuses on the visual pleasure derived from wilderness images and the technologies that facilitate it. Drawing on the work of Jean Baudrillard, Bradford Washburn, and Laura Mulvey, this paper will examine wilderness photography and videography as technologies that shape our relationship with Denali and other wilderness areas.
Photographs and maps reflect reality. As mirrors of the past, each can be used as a nostalgic reminder or an informational tool. While the art of creating photographs and maps has evolved, the craft of creating maps as objects of desire in today’s world seems antiquated. Although the mode of production is outdated, the genuine antique map is precious. Pre-Renaissance manuscript maps portray the globe in a more revealing fashion than maps born from present-day Geographic Information System (GIS) technology. While today’s mapping structures may offer a refinement in Cartesian precision, fifteenth-century artisans offered more than mere geometry. They portrayed factual information to the viewer through a vehicle fueled by personal experience. Art provided context in the traditional map, where science could only yield data.
My last solo show, “Terra Incognita,” allowed the viewer to be suspended between text and context. Faced with mapping systems ranging from an actual Mercator map to current technologies, viewers invested their own memories into the works. Two works in this show were the seeds that grew into the series I have been involved with throughout the past year. Both employed maps of Paris from an earlier era of romance and wonder. These maps were strategically cut to reveal specific compositions. The Paris works matured into a series of specialized compositions. Each piece draws from a set of characters found in the rich history of European pictorial traditions. One example is the dog as a symbol of fidelity in many European paintings, a symbolism that over the years has given us the popular canine moniker Fido. Fidelity and figures of heroic import all become part of single-frame narratives. Individual readings of the work are universal in recognition yet personal in association.
In the end, the quiet subtlety of the map as a photographic reproduction is the most intriguing element. A digital re-approximation of an abstract geometric recreation of a town redefined by time is just the starting point. These laminated levels of reality-distortion set the stage; characters equally redefined (by time or the viewer) as versions of truth enter into the story. History, fantasy and fiction commingle to communicate a range of ideas from skepticism to hope: from frustration to validation.
Art and science can work together to construct human flourishing. We recognize the height of Greek culture that celebrated empiricism and the birth of democracy. The Italian Renaissance is acknowledged as the rebirth of that intelligent discipline. We celebrate the conciliation of art and science and the ability to solve problems and construct a better world. Phidias and Leonardo, empirically measuring proportion and discovering anatomy’s meaning, symbolize artists working towards seeing the truth. These prior cultural achievements are signified by a transition from theism to recognition of the real “daemon” powers that we must contend with—nature and life, Dionysos and Eros.
In our time, the problem isn’t anatomy/physiology, it is ecology/equality—not concerns of the city- or nation-state, but of the global family. Now, concerns for global justice require a reformation/renaissance. I define 1945 as the turning point, marked by Oppenheimer’s famous quotation, “I am become death,” because we began to see, and now see clearly, the lethal naivety of theistic and technological hubris—and the necessity of designing the new world picture. With the aid of modern science, particularly ecology and the Gaia Hypothesis, we see the world as an interpenetrating culture.
The birth of nature and death of narcissus, BNDN, is the universal code of environmental justice—and an evolving emergence of a supervening global culture. The code signifies two moral necessities: First, acknowledge obedience to nature, and secondly, transfigure our childish narcissism and wish for omnipotence/superiority. BNDN signifies that we need to decode the purposiveness of nature in order to design environmental justice—a tacit birthright to a fair share of nature.
The philosopher Anne, Viscountess Conway, had the good fortune to have as her family physician William Harvey, from whom she sought treatment for the chronic headaches that caused her intense suffering throughout her adult life. Lady Conway’s migraines and other physical ailments provided Harvey and her correspondents with opportunities to comment on the mind-body connection, to consider how changes in diet and exercise might best secure mental well-being, and to speculate on the composition of human blood. One correspondent, Henry More, advised Conway to eat “such kinde of meat as begetts the finest and coolest blood,” and to benefit from the curative powers a “returne into English Ayre” would have on her constitution. More’s lay diagnosis invokes a Galenic model in which venous blood, produced by the liver, nourishes the body: in Galen’s system, nutrition is acquired through venous blood while vitality is produced by arterial blood in the heart (which contained the stuff called “pneuma”). More revives another ancient medical theory by joining diet to a set of environmental factors summarized in the phrase “English air.” This paper takes up the influence of nutrition and other external variables that work on the human body through the medium of the blood, and contribute to the formation of the English national character. I look at Lady Conway’s case history and a range of other seventeenth-century texts that represent the blood and cultural identity within the discourses of early modern physiology.
For a short period during the late seventeenth century, phlebotomy, the ancient and venerable practice of incising veins to let blood, was for the first time in its 2000 year history subject to attacks claiming the practice to be both irredeemably cruel and utterly ineffective. Phlebotomists were nothing more than “bronchotomists” — literally, cutthroats; phlebotomy itself was compared to butchery, an effort akin to amputating an arm just to remove a thorn lodged in a finger. One writer claimed that phlebotomy was invented by the devil himself, intended to suck the lifeblood from God-fearing Englishmen. The virulence of such attacks against a practice so ingrained in the daily life of the populace inevitably elicited equally impassioned defense: detractors were pompous know-nothings, ignorant of the rudiments of physic, thinking to make a revolution in learning merely by dressing up old ideas in new and obscure terms. In part, and most obviously, the tracts that articulate these debates concern the legitimacy of the technique itself; they debate specifically whether or not phlebotomy, if practiced correctly, works as its proponents claimed as an effective prophylactic against and as a cure for a host of humoral conditions. But it will be my argument here that the specific tropings of the arguments, and most especially their differing formulations of the blood, “that sublimest juyce in our body,” enfold a more subtle – and ultimately a more fundamental – debate about the nature of human being and its relation to social and material environments conducted at the cusp of the modern world.
“An ounce of blood is worth a pound of bone”
—18c breeding axiom
Why blood? If that is the opening question of this presentation, it is only to set in motion what I hope will be a mapping of the circulation of “blood,” both as fluid and as trope, in the construction of a particular animal known today as “The Blood-Horse”©. As the animal itself is constructed as a distinctly embodied metaphor for an imagined idealization of Englishness—an equine avatar of an imperial identity of cultural improvement and impossible purity—the various ways in which this particular kind of horse comes to be identified with (and through) blood prompt us to consider what it was about the status of blood in the early modern period that lent itself both to the utopian narrative of perfectibility that motivates 18c rhetoric of horse breeding and to the ironic undercutting of those narratives. The “hot blood” idealized by 17c theories of hybrid vigor and development gives way to the fetishizing of “pure blood” doctrines that champion xenophobic exclusion. While “blood” in these senses operates both as material fluid and as a trope of inheritance and heritability, blood operates also in the more narrowly material sense of an instrument of 18c farriery that again follows closely its human counterpart: bloodletting as a surviving residual medical practice in the form of rowelling, but also in a range of practices (blistering and firing) believed to operate by virtue of the blood and the recently discovered circulatory system. The history and persistence of those treatments offers a kaleidoscopic collision of outdated belief and updated veterinary science. And these lead to one of the most deeply entrenched and intractable questions of modern veterinary practice: why does the blood-horse bleed? Is it in the blood? And if it is, what does one do?
The poetics of code is essayed in this presentation beginning with a critical reading of its title and a description of some of the processes involved in the composition of this title. The purpose is to make a partial examination of the relationship between code and language in a specialized register and specific context, and then to see if any of the principles revealed might be applicable to the theory and practice of writing (in) digital media.
In the Oulipo’s Atlas de littérature potentielle, Paul Braffort contributed the text of a computer program that generated aphorisms based on an algorithm devised by Marcel Bénabou. Braffort offered the program as “a perfectly complete analysis” (311) of how the algorithm operated. Reading the program does not explain much, however. It is written in APL (A Programming Language), which was developed for terse and efficient processing of matrices. A few APL interpreters exist today but Braffort’s program will not execute as published in the Atlas. We can examine the code of the program and the sample output Braffort provided to infer how the program works, but trying to read the program step by step is like reading a dead language for which there is no Rosetta Stone. If we rewrite the program in a current version of APL or another language (an exercise in reverse engineering), we attempt a translation without understanding the code in the original idiom. Bénabou published a straightforward explanation of the algorithm in fascicle 13 of the Bibliothèque oulipienne, “Un aphorisme peut en cacher un autre” (“One Aphorism Can Hide Another”). Braffort’s program seems to obfuscate Bénabou’s algorithm. How can we read the reprinted code?
I will argue that the program demonstrates how the algorithm works by foregrounding the materiality of computer language. In their pursuit of potentialities for literature, the Oulipo makes a distinction between the invention of constraints and their application in the fabrication of texts. The classic example of Oulipian invention is Raymond Queneau’s Cent Mille Milliards de Poèmes, a small book-machine that allows the reader to produce 100,000,000,000,000 distinct poems. Braffort’s program is an intermediate text (Espen Aarseth would call it a cybertext and N. Katherine Hayles a technotext) between Bénabou’s algorithm and the reader’s instantiation of aphorisms. By examining the program the reader can ascertain how modifying the code (e.g. changing the number of formulas and/or words one can use to produce aphorisms) determines the potentiality of the algorithm. The algorithm alone does not signify the vast number of potential aphorisms: one must observe, albeit analogously through the medium of print, how a machine reads instructions and produces output. The reprinted code is a remediation of how the machine proliferates texts through its own language.
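The multiplicative logic of such a generator can be sketched in a few lines. The following Python fragment is not a translation of Braffort’s APL program; it is a hypothetical reconstruction of the kind of combinatorial machine the abstract describes, with invented formulas and words standing in for Bénabou’s actual lists. The point is simply that enlarging either list multiplies the potential output.

```python
import itertools

# Hypothetical formulas (templates) and words, standing in for
# Benabou's lists; the actual Oulipian material differs.
formulas = [
    "{a} is the continuation of {b} by other means.",
    "A little {a} takes us away from {b}; a lot brings us back.",
    "{a} can hide another {b}.",
]
words = ["art", "code", "chance", "memory"]

# Fill every formula with every ordered pair of distinct words.
aphorisms = [
    f.format(a=x.capitalize(), b=y)
    for f in formulas
    for x, y in itertools.permutations(words, 2)
]

# 3 formulas x (4 x 3) ordered pairs = 36 potential aphorisms.
print(len(aphorisms))  # 36
print(aphorisms[0])    # Art is the continuation of code by other means.
```

Adding one formula or one word changes the count multiplicatively, which is what the abstract means by the code determining the potentiality of the algorithm.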
This paper will consider how material practices of computer programming influenced the turn to linguistic practices in art from the 1960s to the present, as well as the development of conceptual art. I argue that the turn to language contends with the problematic invisibility of the processes of digital computation. Unlike the analog computer, where gears and dials reveal the workings of the machine in an insistent, visible materiality, the operations of the modern digital computer are essentially concealed. Nevertheless, the legibility of code fantasmatically insists that the operations of the computer remain visible. Exploring the forms created by Kenneth Knowlton with Leon Harmon in Nude (1966) and with Stan VanDerBeek in their Poem Field series (1964-70), I find a hyperbolically visible language that grapples with code’s address to the other of the technological apparatus. This work does not aspire to “lay bare” the computational apparatus, but to reveal that the modernist insistence on transparency has been upended within a postwar culture of technological miniaturization. In these works, numbers and letters and schematics flicker into cascades of pictorial representation. This work mines a terrain between visibility and invisibility, with the surface of the image appearing in a curious double vision of text and image, code and picture. The tease between legibility and illegibility in these works vociferously problematizes the essentially inhuman address developed by computer code. I argue that in looking back to this early work, we can reconsider linguistic practices in art as the site of a crucial engagement with the nascent culture of computation.
Panelists will consider the ways in which new media practice has informed and can continue to inform new media criticism and theory. This panel will explore the relationships—rhetorical, ideological, and actual—that exist within the academy between multimedia, materiality, labor, and traditional divisional models of research. We seek to map the complex network of relations that exists between intellectual labor, institutional definitions of authorship, technical labor or code, and the challenges faced by authors and presses in producing and distributing multimedia titles. It is our hope that such a discussion will move us closer to a more coherent analysis of the process of making multimedia in the academy and the practical challenges faced by scholars attempting to push the envelope by producing meaningful works of scholarly multimedia.
Unfaithful to the traditional humanities model of disembodied intellectual activity, multimedia scholarship tends toward what Science Studies scholar Andrew Pickering calls the “performative idiom,” a model that foregrounds the material agencies (human, natural, and technological) that emerge as essential to the scholarly production of knowledge. This presentation will situate the institutional tensions between traditional scholarly practice and new media within larger theoretical and disciplinary contexts in order to demonstrate how new media challenges the ways the traditional humanities scholar has been imagined as having a secure and stable position within institutionalized hierarchies of knowledge production. I will consider how scholarly multimedia threatens the coherence of humanities scholarship by insisting on the re-embodiment of scholarly praxis. Furthermore, I hope to re-imagine multimedia scholarship as “cyborg or networked scholarship” that is situated within materially significant intellectual and technical networks of knowledge production. In the end, I hope to demonstrate how far scholarly multimedia has come, and will suggest that we move toward a more nuanced understanding of the material and intellectual potentialities of multimedia as scholarship.
Multimedia authors are in a unique bind when it comes to the doctrine of “fair use.” In addition to decisions over whether it is “safe” to use, say, a video clip or a digital image, the multimedia author must factor in the provisions of the Digital Millennium Copyright Act, which state that all digital copying of material—whether lawfully purchased and fair for use or not—is a criminal violation. The Act was only recently revised to carve out a special provision for educators, basically indicating that they might be copyright criminals, but probably shouldn’t be punished for making a copy for the classroom.
No such fair use or copy provision applies to practitioners of multimedia texts that are not destined for the classroom. The law here is sufficiently murky that Stanford’s Copyright & Fair Use guidelines warn that “The proposed guidelines do not permit reproducing and publishing images in publications, including scholarly publications in print or digital form.” It’s my argument that we are dealing with difficult questions that ultimately have less to do with technology and commerce (though these matter) than with genre: the genre of scholarship, and the genre(s) of multimedia. In other words, what is criticism? Is it “educational”? And what is multimedia, anyway?
If a stone tablet doubles in size, it weighs eight times as much. What is true for tablets is true for other media. As the dimensionality of our media grows, the authoring effort increases super-linearly. With open source tools, and utilizing crowd sourcing, massively multi-dimensional multimedia products can be produced on minuscule budgets. In open source, all products are available at all times for copying, or branching the project in an alternate direction. Unless the geeks work together, they will splinter into less productive sub-groups. Gangs of geeks generating new media become reliant on each other. Individuals learn to serve the needs of the group, often constraining their own ideas to those endorsed by the group. In one sense, this is old news. Open source multimedia generation is just relearning a centuries-old lesson that crowds can generate more than individuals. Shakespeare’s folios are excellent examples of crowd-sourced content generation. The plays themselves were remixes of older stories. Actors improvised portions of the plays before they were recorded in the folios.
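The tablet arithmetic is ordinary power-law scaling: doubling every linear dimension multiplies volume, and hence weight, by 2³ = 8. A minimal sketch of that relation (the function and its parameters are illustrative assumptions, not the author’s model):

```python
def scaled_weight(weight, linear_factor, dimensions=3):
    """Weight after scaling every linear dimension by linear_factor.

    Weight tracks volume, which grows as linear_factor ** dimensions.
    """
    return weight * linear_factor ** dimensions

# A 10 kg tablet doubled in every dimension weighs 8x as much.
print(scaled_weight(10.0, 2))  # 80.0
```

The super-linear claim about authoring effort is the same shape of curve: each added “dimension” of a media product multiplies, rather than adds to, the work required.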
But what’s old is now back in the news. Our legal institutions, fixated on ownership and corporate property “rights,” actively block crowd sourcing. Academic institutions (read “tenure committees”) give little credence to “team players.” Yet modern media authors must enlist in an army to complete multi-dimensional masterpieces. Do you like the ten people sitting next to you? You’d better—they’ve just become as important as your heartbeat for completing your next project. But before you get together, you’d better generate an acceptance of modern models of accreditation that move beyond concepts of “I” and that acknowledge “us.”
In Richard Powers’ Galatea 2.2, the central character, a novelist, collaborates with a cognitive neurologist in an attempt to program an Artificial Intelligence so that it will be able to pass the Graduate Record Exam in English Literature. This semester, after ten years away from the course, I am teaching Introduction to Literature. Many days, standing before my students, I feel some of the frustration Powers’ character feels when attempting to teach untrained neural networks about literature. For what is a classroom of intelligent non-majors at a tech school but a collection of untrained neural networks? (Untrained in literature.) I mean no disrespect to untrained neural networks, nor to untrained students. I have a professional interest, though, in how they learn. Using accounts of classroom discussions and student blog posts, I will offer a brief meditation on how students learn to read inside literary conventions: how they absorb the common notion that a poem has a speaker who is not necessarily the poet, how they distinguish metaphor from literal statement, how they understand plot devices in fiction and drama, and how they recognize or fail to recognize meter. Along the way, I will appeal to Powers’ character for insight, understanding and commiseration.
Philosophy of science credits the criterion of simplicity with a substantial role in the formulation and acceptance of scientific theories. But how do we manage to evaluate simplicity itself? Taking as my starting point the multifaceted representation of code in Richard Powers’ The Gold Bug Variations, I will examine the problem of simplicity in science in terms of a metaphor—scientific investigation as an exercise in interpreting encoded messages—and argue that the criterion of simplicity is far from straightforward, and that the use of precepts such as Occam’s Razor to resolve issues in scientific practice or application of scientific findings is most often useless or even harmful.
Which frightens the new generation of readers more profoundly—slavery or sexuality? It might seem that in these days of widespread sexual frankness, young people would have few fears left about sexuality. However, responses in an almost perfect laboratory for young anxiety—a large, reading-intensive, freshman-level class in science fiction—indicate that loss of liberty is not nearly as frightening as loss of “normalcy.”
Octavia Butler wrote that her landmark story “Bloodchild” was not a slavery story but a “pregnant-man” story. Yet when offered the chance to write about connections to slavery OR her comment about the “pregnant man,” not a single student addressed the pregnant-male issue! However, the pregnant male could not be avoided in Ursula K. Le Guin’s The Left Hand of Darkness. What did students do then?
Through an examination of student responses to these two science-fiction classics, we propose to watch these two works do what science fiction is best at, revealing who we are now through the lens of who we could be.
The insights of theoretical physics of the 20th century have revolutionized traditional concepts of space and time, and the impact of these ground-breaking scientific changes continues to concern present-day philosophy and cultural theory. Helmut Krausser’s novel UC presents an intriguing attempt to incorporate the resulting conceptual shifts into the poetic structure and content of a literary text. It adapts Heisenberg’s uncertainty principle, Feynman’s sum over histories, and string theory into its narratological set-up as well as into its notion of fictional reality. The result is a highly complex text that entails a multitude of narrative perspectives and that addresses a new understanding of time, moving past mere chronology into the spheres of multi-dimensional worlds and parallel realities.
The novel uses the genre of crime fiction to create an ontological plot structure which represents a traditional chronological understanding of cause and effect. The protagonist, Arndt Hermannstein, ranks among the primary suspects in the investigation of a rape and murder that dates back 22 years. This traditional notion of time gradually breaks apart when memory seizures and amnesia jeopardize Hermannstein’s ability to recollect any of the events. The introduction of multiple perspectives which frequently contradict each other further amplifies the mnemonic difficulties he experiences. The text seems to follow the uncertainty principle, which has established contingency and unpredictability as parameters that need to be taken into account in any experiment. Hermannstein’s experiences symbolically represent the loss of a formerly supposed certainty and the disillusionment that follows. Analogous to Feynman’s theory of the sum over histories, in which one particle follows more than one track simultaneously, he encounters different realities that seem to exist parallel to each other in time. In addition to the performance and description of epistemological uncertainty and parallel realities, the text theorizes temporality by introducing a philosopher and writer who turns out to be the narrator of the very story that he is part of. He employs metaphors that string theory commonly uses to conceptualize realities with up to eleven dimensions in order to introduce new temporal constructs, namely Hyperchronos or HC, Polychronos or PC, and Ultrachronos or UC. The title of the novel as well as Hermannstein’s increasing involvement in contingent realities suggest that the text is designed as a thought experiment to describe an experience beyond absolute space and absolute time, beliefs that are to this day still commonly held.
When in the 1980s and 1990s cyberpunk stormed through science fiction worlds and became the object of much critical attention and theorization, one of the flagship elements of its vision of near-future (post)humanity was the digitization not only of the world but also of consciousness. In fact, however, relatively few canonical cyberpunk texts explored the ramifications of digitally-transferable identity, and none really offered a sustained discussion of it.
The goal of this paper is to examine the constructions of the figure of the encoded human in two recent science fiction trilogies – Shane Dix’s and Sean Williams’ Echoes of Earth, Orphans of Earth, and Heirs of Earth, and Richard Morgan’s Altered Carbon, Broken Angels, and Woken Furies—and the ways in which they depart from and undermine the Cartesian paradigm. Apart from the fact that the two cycles offer two different visions of how sustainable the digitized subjectivity can be, they also re-link the mind and the body and radically problematize their relationship. While the identity itself remains digitizable in both series, the texts in question either suggest that the human mind is susceptible to decay while disembodied (in Dix and Williams) or that it is intimately connected with the body, even if the latter is exchangeable and disposable (in Morgan). The paper will also attempt to locate the two cycles and their portrayals of encoded humans in a broader context of posthuman narratives.
The coded patterns of written plays and subsequent film scripts simultaneously cipher and decipher as viewers encounter them through the mediation of film. In this paper I argue that Gilles Deleuze and Felix Guattari’s tetralinguistic approach towards ‘minor literatures’ exposes this ambiguous (de)ciphering aspect of code. Deleuze and Guattari claim this allows one “to forge the means for another consciousness and another sensibility.” In this presentation I take a critical look at code and its relationship to plays and film by examining Titus, the 1999 film adaptation of Shakespeare’s play Titus Andronicus. In exposing the various voices at work in the filmic representation of the code, Titus, this presentation shows that language itself moves beyond representation and into a performative dimension.
Though Shakespeare’s established code of Titus Andronicus resembles the vernacular speech in the film, I show that the code-script of a play or film requires more than dialogue to be complete. By drawing the language of Titus towards its extremities, I place code in a position where its relationship to performative languages is visible and available for questioning, while also providing an opportunity to question aspects of film and ‘minor literatures.’ Using Deleuze and Guattari’s tetralinguistic approach, I examine the four voices present in Titus: the vernacular speech, the vehicular film conventions, anachronistic referents, and mythic representations of emotion and madness. Through the coexistence of these disparate and performative elements, the code Titus becomes film.
Clearly observable on the surface of cyberpunk SF is a transhumanist desire to deconstruct the traditional humanist notion of subjectivity and to revel in the blurring of boundaries between the organic and the synthetic. However, churning underneath this cybernetic revelry is a repressed desire to celebrate the human and to recapture lost humanity in all its physical limitation and metaphysical complexity. In this presentation, I investigate the degree to which William Gibson’s Neuromancer and Neal Stephenson’s The Diamond Age complicate and ultimately reject much of the transhumanism ascribed to cyberpunk SF.
The virtual realities of Gibson and early Stephenson dazzle readers with the seemingly limitless wonder of the Net. Curiously, though, as appealing and intoxicating as the Net is for the characters, there remains a longing for the real. The characters ultimately reject the virtual to experience the real, seeking meaningful relationship with other humans in the physical world as opposed to transhumanist realities. For all the cyberpunk hype, Neuromancer actually reveals the horror of lost humanity within transhumanism and ultimately celebrates the human. This celebration of the human and its physicality is amplified in Stephenson’s The Diamond Age, which figuratively kills the cyberpunk agent in the opening sequence of the novel and transports readers into a world of nanotechnology. However, this new technology is modeled after and often depends upon human biological systems, thus removing the human from the virtual and creating a biological network of interconnecting human bodies located undeniably in the real. These two novels serve as examples of how cyberpunk SF may actually not be such a transhumanist celebration as some have suggested.
In The Moment of Complexity, Mark Taylor writes: “We are living in a moment of unprecedented complexity, when things are changing faster than our ability to comprehend them. […] To understand our time, we must comprehend complexity.” (3) The philosopher is called to the task of developing an interpretation of an emerging network culture which entertains an ongoing dialogue between the sciences and humanities but also between different disciplines.
But how should we go about decoding complexity? Complexity (from the Latin “cum-plexus”) can be understood as an ensemble, one which embraces, encompasses, or connects several heterogeneous discursive terms. It requires a dynamic and flexible decoding model, one which is neither too much nor too little ordered, a structure where order finds itself always at the margin of chaos. For Mark Taylor, such a structure is “a seamy web in which what comes together is held apart and what is held apart comes together. This web is neither subjective or objective and yet is the matrix in which all subjects and objects are formed, deformed, reformed” (12).
I am interested in exploring the hermeneutical possibilities of the network as a decoding model in history, a discipline which is usually founded on a chronological linear narrative. In particular, I analyze the recent book by J. R. McNeill and William H. McNeill, The Human Web (2001) which shows how, since the earliest times, history can be viewed as a web of connections that link people to one another and allow them to exchange information.
The New York Times Magazine recently ran a cover story about “designer dogs.” As I listened to a radio program on this article, in which the announcer and various callers talked about this “shocking” trend and deplored the way we were shaping dogs for our own convenience, I began to wonder why it is that the dog, man’s best friend, raises so much furor.
I suspect that it is exactly because the dog is man’s best friend that the instinctual reaction is so strong. The new designer dogs represent a feminization of the species. The very term designer speaks to a commodification that is implicitly linked with feminine traits. Designer dogs seem to gesture towards consumption for its own sake: these dogs were not bred for some specific job, but for their appearance and cachet. Their appearance also speaks to another way in which dogs have been feminized: these dogs tend to be small, fluffy and portable.
The irony, of course, is that any recognized breed of dog has been shaped for specific purposes. These new dogs, however, represent a trend toward designing dogs for purposes that are viewed as more “feminine.” I argue that this discomfort extends far beyond dogs to modern microbiological techniques such as cloning and gene therapy: when we express fears about designing the perfect baby, are we worried about the way in which we are perverting “nature’s code,” or the facility with which the most masculine of traits can be altered within a few generations?
According to some who see space and time, as well as matter and energy, as already malleable, the new modality of the new media arts is informatics. Informatics includes the science of information, the practice of information processing, and the engineering of information systems. The theories and practices of informatics, built on assumptions of the transferability of the digital and analog, have encouraged new forays into the biological. New media art’s (as well as the contemporary sciences’) more recent moves into nano and bio art may be seen as an outgrowth of interest in bioinformatics and computational biology. What role does the animal play in these scenarios of art and science? What can we learn from the metaphoric and material role of the animal in the new media arts of bioart, robotics, and artificial life? And what do those roles mean for animals themselves?
This paper explains the bull breeding technologies that have facilitated the phenomenon of bovine celebrity as a promotional tool for professional bull riding since 1990. Since the 1970s, rodeo industry managers have been attempting to make bull riding an independently profitable extreme sport, advertised these days as “The Toughest Sport on Dirt.” Cowboys and bulls now often get equal billing as celebrities, often with generous corporate sponsorships. Fans are encouraged to follow “bull standings” (buck-off statistics and points-earned averages), to read the bulls’ profiles, and to purchase endorsed toys, website subscriptions, magazines and other products that grant each bull an identifiable brand image and personality as “animal athlete.” Consequently, rodeo stock contractors who create those “big buckers” are the new self-made men of late-twentieth-century American sports-entertainment, pioneering specialized uses of animal science that have created some extraordinarily valuable bulls.
This paper examines the industry-specific codes describing bull behavior and performance that have made rodeo animal science available to the broader audience of bull riding fans and corporate sponsors. I explain all of these developments as a result of bovine agency, and the broader human struggle to contain and shape animal behavior to the needs of humans and business. I argue that rodeo animals like bucking bulls offer scholars a way of thinking about the natural history of technology and capitalism in a more comprehensive, inter-species manner.
This paper is concerned with understanding computer music in political terms: how its early history and formal structures are bound up with cybernetic theories (themselves forged in relation to high-technology military agendas); and how artistic deployments of computer music may accomplish politicized critique of the ideologies from which its technologies and forms emerge. To address these issues, I will examine the work of composer and philosopher Herbert Brun, which is uniquely situated at a nexus of cybernetics, early computer music, and countercultural critique. Brun formulated many of his ideas in the context of cybernetic research at the University of Illinois in the 1960s and 70s, and co-taught courses with Heinz von Foerster at Illinois’ Biological Computer Lab. In 1976, Brun developed the SAWDUST music programming language, which used a nonstandard technique of waveform synthesis that embraced the generative powers of the computer as a means of transcending the limitations of human language. SAWDUST included commands like “mingle,” “merge,” and “orientation,” terms that imply a simulation of social interactions within the ordering of sounds themselves. By focusing on disruptions of the waveform, and forging new relationships between its individual “constituents” and the “transformations” between them, Brun imagined a potential for enacting social radicalism within the form of synthesized sound. I will discuss Brun’s work in SAWDUST as both a musical and programmatic code, and a social and political code as he envisioned it. I will also discuss Brun’s legacy in relation to subsequent structures and politics of computer music.
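Nonstandard synthesis of the kind described above can be suggested schematically. The following Python sketch is purely illustrative: it does not reproduce SAWDUST’s actual commands or semantics, and all function names here are hypothetical. The point is the principle that a waveform is built directly from small constituent segments rather than derived from an acoustic model.

```python
# Illustrative sketch of nonstandard waveform construction.
# These functions are hypothetical, not SAWDUST's actual commands.

def element(amplitude, length):
    """One waveform constituent: a flat segment of samples."""
    return [amplitude] * length

def link(*segments):
    """Concatenate constituents into a single waveform."""
    return [sample for seg in segments for sample in seg]

def mingle(a, b):
    """Interleave two waveforms sample by sample."""
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

# Two "socially interacting" waveforms, mingled into one sound.
wave = mingle(link(element(0.8, 4), element(-0.8, 4)),
              link(element(0.3, 8)))
```

The interleaving operation gestures at the way such terms imply social relations among sonic constituents, rather than imitation of any natural sound.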
The threat of the hegemonic programming of cultural memory by the fourth estate demands responses from resistant subjects. When a resistant subject opposes mass-mediation, her relatively pitiful resources limit this counter-mediation, and when using the same methods as the fourth estate, she increasingly lags behind it. Instead, resistant subjects must work to actuate and maintain cognizance of the mediation and revisability of information patterns. Such work includes preventing mass-mediated images from saturating the visible.
Focusing on photography as a means of resistance, I consider Roland Barthes’ notion of punctum in developing an argument that the resistant photographer must pursue maximum variety among photographs, beyond the limits of intentionality. Among produced photographs, variety ensures that the introduction of these photographs into image-filled situations creates differentiation where image patterns otherwise approach undifferentiated continuities. Variety is prerequisite to the de- and re-programming of memory codes because differentiation lets information in visual space be read rather than simply experienced. To maximize variety among photographs, the photographer must maximize her use of a variety of methods of photographic production. Only to the extent that the photographer circumvents her own intentionality, conscious or not, can the results achieve enough variety to effectively differentiate. Perhaps the photographer can best conduce photographic variety by initiating self-modifying mechanisms that develop through recursive processes, which include alterations to their own formulas of development, thereby removing development from the technics of the human eye and hand.
Artist Rebecca Belmore and novelist Larissa Lai attend to affect as an animating force in their work. Engaging the space around the writing and the performance of art is crucial to the way their work remobilizes culture as embodied practice—vital to re-articulating the work of art in an age of hypermediation and coded digitization of material bodies and social relations. This paper will investigate the politics and philosophical pragmatics of affect in contemporary literary and visual production in Canada. Drawing from collaborative research on works by Lai and Belmore, we will investigate code not simply as metaphor, but as critical-creative performance with the potentiality to de- and re-activate prevailing struggles around aboriginality and racialization in a North American colonial present. Belmore’s Vigil, performed during the Talking Stick Aboriginal Arts Festival in Vancouver, British Columbia, 2002, takes the affective body as the site of code alteration. Lai’s Salt Fish Girl provides a feminist re-writing of Ridley Scott’s Blade Runner (itself an adaptation of Philip K. Dick’s Do Androids Dream of Electric Sheep) that explores questions of cloning and genetic modification by resituating the present within a dystopian, not-so-futuristic Vancouver.
To unpack the various intersections and differences assumed in our discussion of the work of these two artists, our presentation will look to theories of affect (Massumi), digitization (Mitchell, Hayles, Braidotti), the technologization of gender and race, and recombinant realities (Stacey). Our question, put simply, concerns the ways in which technology reconfigures creative praxis: what is the work of art? How does literature circulate beyond books? What are the critical topographies of performance?
This panel will consider some of the more nagging questions and persistent problems raised but unresolved by recent scholarship on embodiment (e.g. Hansen, Hayles, Massumi, Munster, Sobchack). Literally meaning “putting into a body from without,” em-bodiment necessarily operates against and through materials that are not its own. Embodiment requires historical contexts for its actualization. Experiences, performances, and concepts of embodiment derive from already historical, marked, contingent bodies. This year’s conference theme “code” – often suggesting the transformation and re-inscription of existing bodies from one medium into another – reminds us that disembodiment occupies a prominent place within articulations of embodiment. For these and other reasons, our panel will consider whether there can be a meaningful notion of embodiment without something “against embodiment.”
This paper considers the burgeoning field of embodied conversational agent (ECA) research and how it complicates theories of embodiment in human-computer interaction and phenomenology alike. ECA researchers argue that embodiment is a fundamental condition of human computer interaction and use embodied virtual agents to provoke embodied performances from human users. Watching humans react to machines and making machines imitate humans, researchers claim to discover rich, often obscure complexities characterizing human embodiment and its dialogic performance with electronic bodies. Hence, digital technologies long charged with eviscerating human embodiment provide occasion for its enunciation and insistence.
However, the vision of human embodiment elaborated from these experiments is peculiar. I detail these experiments’ particular focus on gesture, and humans’ tendency to assimilate, mimic and embody gestures from their machinic others. As the bodies of human and machine emerge, each embodying the characteristics of the other, certain human-centric theories of human embodiment falter. Instead, human embodiment takes form as a self-differing movement of exteriorization, defined by a lack that drives technical assimilation from without. I argue that these experiments’ conceptual richness and promise stems from organizing human and machine in relations of composition, rather than opposition (Stiegler). This allows a complex articulation and re-discovery of the native human as indebted to technics. Out of this I provide an account of humanism’s renewed relevance in a new media age.
In her article “The Materiality of Informatics,” N. Katherine Hayles suggests “a new postmodern subjectivity has emerged” with the “crossing of the materiality of informatics with the immateriality of information.” In other words, the intangible code of information and the tangible substrates—including the physical body—that allow for the flow of information have come together to form a new, more fluid subject. This fluidity allows us to re-create our subjectivity at will, dependent on which pieces of information are collected and fitted together.
If, then, as Hayles suggests, subjectivity is fluid, what is to stop us from creating multiple versions of ourselves by assembling different pieces of information and “downloading” them into a new substrate such as a robot? How “accurate” would the robots be? What if we lost control over our personal information and someone else created the new versions? What if another human was attracted to this new robot version instead of the original human version? This paper will explore these and other questions pertaining to embodiment and the body through an examination of an episode of Matt Groening’s animated series Futurama called “I Dated A Robot.”
In this talk Anna Munster discusses some of the problems of embodiment raised and unresolved in her recent book Materializing New Media: Embodiment in Information Aesthetics. Delivered via live webcast from Australia, Dr. Munster’s talk will instantiate the very problems thematized by her discourse.
Biology affects society, especially when it comes to the brain. Neuroscientific research defining subjectivity constrains the parameters of a subject’s sense of reality, including her encounters with art that could otherwise have challenged those parameters. Relatedly, cognitive neuroscience assumes relations between the mind and the brain, but foregrounds the social mind over the biological brain. The ethical imperative to include the biological in considerations of the social is frequently left unacknowledged. The social can constrain the biological: transmissible spongiform encephalopathies (TSEs) bear the stigma of human cannibalism. Since the discovery that cannibalism can transmit—but not cause—TSEs, cannibalism’s stigma remains, subtler but more persistent in the form of industrial agriculture’s practices of feeding livestock their own kind. This social, economic, biological and epidemic practice broadens the implications of a “cannibalistic society” that consumes itself in order to survive. Our panel investigates the costs and necessities of sheer brain biology.
In his unpublished Adventures in Lobotomy, lobotomist Walter Freeman claims that “no account of lobotomy would be complete without a discussion of the effect [of] newspapers and popular magazines.” In this paper, I examine the representation of both the lobotomy procedure and lobotomized patients in these media. When lobotomy enjoyed its status as the newest and most progressive treatment for mental illness, its representations in the press focused on the conditions of the patients before the procedure; in these early representations, nearly every “case” used to demonstrate its success was female. In the late 1940s (specifically, 1946), when the tone of the popular press toward lobotomy became negative, the case studies represented were almost all men. This paper argues that this shift in tone and narrative content not only reflected dominant ideologies of gender and medicine, but that these later representations (men as mindless, unfree, “zombies”) were largely influenced by the increasing Cold War panic about liberal subjectivity.
Cognitive psychiatry has found the analogy of the brain-as-computer useful. This “cyber-psychiatry” replaced organic notions of the brain with cybernetic models of code crunching, information transmission in bytes and digits, and circuited confusion. Outside of cognitive science, the mind-machine interface, with frequent allusions to neural networks and code transmitters, has become popular in literary imagination. Opposing this brain/computer analogy, neuropsychiatry has argued for a representation of the brain complex enough to understand its dynamic physical states. It is this representation—one that might replace the brain/computer analogy in science as well as literature—that will be necessary if a neuroethical code is to emerge, capable of addressing issues from genetic marking to drug therapy. Cultural, social, linguistic, or literary representations, if we presume an ontological separation from the physiological particularities of the brain, will be conceptually inadequate. By moving from a cyber-psychiatry to a neuro-psychiatry, I hope our literary imaginations will move us outside of the old metaphor of a centralized, mechanistic, coded, and informational nervous system to a critical, though empathetic, engagement with embodied and intimate neurological matter.
When scientists first discovered kuru, the first prion disease known to affect humans, they associated it with cannibalistic practices occurring within the population that harbored the disease. For a while, kuru was thought to be caused by such practices; observations of other prion diseases, such as scrapie (in sheep) and Creutzfeldt-Jakob disease (in humans) not associated with cannibalism disproved this hypothesis, though transmissible spongiform encephalopathies (or TSEs, as prion diseases are called) continue to carry connotations of the savage and exotic. More pressing than what caused the disease in the first place is the question of how prions reproduce themselves once they are in the body. Unlike viruses or bacteria, which reproduce via DNA and RNA, prions do not have a visible, measurable or even predictable reproductive process. So while pathologists have been able to crack the DNA and RNA codes of many bacteria and viruses, prions have no such obvious language between them. Moreover, prions have continued to be associated with an (albeit more mundane) version of cannibalism, with the rise in industrial agriculture of feeding livestock their own kind, justified as an economic necessity. Given prions’ associations with same-species consumption along with their inexplicable communicability, prions occupy an isolated and self-destructive spot in pathology and in the cultural imagination. My paper will explore how prions have come to occupy that space.
We offer a reading of The Sumer Game, also known as Hammurabi. This program, the first popular political simulation, was originally written by Rick Merrill in his FOCAL language for the Digital PDP-8. Hammurabi allows the user to govern through simple actions such as determining how much land should be bought or sold, paving the way for Sim City and many contemporary simulation games. The program was popularized in David H. Ahl’s 1978 BASIC Computer Games and went on to be often ported, rewritten, and adapted by computer hobbyists. Through a deep reading of the code itself, we characterize the relationship between this code as an executable program and the human, textual meanings that this code had—during its original development, for a programmer porting or studying the game, and for a player looking at the code to figure out how the simulation works. We discuss the politics of Hammurabi’s simulation by looking at the model it presents of the management of a society, the concept of the computer’s role in that management, and the way it was constituted in accessible code. After an era of mainframe computing in which the computer was often seen as an omniscient central planner, it would be easy to finger this early “god game” as pro-centralization. We show, however, how Hammurabi may have actually helped to change the image of the computer as a bureaucratic and potentially dictatorial calculator to that of a tool for thinking about the world.
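The resource-management turn that Hammurabi pioneered can be suggested in a brief sketch. What follows is a loose modern paraphrase in Python, not Merrill’s FOCAL or the BASIC printed in Ahl’s collection; the variable names, prices, and yield formulas are stand-ins chosen only to show the shape of one governed “year.”

```python
import random

# A loose, illustrative paraphrase of one Hammurabi-style turn.
# Constants and formulas are hypothetical, not the original code's.
def play_year(state, buy_acres, plant_acres, feed_bushels):
    price = random.randint(17, 26)          # bushels per acre of land
    state["grain"] -= buy_acres * price     # buy (or, if negative, sell) land
    state["acres"] += buy_acres
    state["grain"] -= plant_acres // 2      # seed grain for planting
    state["grain"] -= feed_bushels          # grain released to feed the people
    state["grain"] += plant_acres * random.randint(1, 5)  # harvest
    starved = max(0, state["people"] - feed_bushels // 20)
    state["people"] -= starved
    return starved

state = {"acres": 1000, "grain": 2800, "people": 100}
starved = play_year(state, buy_acres=0, plant_acres=900, feed_bushels=2000)
```

Even this toy version shows how the model embeds claims about governance: the ruler acts only through a few aggregate levers, and the population appears solely as a quantity to be fed.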
Since Kenneth Burke uncoupled rhetoric from orality and writing, rhetoricians have investigated how other media with different inscriptive practices construct arguments. One of the more widespread of these trends is visual rhetoric, the study of constructing arguments with images, as in photography or advertising. Another is digital rhetoric, the study of constructing arguments with computer-based writing.
But computers enact representation neither by producing images nor by digitizing text—although they are certainly capable of doing both. Instead, computers create representation in code. The practice of inscribing rules of behavior into a computational system through the authorship of code is sometimes called procedurality.
I suggest a new domain for rhetoric, which I call procedural rhetoric. Procedural rhetoric is the practice of using processes persuasively, just as verbal rhetoric is the practice of using oratory persuasively and visual rhetoric is the practice of using images persuasively. Procedural rhetoric is a general name for the practice of authoring arguments through processes; it is a kind of rhetoric that makes claims about how things work by constructing models of how they work, rather than by describing their function in voice, letter, or image.
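The distinction can be made concrete with a toy example (hypothetical Python, not drawn from the paper itself): a claim such as “compounding debt outruns wage growth” is argued here not in a sentence but in the rules of a small model, whose parameters are illustrative only.

```python
# A toy procedural argument (hypothetical example): the *rules*,
# not any sentence, assert that debt at 20% interest outruns wages
# growing at 3%.  Running the model enacts the claim.
def years_until_insolvent(wage=30000, debt=20000,
                          wage_growth=0.03, interest=0.20):
    years = 0
    while debt < wage * 10 and years < 100:
        wage *= 1 + wage_growth
        debt *= 1 + interest
        years += 1
    return years
```

A reader persuaded (or unpersuaded) by this model is responding to its procedures—the chosen rates and the insolvency rule—which is precisely the register in which procedural rhetoric operates.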
To illustrate the function of procedural rhetoric as an analytic strategy, I focus on videogames—a type of software that relies more on code than on images or text—offering examples of how to read videogames for the procedural arguments they construct. The discussion covers both popular commercial videogames and experimental/artistic videogames; some original games by the author will also be demonstrated.
Critical Code Studies (CCS) names a set of approaches to the interpretation of computer source code, explicating meaning beyond mere functionality. Building on my article in the electronic book review and presentation at MLA 2006, I will develop the argument for CCS, while suggesting and modeling a variety of interpretive techniques. To further develop these approaches, I will present an extended reading of a body of LISP (List Processing language) code used to model terrorist networks. The demonstration will focus on a Deductively Augmented Data Management system adapted as a counter-terrorism tool by Stephanie August in the 1980s. This code not only models the behavior of governments and organizations but also attempts to systematize the mental process of logical inference for the use of military strategy. My reading will trace out the ways in which transnational information-sharing becomes a means of command-and-control through the modeling of terrorist entities, transnational alliances, and the circulation of information between insurgent military groups.
Science fiction studies in literature, film and television forms a reasonably well-developed field of inquiry, but academic analyses have only recently turned to the relationship between science fiction and music, and the intersection of sf and heavy metal has as yet received little critical attention. An examination of science fiction inflected metal reveals many of the same concerns that surface in the sf of other media, including cultural hopes and fears surrounding the implications and consequences of technological and scientific advancement. Some of the most fully elaborated of such works include the technologically oriented albums of the bands Voivod and Fear Factory. Merging the guitar-based aggression of underground metal with high-tech sf themes and digital music-making technology, Voivod and Fear Factory have produced recordings that do not simply feature science fiction lyrics—they also explore humanity’s relationship with technology and science by incorporating such themes into their soundscapes, artwork, liner notes, and videos. Ambivalent rather than clearly technophobic or technophilic, these works expose a heightened awareness of technoscience’s promise and threat, most often characterized in terms of hope for transcendence and salvation through technology or, alternatively, fear of technology’s destructive and lethal power. Through analysis of selected lyrics, sounds and images from Voivod’s and Fear Factory’s official album releases, and with reference to the critical work of Hayles, Ryan and Kellner, Sobchack, Telotte, and Winner, this paper will discuss the heavy metal perspective on advanced technoscience and its relationship with broader cultural debates, anxieties and desires.
During the debate over human reproductive cloning, reporters, ethicists, and policymakers alluded to Shelley’s Frankenstein and Huxley’s Brave New World as shorthand for the threat posed to society by this new technology. This invocation of literary works framed the debate about human cloning as one based on cultural values rather than technological feasibility.
Much of the two works’ rhetorical power when appropriated by opponents of scientific research comes not from the actual stories, but from the meanings the stories have taken on over time. Science fiction’s role is to provide plausible stories of what happens in a culture infused with technology; thus, these two novels have come to function as modern myths. In this role, they can be used as coded references for the potential troubles associated with scientific and technological innovation.
In this paper I consider the cultural importance of these two works by analyzing their primary themes. Though the two works are indeed concerned with the power of science and technology in society, this is not the only critique offered by either author. An examination of the novels in their historical contexts illuminates the concerns the two authors shared. The main fear for both authors was the use of technology without regard to human dignity. The lesson drawn from the works indicates that rapid technological change is not necessarily destructive if combined with oversight and limits.
The perspective of the posthuman offers us an opportunity to move beyond the antagonistic relationship between nature and technology that has persisted throughout Western thought. Furthermore, it also accounts for a definite tendency in the imaginations of various science fiction texts, from cinema to pulp novels. But the posthuman has its limitations as a scholarly hermeneutic. The goal of this paper will be to examine gadgets, both as material technologies and literary tropes, as technological artifacts that defy the posthuman and work actively (actantly, even) to reconstitute liberal humanism in the context of contemporary American technoculture.
Particularly striking examples of this “gadget logic” include the filmic adaptations of Philip K. Dick’s short stories, “The Minority Report” and “Paycheck.” Their deployment of digital video as a code for understanding memory and cognition presents audiences with no definite posthuman fantasy. Instead, as we shall see, the adaptations articulate a brand of ambivalent technophobia that at once embraces the ubiquity of information technologies and at the same time demands that a human meta-subject always be stationed at the controls. The category of the human, no matter how elaborate the technoculture, still matters as an imagined locus of liberal (and capitalist) subjectivity.
This paper examines two works by Maine publisher and author Fitzallen: a traditional story of seduction and an exception to the rule. In works such as The Saco Factory Girl, mid-nineteenth-century sensational fiction imagined textile factory workers as the inevitable victims of men. Seduced and abandoned, the prototypical factory girl ended her life in illness, prostitution or death. Authors inscribed these dismal fates on female bodies, displayed in physical features and unconscious behaviors that signaled a prurient opportunity available for a seducer’s, and the reader’s, hungry gaze. However, in Fitzallen’s The Biddeford Factory Girl, Adelaide Richardson, robbed of her virtue and her money, rewrites her fate by recoding her body. Passing into a new race, class and gender disguised as a black servant man, the Biddeford factory girl seeks revenge on her seducer and the father who disowned her. Her body, once the source of her victimization as the subject of her father’s will and the factory’s labor, and the enticement to seduction, becomes the instrument of her revenge as she robs the thief, regains her status, and retrieves her wealth with interest, revealing a keen understanding of the position of women’s bodies in the cultural and capitalist economy.
This paper examines the interrelationship between Victorian law, women, and the process through which subjectivity becomes encoded in texts. Historically a vehicle for the transfer of property between men, the woman’s body has no value in itself; it is a vile body that becomes valued only in relationship. This economic structure is made excruciatingly apparent in Rider Haggard’s 1888 novel Mr. Meeson’s Will, in which the heroine, stranded on an island with a dying millionaire and two sailors, encourages him to have his last will and testament tattooed onto her back and then suffers the further mortification of being offered as evidence in a trial over its validity. In this way, the tattoo and the will function as textual markers of subjectivity but raise the question of exactly whose will, whose labor, and whose identity they represent. This last issue is central to the ensuing courtroom debates over whether Augusta is a woman or a will and raises the further question of how law encodes and decodes human subjectivity.
Conventionally known as the first self-identified Chinese American writer to publish journalism and fiction, Edith Maude Eaton chose to write under the Chinese pen name of Sui Sin Far and to champion the cause of Chinese immigrants during the most sinophobic period in American history. Anti-Chinese sentiment in the late nineteenth century fueled a proliferation of exclusionary policies and sites of surveillance, which relied upon a racialized dichotomy between healthy bodies and those who were both sick and sickening. In a gesture that both resisted and appropriated this normative and exclusionary discourse, Sui Sin Far marked herself not only as biracial with Chinese allegiances, but also as a neurasthenic, during a time in which such a designation was coded as both white and leisure class. In this paper I explore Sui Sin Far’s invocation of nervousness as a counter-discourse to the technologies of surveillance, medicalization, and racialization that underwrote Chinese exclusion from American territory and identity at the turn of the century. Countering dystopic constructions of Chinese immigrants as diseased invaders threatening the health of American society and its white bodies, her writings manipulate prevailing codes of the body to valorize nervousness, fluid racial identities, and border-crossing among Chinese immigrants as alternative sites of healthy subjectivity and community identity.
This paper draws on examples of human-robot relations from fact and fiction to explore the idea of codes in communication between humans and machines. The examples enable discussion within and beyond current limitations in robotics technology, and also highlight resonances between fact and fiction to support the argument that robots need not be humanoid to sustain sophisticated human-robot interactions.
The term ‘machine code’ is normally used to describe low-level computer programming languages. However, this paper offers an alternative understanding: one of ‘machine codes’ as social codes of speech and body language used by robots to communicate with humans. Real-life ‘sociable robots’, such as Kismet at Massachusetts Institute of Technology, support this idea that robots can facilitate interactions with humans by communicating using social codes.
Sociable robot design follows the assumption that human form is required to facilitate human-robot relations, and Kismet’s interactions are seen to rely on its human-like facial expressions. However, this paper argues that non-humanoid robots could also develop sophisticated interactions with humans using language and emotional expression, without relying on similarities in form. This argument is supported by a consideration of human relations with drones, the non-humanoid robots in Iain M. Banks’ Culture novels. Drones share a common language with their human counterparts and communicate using tones of voice and coloured auras to encode their emotions. The rich interactions between drones and humans are used to provide support for this paper’s contention that non-humanoid robots offer many interesting possibilities for the development of meaningful relations with humans.
What are our inner, hidden experiences that desire to be heard, to become external to ourselves, but instead become compressed into the confines of our linguistic systems? What does it mean to express the unspeakable to a machine? To another person? Antonin Artaud suggests that “to make metaphysics out of a spoken language is to make language express what it usually does not express”. Given the limitations often imposed by living in Western societies, Artaud’s entreaty to “turn against language” remains limited to artists or the insane. However, using a technological artifact as a mediator can enable one to perform actions not otherwise taken. In this paper I present my work developing and using syngva, a robotic creature that moves in response to non-speech human vocalizations. The creature and human exist in a self-sustaining loop: movements of the creature encourage the person to explore new types of non-speech sounds, while the creature listens to the vocalizations and develops new types of movements partly through evolutionary algorithms. Early experiences with the creature show the deeply personal sounds that are drawn out of people through interactions with the robot. The creature’s mediating presence enables the externalization of much that would otherwise remain inside us. Language is pushed aside, the sonic units of speech replaced by the never-before-heard yet still known.
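The coupling described above—vocal input driving an evolving repertoire of movements—can be sketched abstractly. The following Python fragment is a hypothetical illustration of one common evolutionary scheme (a simple (1+1) evolution strategy); syngva’s actual representation and fitness measure are not specified in this abstract.

```python
import random

# Abstract sketch of a vocalization -> movement loop: a movement
# profile evolves toward features of the incoming sound.  The
# representation and fitness measure here are hypothetical.

def mutate(movement, rate=0.1):
    return [m + random.gauss(0, rate) for m in movement]

def fitness(movement, sound_feature):
    # Higher when the movement profile tracks the sound's contour.
    return -sum((m - s) ** 2 for m, s in zip(movement, sound_feature))

def evolve(movement, sound_feature, generations=50):
    for _ in range(generations):
        child = mutate(movement)
        if fitness(child, sound_feature) > fitness(movement, sound_feature):
            movement = child        # keep the better mover (1+1 ES)
    return movement

contour = [0.0, 0.5, 1.0, 0.5]      # stand-in for a vocal pitch contour
best = evolve([0.0, 0.0, 0.0, 0.0], contour)
```

Because only improving mutations are kept, the creature’s movements drift toward the shape of the person’s sound, which in turn invites new vocalizations—the self-sustaining loop the paper describes.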
What does it mean to be undead? And how does that meaning differ from being alive? Focusing on George A. Romero’s numerous zombie films, in addition to Danny Boyle’s 28 Days Later, this paper seeks to address, but certainly not answer, those two complex questions. In so doing, my central claim is this: Zombies might be considered code – unforgiving, digital bodies that emerge, network, and swarm; render their previous instantiations obsolete; and, perhaps most importantly, highlight the interdependency between information and the material. Not only do films such as Land of the Dead and 28 Days Later give us new and improved zombies, who now move quickly and learn by tutelage, imitation, and social referencing, but they also capture coding practices that simultaneously resist and reify the actual socio-political and economic conditions of decentralized, capitalist markets. That is, zombies, as code, are discursive formations laid bare. While they do not speak or write, per se, they no doubt connect and compile. Too, they spread; they replicate. As viral threats, zombies both possess bodies and consume bodies. The primary aim of this paper, then, is to analyze and unpack the complex intermediations between zombies, consumption, (re)inscription, and possessive individualism. And by drawing upon the work of Richard Doyle, N. Katherine Hayles, and Steven Shaviro, I ultimately argue that a better understanding of the undead, or the digital that remains alive after the analog, enriches articulations of how code functions in societies of control.
The American artist Joseph Cornell (1903–1972) is best known for his assemblage boxes that range in subject from forgotten ballerinas to cutting-edge discoveries in astronomy. In the last decade of his life, however, he abandoned the box format in favor of flat collages made from materials cut from magazines and books. Many of these refer to aspects of the cosmos by using coded materials—obscure charts clipped from children’s books about math and science, pages taken from Arizona Highways, postage stamps, and pictures appropriated from old issues of Scientific American.
This paper will examine the ways in which Cornell used mathematical and scientific diagrams as a coded poetic language in his late collages. Special attention will be paid to his interest in projective geometry, the fourth dimension, and the curvature of space, and to their expression by means of unexpected images taken from popular culture. As we shall see, the visual codes used in the late collages have their roots in Cornell’s early engagement with the science of astronomy and the metaphors of the cosmic space and time.
While in the early years of the century the highly popular spatial fourth dimension had stimulated a wide variety of modern artists, after 1919 the popularization of Einstein redefined the fourth dimension as time in the four-dimensional space-time continuum of Relativity Theory. For American Cubist painter Stuart Davis in the 1930s-1950s the model of the space-time continuum as a “block universe” offered vital support for his goal of a painting style free of the accidents of individual perception—i.e., what he came to term “The Amazing Continuity.” Duchamp, who served with Davis as a consulting editor for the journal Trans/formation in 1950-52, had been the early 20th-century artist most deeply engaged with four-dimensional geometry. During subsequent decades, however, his motion-oriented works came to be interpreted in relation to Einsteinian space-time, and he waited until 1966 to release his extensive 1910s musings on four-dimensional geometry in his White Box. Smithson in an unpublished 1962 essay railed against Einstein and Duchamp’s kinetic art, but he subsequently discovered the White Box notes and proceeded to explore the spatial fourth dimension, including its longstanding association with symmetry, mirrors, and spirals—all themes in Duchamp’s works as well. Although these three artists are rarely discussed together, their juxtaposition points up the important contribution of mathematics to mid-20th century art and reveals previously unsuspected commonalities among the three.
We live at a time when the body has gone post-human, with BODY WORLDS taking cadaver research into the business of plastics while genetics and neuroscience monopolize the attention of medical students, leaving anatomy classes to be taught online. As our scientists focus less on haptic, hands-on dissection, our youth-oriented culture pays ever-higher prices for both surgical and non-surgical enhancement of body parts. Image more than meat becomes the basis of constructing new bodies, whether on screen or off. For the artist engaging in screen culture, the advent of gaming, SECOND LIFE and the dominance of “theory” in art schools signal a turn to the social, political and ethical profiling of bodies in virtual space. The artist’s eye, in other words, views the body as cultural code, a storehouse of information that can be transferred between biological and non-biological intelligence systems.
In this paper, I will discuss the founding of The George Greenstein Institute for the Advancement of Somatic Arts and Science, a new educational institute dedicated to promoting in-depth conversation and exploration of the body as cultural code, or, to be apt, a system of codes, especially the codes that signify the evolving body in a changing biosphere. The goal of the Institute is to inspire and foster, through whole-brain, trans-disciplinary education in the fields of somatics, biotechnology and neuro-aesthetics, the non-suffering of all sentient beings, and to encourage pioneering study of alternative life-support systems of human wellness and energy flow.
In her essay “Plays” Gertrude Stein takes up what she calls the problem of emotional syncopation at the theater: “your emotion as a member of the audience is never going on at the same time as the action of the play.” Stein describes some kinds of drama that do not have this problem, including melodrama (especially plays about telegraph operators) and her own landscape plays. But she worries: “And is it a mistake that that is what theatre is or is it not.” This paper locates Stein’s meditations on the audiovisual and emotional dynamics of an audience at a play in the environment of the emergence of those technologies for decomposing and recomposing images, the scanning technologies that are the basis for live television. I propose that television be understood as offering the technical means of giving its audience the strangely intimate experiences of face and voice, the primary physiological mediums of affective communication. This paper will read Stein’s essay, alongside her other writing of the early- and mid-thirties, as it addresses the nascent transformation of theatrical experience, linking it to a longer tradition of writing on theater, emotion, and politics (especially in Rousseau).
At the turn of the twenty-first century, stories of conjoined twins undergoing multiple surgeries for separation surfaced in conjunction with advances in technologies that make such separation more possible. The story of Carl and Clarence Aguirre from the Philippines, for example, stayed in the headlines from late 2003 through the middle of 2004 when doctors successfully completed the final separation surgery. Darin Strauss’s novel Chang and Eng (2000) features the original Siamese twins in a fictionalized account of their lives in North Carolina. Karen Tei Yamashita’s story “Siamese Twins and Mongoloids” (2000) features fictional Asian American twins in a satire of Asian American cultural politics. The Polish brothers’ film Twin Falls Idaho (1999) presents conjoined twins Blake and Francis Falls as musical performers. What is it about this fin-de-siècle moment that makes the story of conjoined twins so fascinating for newsmakers, artists, and audiences alike?
This paper explores anxieties of the self’s sovereignty in the figure of the conjoined twin and medical technology’s role in reshaping bodies to assuage such anxieties. While the lives of conjoined twins are difficult for physical as well as social reasons, the preoccupation of an American imaginary with these twinned bodies’ cleaving suggests a need to resolve a binary system of selfhood into a unitary one. The twinned self troubles narratives of individual dreams and romance; most of the fictional representations of conjoined twins speculate on the mechanics and prurience of sex lives for such twins.
Code, the action language of technology, can be interpreted as an ideology. If, historically and traditionally, technological progress has been rooted in heterosexist discourse, are all bodies bound to heterosexual control and ideology? If not, how do marginalized bodies react to/resist these power paradigms and reconfigure them? Or, is there a subcultural technology—a subcultural code—that offers empowering, subversive communicative structures and processes to all bodies, producing a freedom that exists as fact?
Throughout the history of linguistics, a history of homosexually coded “languages” exists: from Polari in the UK to Gail and IsiNgqumo in South Africa. They are action languages that help create queer formations and identities just as computer code is an action language that forms what it is running. Importantly, however, these gay languages implement the closet: although they create community, they also hide identities from the public, just as computer code operates ideologically as false freedom.
This paper will examine the potential for the formation of a queer computing “anti-language.” If, as Katherine Hayles writes, “language alone is no longer the distinctive characteristic of technologically developed societies; rather, it is language plus code,” how can the queer community learn from the coded languages of its past to create a new technological “anti-language”? Attempts to formulate a queer programming code point to the urgency of carving out a queer freedom in hi-tech culture and of providing the queer community with discursive/practical tools for activism, resistance, and communication.
The ‘Bisociation Engine’ (bEngine) is a collaborative, interdisciplinary project that attempts to computationally model specific aspects of human creativity. Rather than employing top-down processes such as propositional logic, the bEngine takes a generative approach that begins with the recognition of micro-level semantic, linguistic & structural associations between lexical items, then recursively assembles these into larger units of meaning. Arthur Koestler first coined the term ‘bisociation’ to distinguish between ‘routine thinking’ which occurs on a single plane, and ‘the creative act’ which, he states, “always operates on more than one plane.” This paper presents a range of attempts by the authors to reverse-engineer ‘bisociative’ creative thought in software. More specifically it addresses difficulties and potential solutions for translating these complex parallel processes into code.
A particular focus of the ‘Bisociation Engine’ project thus far has been the human capacity for association, specifically between disparate areas of experience. An initial output of this research is the generative installation entitled ‘the Architecture of Association’ (AoA) [prototype sketch here: http://mrl.nyu.edu/~dhowe/video/aoa.swf]. Implemented as a sculptural grouping of 100+ suspended LCD screens, the AoA draws associative links between elements of a large multimedia database containing text, images, and video from the history of computation. bEngine algorithms are employed to ‘intelligently’ recognize semantic, linguistic and structural relationships between these database elements in real-time. These relationships (and their relative strengths) are used to situate media items in physical/architectural space, creating an evolving recombinant collage rich in associative potential.
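The recursive pairing the authors describe (scoring micro-level associations between lexical items, then assembling the strongest into larger units) might be sketched roughly as follows. This is an illustrative stand-in, not the bEngine’s actual algorithms: the character-bigram overlap measure and all names are assumptions introduced for the example.

```python
# Toy sketch of a single "bisociative" step: items from two planes of
# experience are compared by a crude structural association measure, and
# each item is paired with its strongest cross-plane associate.  The
# bigram-overlap measure is a placeholder for the semantic, linguistic
# and structural recognizers described in the abstract.

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of character bigrams: a toy structural association."""
    grams = lambda w: {w[i:i + 2] for i in range(len(w) - 1)}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def bisociate(plane_a: list[str], plane_b: list[str]) -> list[tuple[str, str, float]]:
    """Pair each item on one plane with its strongest associate on the other."""
    return [(a, *max(((b, similarity(a, b)) for b in plane_b),
                     key=lambda pair: pair[1]))
            for a in plane_a]
```

A fuller engine would then recurse, treating each strong pair as a new composite lexical item and re-running the association pass, so that micro-level links assemble into larger units of meaning.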
MediaWiki, the software platform that has enabled the Wikipedia project, is Free Software licensed under the GPL’s copyleft provisions. The principles behind the Free Software movement have animated many of the concerns of the Wikimedia Foundation and its founder, Jimbo Wales.
Not much has been written about the role of Wikipedia in creating dialogue between different language communities. This process of intermediation has, like the overall project, grown dramatically in the last three years. Currently the Wikipedia project covers some 250 different languages.
This presentation will examine how the Wikipedia has affected the interaction between different language communities and will address the degree to which open access models such as Wikipedia have facilitated or resisted neo-liberal forms of globalization.
These panels will examine the contemporary state of Gaia theory discourse from two primary angles. The first panel will investigate theoretical developments in Gaian science: its links to systems science, its status in the mainstream geoscientific academy, and its contributions to the climate-change debate. The second panel will put Gaia theory into wider cultural perspective, by drawing out its rhetorical resources in several millennia of Western literature and science, and by marking its incentive for creative artistic responses. We hope to underscore the vitality of Gaian science—the challenges it poses and encounters at the cutting edge of our complex posthuman nonmodernity.
From its initial description as a global homeostatic control system to its elaboration in several generations of computer models called Daisyworld, James Lovelock’s Gaia concept largely adheres to the control-engineering and computer-science paradigm of first-order cybernetics. Here I will focus on the presentations of Gaia theory by Lovelock’s stateside collaborator Lynn Margulis. With the early embrace of the Gaia hypothesis by Stewart Brand and the CoEvolution Quarterly, American Gaia discourse was cultivated in a “counter-cultural” milieu in contact with Heinz von Foerster’s Biological Computer Lab, just as he was promoting an important shift in systems thinking to second-order cybernetics, for which circular processes such as feedback cycles are not merely instrumental for the system, but constitutive of the system. Concurrently within that same milieu, Humberto Maturana and Francisco Varela were publishing their first delineations of autopoiesis as a form of constitutive self-referential recursion in biological systems. Erich Jantsch’s The Self-Organizing Universe, published in 1980, synthesized the concept of autopoiesis with the work of both Ilya Prigogine and Lynn Margulis. Starting with their 1986 volume Microcosmos and continuing in the ’90s with What is Life? and Symbiotic Planet, Margulis and co-author Dorion Sagan endorse this configuration when they present Gaia as “the autopoietic planet.” This paper will examine and evaluate Margulis’s own extensions of autopoiesis to Gaia theory.
The Gaia Hypothesis, Gaia Theory and Gaian Science are widely accepted by the general public and by many biologists and some atmospheric scientists. However, a review of recent earth science textbooks and a computerized whole-text search of journal articles shows that Gaian concepts are very rarely accepted in the physical sciences: geochemistry, chemical oceanography, and historical geology. The mythological name “Gaia” may be the main obstacle to acceptance in the physical sciences, ironically, even in Geology, which is named for her. In addition, in some cases physical scientists have simply not noticed any feedback in life’s involvement in controlling the earth’s surface, for example my teachers Alex K. Baird, who worked with Lovelock on the Mars landings, and Robert C. Reynolds, who proved the constancy of sea salinity. Other problems for earth scientists include the style of modeling used. Gaian science has allied itself with forms of systems science which are analogical, not deterministic, especially Daisyworld. Earth scientists demand that models reproduce real earth behavior and realistic parameters. Perhaps most of all, there are many useful, simple, physical models which do not require Gaia, especially the fact that material recycling systems are very stable, for example the sodium cycle. However, these recycling models are in no way a refutation of Gaian Science, because material recycling may well form important parts of larger homeostatic mechanisms.
James Lovelock’s latest work, The Revenge of Gaia, has galvanized scientific and environmental communities with its prognosis of an impending ecological-climatic crisis. Lovelock argues that unchecked “global heating” (as he calls it) will kill billions of people and cause the collapse of civilization. While Lovelock’s forecast of a coming Hell-realm is at the extreme end of climate change predictions, Gaian (or Earth system science) concepts of nonlinearity, dangerous positive feedback loops, and irreversible tipping points are firmly entrenched in climate change science and discourse. I argue that such systemic thinking has contributed to framing climate change as “the problem” of our time. I take issue with this framing: it detracts attention from other facets of Earth’s ecological predicament; and it confines proffered solutions to the technical realm, by powerfully insinuating that the needed approaches are those which directly solve “the problem.” I submit that the tenor of much climate change discourse evinces features of what might be called the “apocalyptic paradigm”: the apocalypse is always scheduled to arrive in the future; it is pictured as a single monumental catastrophe; and what is at stake is nothing less ultimate than survival and the order of things. I argue that the apocalyptic paradigm is wrong-headed, because it averts attention from the condition of the biosphere in the present, and it conceals the fact that, more often than not, ecological catastrophes are neither cataclysmic in form nor (necessarily) a threat to human survival.
In Ulysses James Joyce engages in a literary investigation of death in the hope of escaping the double bind of passive suffering created by the two opposed existential responses to death, namely metaphysics and empiricism. The metaphysical response to “the unspeakability” of death yokes the finality of life to a cosmology that depends on a teleological system of good and evil. The empirical discourse opposes the metaphysical (or theological) idea of death and posits death as a unique event defined by its variable contexts and manifold causes. In Joyce’s view, this metaphysical and empirical binary forms a linguistic code that relegates the individual to an existence of passive suffering. Only by creating a new linguistic code for existence can the individual “work beyond suffering” to a new understanding of death.
My paper charts Joyce’s development of a new linguistic code of sacred “roguewords” that can create an active response to death independent from the “monkwords” of organized religion or the “useless words” of absolute materiality. I utilize the writings of Emmanuel Levinas, Jean-Luc Marion, and Roman Jakobson to analyze how Joyce creates a “saturated phenomenon” that rejects the dominance of subjectivity and thus leaves open a discursive space receptive to the building of new linguistic codes. Joyce uses linguistic experimentation to create a “saturated” text capable of expressing a radical openness to the uncertainty of death that allows it a sacred, but not divine, position.
The transoceanic turn of European culture in the sixteenth century promoted a new sense of the word “compass,” which changed from a verb meaning to measure or enclose to a noun referring to the instrument that points toward magnetic north. As a tool for navigating the deep sea, the mariner’s compass represented a new kind of code that relied on partial knowledge of a partially knowable world. Unlike the complete divine plan to which mariners appealed in Biblical and medieval sea-stories from Jonah to Chaucer’s “Man of Law’s Tale,” the compass provided early modern mariners with direction but not location, thus destabilizing their ways of de-coding their world. In The Faerie Queene (1596), Spenser uses the word “compass” to explore the challenges of orientation that oceanic navigation entailed. Spenser’s multiple compasses mean many things, including to accomplish, to inscribe a full circle, to measure, and the mariner’s compass. The poem starkly juxtaposes two strains of compass-meaning: a semi-theological sense that emphasizes completion, and the more modern reference to the mariner’s instrument. These meanings compete as ways to counter oceanic disorientation; fullness of meaning and knowledge of direction construct rival ways of ordering Faery Land. Orientation may be visionary (the Graces on Mount Acidale dance “in compasse stemme”) or it may emerge from technical navigation. The compass-code thus allows Spenser to negotiate the relationship between emerging empirical and ancient theological understandings of the world.
In 1961 a computer at Bell Telephone Laboratories programmed by punch cards sang “Daisy Bell,” inspiring a visiting Arthur C. Clarke to suggest its use in the now famous “death scene” of Hal in Kubrick’s 2001: A Space Odyssey. Significantly, that very same IBM, in an effort to display its capacity for synthesized human speech, recited excerpts from the soliloquy of an even more infamous and arguably more impressive artificial intelligence machine—Hamlet. So convincing a performance of consciousness fools the best of us, perhaps even into believing that we are witnessing “The Invention of the Human.” Heralded as an ideal humanist model of subjectivity, Hamlet, sans digitized voice and blinking LEDs, has kept us asking, “Who’s there?” One deceptively simple question begins the play containing arguably the most over-analyzed, overwrought and overdetermined character in literary history. But what keeps us responding to the “knock, knock” on the door of Hamlet’s psychology? Perhaps that is really “the question.” One possible answer is that Hamlet’s intelligence is an excessively manufactured profusion of language that takes over the audience, the plot, and the very code written for him. Just as in Hamlet’s “artificial” counterpart, “thinking” is a malfunction that creates disruptive autonomy. The struggle to find the ghost in the machine of Shakespeare’s most lauded persona continues to neglect one crucial problem—Hamlet fails to be a character. I will suggest that Hamlet is instead constituted by a complex and labyrinthine tangle of “thought experiments” that, when executed, runs a counter program of identity.
Any discussion of code and literature less concerned with the word’s more technical or scientific valences would perhaps find its most general definition—“A system or collection of rules or regulations on any subject” (OED)—more applicable in describing the generic, narrative, or poetic “codes” that regulate the customs of literary practice. This panel, however, seeks to coordinate between a broadly applicable use of “code” (as a system of rules) and its other more technical meanings, especially in literature that is not decidedly technical or scientific. The logic of this attempted coordination will test the idea that code’s more limited and specific uses in relation to, for example, cybernetics, computers, and military secrecy, might assist in understanding how codes operate in literature written in eras before code became popularized as a technically-oriented term.
It is sometimes forgotten that avant gardism, as a concept that describes the insular collaboration that helped produce “high” modernist innovation, was originally a militaristic phrase. Until it was adopted by nineteenth-century French artists, its sole meaning was very specific and simple: it named “the foremost part of an army” (OED). The term was appropriate for artists because they perceived in the military avant garde a unit representing something elite, cohesive, passionate, and perhaps even a little insane. Their deaths are more certain, yet glory awaits those who are most willing to face the enemy. For avant garde artists, the enemy is convention, tradition, mass culture, the bourgeoisie; the battle cry is “to make it new.” Bearing the phrase’s martial past in mind, it should be considered no coincidence that so many avant garde movements sympathized with the politics of military fascism.
One may suggest, then, that the avant garde follows a military-like code—if “code” is defined broadly here as a set of aggressive practices and poses. Yet, there is another important and less discussed aspect of how avant gardism might be “coded.” A code also defines a system of signals and substitutions used to protect secret communications. Militaries, again, have long had the most use for this kind of code. I will argue that modernist avant gardism (using F.T. Marinetti’s work as my primary example) literally encodes the “high” art it produces in order to isolate itself from contamination, and more importantly, that this model for creating art is essentially militaristic.
Prior to World War I, artists and individuals with political and social concerns were engaged in the formulation of movements. Like-minded people came together to discuss ideas. Fruitful discourse often led to better articulated ideas, ideas which were then expressed in new creative works. But the invocation “to make it new” may have been not the result of a collective surge in creative powers but the last available maneuver for groups entrenched in a stifling atmosphere of competition. Incensed by the dynamism of Marinetti and Futurism, British Vorticists sought a dynamism of their own with which to respond to, critique, and combat the Futurists.
In the race that became a war to discover and define (and then defend) the new century’s art form, the power players were those who aggressively marketed their own code. If the Vorticists came late to this recognition, they nevertheless came in time to proclaim the arrival of not just the new, but the better new. In order to advertise the qualities of this improvement on the new, the Vorticist movement adopted the militant confrontational aesthetics of Futurism and added a stance of protective secrecy more common to non-confrontational outfits. With the vortex as their defining symbol, the Vorticists express the immovability of energy when it self-consciously seeks to be both movement and the cause of movement. I will argue that at the limit of expression, a desire for the new can only result in this kind of frenzied stasis.
Scholars have noted the cultivation of a math aesthetic in the work of modernist male writers such as Ezra Pound, W.B. Yeats, James Joyce, and Samuel Beckett. Alongside this admiration for mathematics is an equally manifest fear of numbers, emerging particularly from the margins of literary modernism. My paper focuses on Elmer Rice’s The Adding Machine (1923) and Yevgeny Zamyatin’s science fiction novel We (1924) as illustrative of a kind of ‘math anxiety’ taking form in the 1920s and 30s. I draw on a range of extra-literary materials, including popular science and pulp magazines, as well as self-help manuals, to trace these popular perceptions of the growing ‘mathematization’ of culture in the early 20th century. Ultimately, I aim to contrast this (masculine) fear of numbers with the (largely overlooked) critical responses of modernist women writers such as Gertrude Stein, H.D., Marianne Moore and Edna St. Vincent Millay to various aspects of mathematics.
In this panel we will challenge the idea that codes are merely representational. Rather than imagine a stable origin from which code departs (and to which it can return) we will offer instead a view of code as a continuation of other biopolitical processes (discipline and docility, surveillance, control, surplus-value extraction), such that “encodings” or “decodings” might instead be described as ruptures, threshold crossings, bifurcations, dynamic shifts in the movements of matter or life itself. This perspective bypasses questions of meaning and instead focuses on what code does, what arrangements it catalyzes, facilitates, or makes possible, what capacities it opens up, what uncertainties it invests in.
In this paper I will explore how coded digital audio files participate in the extraction of surplus value from life and other organizations of matter. While it is true that digital audio files, like digital files in general, are composed of zeroes and ones, the production of digital audio files has brought together a particular concatenation of information, sound, bodily matter, techniques of measurement, and financial investment—dynamic and turbulent, yet subtle and detailed. Despite its decidedly unclear ends, this concatenation is not abstract. It has developed materially, and it is from this materiality that I will approach what I see as the emerging cultural economy of code, which relies less on meaning and interpretation than it does on actual physical transformation. Specifically I will focus on the physical transformation that occurs when analog signals are digitized (and vice versa), exploring how these transformations participate in processes of measure that are essential to extracting surplus value from matter.
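The transformation at the moment of digitization, in which a continuous signal becomes a series of discrete, measured quantities, can be made concrete in a small sketch. The sample rate, bit depth, and test tone below are arbitrary illustrative choices, not values from the paper.

```python
# Illustrative analog-to-digital sketch: sampling (discrete time steps)
# followed by quantization (finitely many signed integer levels), the two
# acts of measurement that turn a continuous signal into zeroes and ones.
import math

SAMPLE_RATE = 8_000   # samples per second (arbitrary choice)
BIT_DEPTH = 16        # bits per sample, as in CD-style PCM audio

def digitize(freq_hz: float, seconds: float) -> list[int]:
    """Sample a sine tone and quantize each sample to a signed integer."""
    max_level = 2 ** (BIT_DEPTH - 1) - 1    # 32767 for 16-bit samples
    n = int(SAMPLE_RATE * seconds)
    return [round(max_level * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE))
            for t in range(n)]

samples = digitize(440.0, 0.01)   # 10 ms of an A440 tone: 80 integers
```

The rounding in each sample is the measured, lossy translation at issue: every continuously varying value is forced onto one of 65,536 fixed levels, and only those levels are stored, sold, and circulated.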
In humanities-based critiques of digital media, three modes of approach dominate: representation, sociality, and epistemologies of bodies at/in/across the screen. Constructions of integrated digital networking as some sort of descendant of film, the novel, the microscope, the café, the mall, and others proliferate (simulation, consumption, ideology). Furthermore, literary approaches to digitality stress textuality and poetics, but again in terms of a representational schema bound most stringently by genre theories (“Is New Media ‘New’?” or The Language of New Media). The oldest strain, the cyborg, has become representations of coded bodies wherein the status of the human persists as a given.
These genealogies operate from a shared threshold characterized alternatively as surfaces and folds, simulation and the real, or subjectivities and objects—all attempts to explain the materiality of digital “skin.” Skin is the perceptual materiality engaged by the user: screen, game scenario, simply, the wysiwyg of the interface. Critical work addressing the skin tends to pose representational, social, and interactive questions about content-in-format. In this mode, code is read as “normal view” against the “hidden” source code that “reveals” representational schema—numerically simulated, transparent and transportable. If skins are revelatory, we might think about the material “dissimulation” of code, non-representational “text,” and the irrelevance of manifest content and form or content-in-form. I will offer alternative theoretical approaches to digital media through anatomies of cross-medial, code-based alternative reality gaming and its corporate clone, viral marketing, both of which, as second-level movements, subtend other more regularized digital and medial productions (Halo, Lost, Heroes, Lonelygirl15, Windows Vista).
The field of bioethics has understood itself as a method for bringing medical treatment and research in line with shared societal values, including values about the extent and limits of individual rights and autonomy over one’s own body, health, and life as well as the place of individual needs vis-à-vis the greater good of society. When bioethics enters the terrain of psychiatric treatment, it confronts the contradictions of working with a population that is de facto understood (in terms of being designated “mentally ill”) as incapable of recognizing its own needs and acting to its own benefit; psychiatric bioethics grapples with making health choices for those deemed lacking adequate decisional capacities. Critics of bioethical enterprises have taken up bioethics as a form of ideology that supports normative conceptions of the body, health, and “the good life.” Critics have furthermore examined bioethics as embedded within a capitalist context in which access to health resources and the life chances they make possible are unevenly distributed and in which great opportunities for profit-making exist in research and development programs that depend upon the participation of human subjects in sometimes risky experiments and trials.
In this paper, I suggest that bioethics be taken up as neither an ethical nor an ideological project, but rather a material practice of coding. By this I intend the notion that professional ethics exists as a coding operation that opens up, closes down, or redirects channels through which research programs, subjects/patients, funding, and treatment options flow. This is to think of bioethics as a technique of affective coding, in which “affective” signals the capacity to affect or be affected. I argue that the affective coding of bioethics enacts processes of informationalization, in which data about things such as health risks and population targets can be set in relation to legal constraints and capital investment. I pursue these questions in terms of bioethical debates around homelessness and mental health treatment and research. I argue that this case makes clear how bioethics exists neither to ensure that practices match social values, nor to interpellate individuals into ideological systems, but rather to arrange, relate and distribute information that is best thought of, to follow Ian Hacking, in terms of representation/intervention. Bioethics functions to draw a population often understood as outside the proper bounds of humanity into a milieu of ethical intervention. In other words, bioethical coding serves to informationalize, or in-form, homelessness as a population available for various techniques of intervention, including psychiatric research, treatment and management.
Continuing last year’s successful sessions “SLSA Creative Writers Read,” novelists, short story writers, poets, and artists of SLSA will read from and discuss their works in two affiliated sessions. As last year, we hope to initiate a discussion about how the creation of new literature and art can help people appreciate the complex relationship between literature and science.
Wayne Miller will read from his work in progress, The Bog Monster of Booker Creek, with themes of scientific vs. experiential knowledge, aging and remembering, and the angst of feeling out of place (on earth and elsewhere).
Sue Hagedorn and Cheryl Ruggiero will read from current work set in their science fiction Catalyst Trilogy universe.
Janine DeBaise will explore, through prose and poetry, her connection to place, most specifically rivers, lakes and marshes.
This “seminar” will focus on the nexus of life, movement, technics and phenomenology in relation to contemporary technoscience. The discussion will be guided by and will seek to address from different perspectives the tension between the macroorganization of the (human) organism and the pressure to de-organize that comes with increasing technical capacities for manipulation at the molecular and genetic levels. Each speaker has agreed to “represent” a different position on the spectrum of this “tension” and the discussion will aim to bring out important interfaces and divergences among various contemporary positions on the “integrality” of (human) life. Bodies of work to be addressed include recent discussions of biopolitics in the wake of Foucault and Agamben (Toni Negri, Eric Alliez), work on the nexus of phenomenology and life (Husserl, Jan Patocka, Renaud Barbaras), and contemporary developmental systems theory (Susan Oyama, Mary Jane West-Eberhard, Griffiths and Gray). Audience members are encouraged to add to this corpus. Speakers will briefly present their papers/research highlights (5 minutes each) so that the majority of the session can be devoted to a debate among them and a wider discussion involving the audience. The point of the seminar is to work through some crucial concepts in the philosophy of life and its contemporary interface with technics.
This paper will examine the concept of vitalism in the work of Gilles Deleuze. By focusing on Deleuze’s engagement with Duns Scotus and Spinoza, a concept of vitalism will be seen to be inseparable from a concept of immanence. In contrast to readings of Deleuze which emphasize the role of biology, this paper will argue for an understanding of vitalism in Deleuze as deeply informed by theology.
This presentation will explore the integrity of (human) life from the phenomenological perspective, and specifically through the attempt of Czech phenomenologist, Jan Patocka, to develop a phenomenology of living movement that would treat life as an “existential” structure of human Dasein. The presentation will focus on the compatibility of this effort with contemporary technoscience, and in particular, with those recent developments that presage the molecular dissection of embodied human agency.
Developmental Systems Theory or DST allows us to think both the autonomy and embeddedness of what I call “bodies politic” as a way of thinking humans as occupying the intersection of biological and cultural domains. Bodies politic are not mere input / output machines passively patterned by their environment (that way lies a discredited social constructivism) or passively programmed by their genes (an equally discredited genetic determinism). To help us here, I turn to the notions of developmental plasticity and environmental co-constitution found in Mary Jane West-Eberhard’s Developmental Plasticity and Evolution (Oxford, 2003). That the development of bodies politic is “plastic” and “co-constituted” with its environment means that it is not the simple working out of a disciplinary program (just as organisms are not workings out of a genetic program), but involves a range of response capacities depending on the developing system’s exposure to different environmental factors, just as those responses feed back to change the environment. The notion of developmental plasticity displaces gene-centric notions of programmed development just as organism-environment co-constitution displaces notions of gene-centric natural selection in favor of a notion of multiple levels of selection. We cannot enter the details of the controversy surrounding the notion of multiple levels of selection here, but it seems a most interesting way of thinking of evolution would be to think the “modular sub-units” involved in phenotypic plasticity proposed by West-Eberhard 2003 (above the gene, but below the level of the organism) along with the notion of the reliable repetition of co-constructed organism-environment niches or “life-cycles” (including parental and social environments, thus above the level of the organism) proposed by adherents of Developmental Systems Theory or DST (Oyama 2000; Oyama, Griffiths, Gray 2001). 
This would take us below and above the level of the organism, below to the embodied unconscious physiology and above to eco-social embeddedness, just as the study of bodies politic takes us below and above the level of the subject. We cannot enter the details of the relation of developmental evolutionary biology (West-Eberhard’s term, which she prefers to the usual evolutionary developmental biology, or “evo-devo”) and DST (see Robert, Hall, and Olson 2001; Griffiths and Gray 2005), but we can at least note that the key notion of DST is that the unit of selection in evolution is the life cycle of the organism in its eco-social embeddedness; in other words, DST’s notion of the life cycle is also a thought of the assemblage, a biological complement to the embeddedness of situated cognition (Griffiths and Stotz 2000).
The British documentary Animal Tragic (2003) presents the stories of several humans who are in the process of transforming themselves to appear more like ‘animals’ through a variety of surgical procedures, multiple piercings, tattoos and body art, and the use of prosthetic devices. In the documentary, Leopard Man (a self-declared recluse from Scotland), Lizard Man (body artist, Los Angeles), Katzen (performer, Texas) and CatMan (computer programmer, San Diego) discuss their various motivations for these radical forms of shape shifting. Using excerpts from Animal Tragic, I will analyze the extraordinary metamorphoses undergone by Lizard Man, Katzen and CatMan. For example, Lizard Man discusses his transformation in terms of the aesthetic appeal of reptiles, while Katzen desires to incorporate a more feline ‘sensuality,’ and CatMan (who is of Native American descent) understands his more extreme process of ‘becoming-cat’ (which involves the daily insertion of 40 whiskers, cat dentures, application of an electronic prosthetic tail, and, more recently, a fur transplant) in terms of achieving a closer connection to his totem animal and becoming a hybrid human-cat being. These various perspectives and experiences will be examined with recourse to postmodern theories of human-animal relations, in a way that introduces notions of animality into existing discourses about human embodiment.
One of the more famous recent attempts to extend and refine Aristotle’s concept of the political animal is Martha Nussbaum’s in the recently published Frontiers of Justice, which develops Aristotle’s concept of “flourishing” into what she calls a “capabilities approach” to the ethical status of non-human animals. Interestingly enough, Nussbaum attempts to zero in on exactly the same issue that Derrida focuses on in his work on “the question of the animal”—namely, how the shared finitude of humans and animals (rather than rationality or even sentience) forms the basis for a shared ethical relation.
There are several problems with Nussbaum’s extension, however. First, as Cora Diamond’s work helps us to see, Nussbaum mistakes the “difficulty of philosophy” (a merely propositional difficulty) for “the difficulty of reality” (what Stanley Cavell would connote by the term “skepticism”)—a difficulty that is evaded or “deflected” by thinking that it can be solved by ever more technical syllogistic maneuverings. Instead, the real challenge (to use Cavell’s phrase) is facing the implications for philosophy of what it means to “let our knowledge come to an end.” Diamond finds that challenge bodied forth in J.M. Coetzee’s The Lives of Animals, specifically within that volume in the difference between how the main character, Elizabeth Costello, responds to the animal “holocaust” going on around us and how the responses by the professional philosophers at the end of the book attempt (unwittingly) to domesticate it. (This difference points, in turn, to a larger one I hope to pursue: the difference between literature and philosophy for confronting our relations to non-human animals, and how that difference is handled in the figures discussed here.)
This limitation in Nussbaum’s work (what philosophy is) shows up quite conspicuously (and this is my second point) in her idea of what a concept is, and is also manifest in the tacitly presupposed subject of knowledge that her theory methodologically presumes and reproduces. To put it another way, Nussbaum thematically (or constatively, if you like) argues that reason and rationality are not to be seen as instituting an ontological and finally ethical divide between human and non-human animals, but methodologically (or performatively, if you like) her work reproduces this very ideal. What is needed here is therefore a rigorous confrontation with the relationship between “concepts” and language—a relationship that has been treated much more attentively within liberalism and analytical philosophy by a host of figures, including Rorty, Fish, Diamond, and Davidson, just to name a few, and from without by figures such as Derrida.
In the absence of confronting this problem, there is no way, however well intentioned one’s thinking may be, to avoid reinstituting (to borrow now from the editors’ proposal) the active/human and passive/animal doublet, and thus sustaining “a collective ‘we’ in the name of whom violence is exercised.” It is this doublet, of course, that is unsettled (within the analytical tradition) by Cavell and Diamond’s confrontation with skepticism and (within poststructuralist philosophy) by Derrida’s contention that the human suffers a radical passivity in the face of the exteriority and trace-structure of language itself, a passivity which no “concept” can master.
The setting up of a Royal Commission to examine the question of the use of live animals in experiments in June 1875, and the subsequent publication of that report and the ratification of the Cruelty to Animals Act in 1876, marked an important shift in the ways cross-species relations were encoded in England. The establishment of a scheme of registration, licensing and inspection was an attempt to systematise the treatment of animals in science, whilst also removing science from the public view. Public vivisection lectures were banned, as were the use of animal experiments in medical lectures, except to illustrate the use of anaesthetic. Codes of conduct towards the animal in the laboratory were reconfigured. At the same time stereotypical codes of class, gender, profession, human and animal identity, amongst others, underpinned the debate over the rights and wrongs of vivisection and shaped different targets of attack. This was a prominent feature of the vivisection literature in public journals, which was especially extensive in the decade from 1875. These codes of identity were themselves, in principle at least, strongly determinant of social codes of conduct. This paper examines the phenomenon of vivisection at this period as a particular site of inter- and intra-species codes of conduct in transition. Drawing on government papers, as well as scientific illustration, painting, and literature, from the key period of the mid-1870s to the mid-1880s, it examines vivisection in terms of postures of interaction and problems of conduct. This is slightly different from standard historical accounts of vivisection which tend to examine it in terms of the conflicts of social history. In this instance, the practice can also be seen as illustrating the tension between linguistic and non-linguistic codes in the formation of human-animal relations.
Much has been made of the hegemony of the visual paradigm in Western culture and its culpability in engendering the ‘enframed’ nature of experience articulated by Martin Heidegger. The work of David Michael Levin, Martin Jay and others has been important in raising questions concerning a pervasive ocularcentrism. There are those, like Stephen Houlgate and Levin himself, who suggest that in fact it isn’t the visual that is at fault, but the model of a particular kind of stultifying gaze, which is by no means the only possible paradigm, the only possible way of ‘looking.’
Stepping up to champion Wittgenstein’s famous call to arms, ‘Look! Don’t think,’ this paper highlights the contributions of Zippy the Pinhead. The comic character written by Bill Griffith paradoxically embraces the ‘box’ in which he finds himself, while at the same time breaking it wide open. Zippy asks us to see what we have, due to our short-handed approach to experience, habitually ignored, and to step outside of the boxes of our expectations. The comic character brings the columns of our eithers and ors into high relief by pointing to the comic as one example of encoded experience. When Zippy argues with the opposing forces of the universe, we stop in our tracks, as when we confront Heidegger’s very particular use of language. Griffith’s ‘meta-comic’ undoes the nature of the comic in general, and we, as readers, can’t help but become aware of and succumb to this disruption of our enframed everydayness.
Comic books, invented in New York City in the 1930s, are geometrically and tectonically structured adaptations of bedsheet newspaper comic strips to the format and proportions of newsstand magazines. Will Eisner, who created the American cooperative system of comic book story creation (combining the creativity of writer, penciller, inker and letterer in a shop or small-batch production system), also coined the term “graphic novel” and published the first original work so described (A Contract with God, New York, 1978). The geometric code of panels (the sequential frames of comics, as explored by McCloud in Understanding Comics) as spatial constructs and architectonic fields for human feelings can best be understood through Eisner’s work. In the comic book pages of The Spirit, circulated through American newspapers in the 1940s, Eisner created the splash page and the entryway of menace, danger, and sexual transgression in the small geometries of stenciled glass, transoms, and cropped and angled doorways. In his finest graphic novel, To the Heart of the Storm (1992), reversing views through doorways chart the discovery and rejection of a Jewish suitor of a German-American girl by her family just before World War II. The syntagms of Eisner’s doorways create the paradigms of expressive panels and the language of metapanels that was key to the development of the modern graphic novel.
After the atomic bomb detonated over Japan in 1945, the world grappled to understand the significance of the event and its ramifications: what impact would the nuclear bomb have upon human life in a post-nuclear world? The A-bomb, still shrouded in military secrecy, existed as a looming question mark to be feared within the minds of world citizens. Cinema and comics responded to this anxiety and a new genre emerged: atomic science fiction, where radiation and nuclear fallout yield monsters and genetic mutants: giant ants, godzillas, shrinking men, sandmen, spidermen, and green hulks. Vis-à-vis media arts, these altered genetic life forms portrayed in popular visual culture since the 1950s function as conceptual precursors to contemporary biological art and transgenic art and research. Such contemporary genetically amplified, hybridized and modified life forms, a.k.a. biological mutants, include Eduardo Kac’s GFP Bunny and the GFP monkey, Stelarc’s Extra Ear, and Art Orienté Objet’s work with transfused panda blood. We suggest that superheroes and transgenics offer a form of immanent exploration of a post-nuclear world where social decisions are too complex to completely understand, technology too advanced to adequately control, and scales of experience too terrible to directly experience.
From Foucault’s ideas about the political regulation and management of life to Giorgio Agamben’s theories of “bare life” and the state of exception, discussions of biopolitics have tended to avoid basic questions about code and information, about language and life, beyond a Heideggerian framework. Conversely, information theory since Shannon has proved reticent about acknowledging the biopolitical implications of its conceptualization of biological processes. The aim of this panel is to explore the relation between biopolitics and code in a number of different registers, not simply to arrive at biopolitical reading of code but to think the implications of code for the biopolitical. Between code and the politics of life are possibilities for thinking beyond many of the binarisms that continue to crop up in discussions of code, such as rule versus sign, correlation versus system, digital versus analog, and even human versus non-human.
As autopoiesis defines the human body as an organism closed to information, autopoiesis, I will argue, is the basis of the distinction of analog and digital, a distinction which must be rethought in order to address a shift in the governance of capitalist productivity from neo-liberalism to a radical neo-liberalism or a politics of life and death. In addressing autopoiesis in order to rethink the distinction of analog and digital, I begin by following information theory as it moves between physics and biology and then revisit what Humberto Maturana and Francisco Varela, who theorized autopoiesis, referred to as “languaging” as distinct from coding.
What code does (rather than what code is) could be an interesting way to also discover some positive approaches to information other than Shannon theory. What do you think code does? My shortcut is that code delays or defers agency. But given the concept of agency, code in its vulgar formulation drops out of the picture. If we defer or delay agency arbitrarily far, can we simply replace it by the more neutral but much vaster notion of morphogenesis? This brings us back to biopolitics, I believe, but perhaps not in a conventional sense. What we would need to understand, then, is (1) how matter may be not neutral but suffused with value, and (2) how matter articulates novel form in the process of morphogenesis.
As the BioCode proposal for a new taxonomy suggests, code is often conceptualized in biological domains via a logic of the discrete or axiomatic. This is also true of conceptualizations of DNA as code: the tendency is to imagine biodiversity (an ontology of the multiple for biology) on the basis of an axiomatic procedure that is construed as code. I propose to review some of these biological conceptualizations of code with an eye to where the axiomatic tendency appears to falter, for it is here that we can begin to think code in terms of problematics or a science of the continuous. Recent work on communication among cells (within organisms or among bacterial cells in biofilms) provides some interesting leads. Ultimately, in biopolitical terms, what is at stake is the biological imagination of the subject. The axiomatic bias in conceptualizations of code in biology tends to favour an imagination of the human in terms of “embodied minds.” What we need to explore, however, are encephalized bodies, and an ontology of code grounded in problematics may afford a way to think “biocodes” differently.
Starting as it does with a description of the human body as a string of html code (i.e., <HEAD>[FACE]<BODY>; <BODY>FACE</BODY>), as well as with a double entendre with its reference to “Sign.mud Fraud,” Talan Memmott’s Lexia to Perplexia (2000) presents us from start to finish with an anxious interface that is entirely preoccupied with the shifting status of the subject in the networked environment of new media art. In this work the subject is an anonymous and unknown protagonist resigned to a divided and encoded existence that the computer interface and distributed network have made inevitable. Throughout the work, Memmott explores self and cell.f, offering meditations upon these two poles of identity that are themselves as schizophrenic, fragmentary, and disruptive as the self in question.
This paper explores tensions between the self, traditionally conceived, and the “cell.f,” the term Memmott introduces to describe the contemporary mediated subject as a fragmented member of a collective in Lexia to Perplexia. By analyzing select excerpts of this beautiful, fragmentary work, I consider Memmott’s experimental language and suggest that his innovative use of the interface, as well as his use of portmanteau, punctuation, and “cyberorganization,” are all strategies to enact in language the loss of the subject’s cohesive nature, as well as a way to explore both what is left behind and what emerges in the wake of this loss—a haunting portrait of a ruptured and divided subject who mourns the loss of his cohesion.
In 1976, Richard Dawkins, in his seminal work The Selfish Gene, compared cultural data to genetic data in order to describe how certain social behaviors might be inherited or passed down through centuries of human development. Neal Stephenson’s Snow Crash utilizes this idea in order to construct a world that is both controlled by technology and infected by cultural viruses. These viruses attack the memes by hacking through the meta-linguistic structure buried within the brain in order to bring about a change in their cultural system. These codes are rules designed to enforce a system of order. In Stephenson’s text, these codes are both wet and dry; they impose order on humans as well as virtual avatars and the world these avatars inhabit. However, as any system of codes would suggest, a change in the rules enforces a new, updated order under the control of those who write the code. This new system makes the mysterious scientific, the ancient modern, the connected fragmented. The individual survives the reprogramming of the unified. Thus, Snow Crash through its use of both wet and dry encoding simultaneously connects all living things and sends them scattering. Stephenson’s desire for a unified encoding of humanity is disrupted by the impulse toward individuation. The collective unconscious is subverted in favor of the individual’s right to program itself. Stephenson’s fear of the collective unconscious symbolized by Dawkins’s memes and the assimilated Sumerian civilization is finally exorcised by his hero’s programming savvy and his struggle for individuation.
In the words of Stuart Hall, “Culture is neither just the process of the unconscious writ large, nor is the unconscious simply the internalization of cultural processes…”
My paper will reformulate the Lacanian equation, $>a, in order to make specific the material conditions of the irresolvable tension Hall describes. For “culture,” I will substitute three specific territories: the “audiovisual” (AV), the “literary” (L), and the “informational” (I), understood as imbricated modalities, or functions in the mathematical sense, that necessitate rethinking “>”, such that it reflects the material conditions which code and produce an AV-$ and an AV-a, for example. My intention is to “liberate” Lacan’s formula from reduction to psychoanalysis, and make it newly available for cultural criticism. What emerges is a materialist conception of the relation between a technocultural “subject” and its “imagination,” understood not psychoanalytically, but in terms closer to those used by Appadurai, available globally for “social use.”
Hall rephrases the above quotation in terms of the diasporic imagination, not in the sense of the nomadic, but, alluding to Gramsci, as constituted through positionality, and specified through “cultural repertoires of enunciation.” The audiovisual, literary, and informational produce very specific media-forms of enunciation, in which the diasporic imagination replaces symbolic fantasy. My paper will address the media-specific codes and processes by which this substitution occurs.
This panel demonstrates how science fiction studies can work in tandem with science studies to generate insights about contemporary technocultural narrative. Brian Attebery proposes that the methodologies of science fiction studies comprise a “secret decoder ring” that unlocks the meaning and value of science fiction as one of the premier sign systems of modernity. The members of this panel extend this insight to demonstrate how science studies scholars might use this secret decoder ring to critically assess the science fictional underpinnings of twenty-first century stories about national security strategy, gender in the military, and global race relations.
Examines how science fiction narratives code national security strategy. Drawing upon formalist theories of science fiction, Davis discusses the potentials, limits, and political implications of basing homeland defense on creative works of speculation. Davis looks at how the American military and other national and corporate security institutions use science fiction ways of knowing in their practice of “red teaming”—organizing teams of experts to plot and sometimes carry out practice attacks against American institutions. Thriller authors Brad Meltzer and Brad Thor have recently gone public with their involvement in the Department of Homeland Security’s (DHS) Analytic Red Cell Program, for which they plotted out a series of hypothetical terrorist attacks. In addition to its red cell program, the DHS has also written its own science-fictional planning scenarios that it then used as a guide for a recent series of civil defense drills. Davis analyzes the DHS’s catastrophic planning scenarios, government strategy documents, the fiction of authors such as Meltzer and Thor, and related terrorist-themed works to explore how science fiction ways of knowing increasingly inform national security policy.
Examines the media coverage surrounding Private Jessica Lynch, the first U.S. woman POW ever rescued by the U.S. military. Sharp argues that science fiction constitutes an essential area of study for understanding this media frenzy and for mapping how scientific narratives about gender circulate in American culture. From the Bionic Woman to the new Starbuck, American culture has seen an increasing number of technologically adept women who can fight alongside men. At the same time, scientific arguments about the supposedly “uncompetitive” and physically inferior nature of females first codified by Darwin have continued to gain wide circulation in the academy and popular culture. I argue that the Jessica Lynch story brought out these contradictory impulses in American culture, as she was represented as both a tough woman who went down shooting and a damsel in distress. In this sense, I argue that she is actually identical to many of the supposedly progressive female soldiers in American science fiction who are always limited in ways consistent with Darwinian narratives of evolution and gender.
Shows how authors of the African diaspora use the estranging techniques of science fiction to de- and re-code the tales of tomorrow generated by big science, big business, and the mass media. Taken together, the stories spawned by this “futures industry” equate utopia with a deracinated, high-tech future and dystopia with the seemingly “primitive” or low-tech communities of contemporary Africa, the Caribbean, and inner city America. As such, they participate in a speculative narrative tradition stretching back to H.G. Wells and H. Rider Haggard in the nineteenth century. Yaszek argues that black Atlantic artists combat these bad futures by appropriating the science fictional figures of the creative engineer, the space explorer, and even the science fiction fan for their own ends. Focusing on the work of Amiri Baraka, Nalo Hopkinson, and Minister Faust, Yaszek proposes that Afrofuturist authors do more than simply put a heroic black face on the future. Instead, in good science fiction fashion, they insist that we must abandon outdated visions of utopia and dystopia and boldly go where no one has gone before by extrapolating new ethical and aesthetic ideals from the history of the African diaspora itself.
The members of this panel examine how visual media function as utopian tools for recoding conventional understandings of science, society, and self for contemporary culture. The topics addressed in the panel discuss methods employed in contemporary media that update traditional notions of the self in a technological society. The first two panelists focus on television texts and their subversions of conservative representations of the body and identity. The third panelist discusses the relationship between the self and the broader social and legal systems the self inhabits.
In this paper, I use theoretical concepts drawn from science studies and queer studies to demonstrate how the primetime television drama Nip/Tuck subverts heteronormative gender codes and replaces them with new ones more appropriate to the new millennium. Traditionally, television shows reiterate conservative notions of sex and gender that posit male and female as biologically determined, naturally opposed sex identities which produce naturally opposed masculine and feminine gender identities. These representations are problematic because they reinforce socially constructed sex and gender codes and are mechanisms of the heterosexual matrix. As a primetime adult drama, FX Network’s Nip/Tuck exploits the genre expectation of scandal and outlandish narratives to subvert the heteronormative matrix and traditional television representations. The series utilizes the trope of plastic surgery to question the assumed naturalness of biologically determined sex identities and gender codes present in mainstream American culture. Analyzing Nip/Tuck through the lens of science studies enables us to better understand the series’ representations of the plasticity of the body and the divide between biological and psychological identities. Analyzing the series through queer theory highlights its criticism of socially produced and policed codes of gendered behavior. Thus I demonstrate how the series presents an alternate society where traditional sex and gender codes are no longer relevant and a broad spectrum of identities is not only possible, but also embraced.
Discusses the changing representations of self in a post-War on Terror society. The 2003 re-imagining of Battlestar Galactica (BSG) elicits a re-imagined threat from autonomous technology created by humanity—Cylons. Unlike the original BSG series, the latest reincarnation introduces a more human cyborg variant of the Cylons, affectionately referred to by the humans surviving a nuclear sneak attack as “skin jobs.” BSG goes beyond the Cold War representations of infiltration such as Invasion of the Body Snatchers and The Terminator. Some Cylons are programmed to live human lives not knowing that they are in fact artificial beings at war with humanity until a signal awakens the “sleeper.” Additionally, two central themes of the series concern Gaius Baltar’s fear/hope that he is a Cylon and the quest of the Cylon known as D’Anna Biers/Number Three to learn the faces of the five unknown Cylons. These anxieties about identity are unique to BSG, because identity is destabilized on both sides—the biological humans and the technological Cylons. Both groups share similar anxieties, which equates the two groups and raises the question: are we becoming Cylons?
The acceleration of technological development, coupled with other real-world anxieties such as the Global War on Terrorism, supplies themes that are often implicitly as well as explicitly confronted in BSG. In a post-Cold War age, the “sins of the father” (i.e., discouraging discussions about the war) are repeated. Again, SF serves as a space in which discussion can begin, because the veil of disbelief defends and insulates the serious subject matter presented from direct assault from pundits aligned with the current American political regime.
Other post-War on Terror SF that connects to BSG includes the third season of Star Trek: Enterprise (Starfleet seeks to prevent the destruction of Earth after an unprovoked sneak attack) and Bill ’s Sunshine Patriots (race and class subjugated cyborg warriors fight an unjust war on the foreign soil of another planet).
Much of the early rhetoric surrounding the Internet has been criticized for its investment in positivist, Utopian thought. However, the Utopian character of this rhetoric is fundamentally tied to ideas of Utopia as a space, a real and existing perfect community. As various critiques (most notably Deleuze’s writing on the control society) revealed potential dangers inherent in this line of reasoning, this Utopian rhetoric was jettisoned in favor of a sober realism that resisted both the blind optimism of early cyberculture and any discussions of Utopian qualities of life online. In this paper, however, I argue that the Internet is Utopian, but that these early writings drew on an insufficiently nuanced theoretical understanding of Utopia. Instead, I foreground Fredric Jameson’s understanding of Utopia as a cognitive process as a model for thinking about online culture. By using this definition, I retain the politically and socially progressive character of early Internet writing while also maintaining the sober understanding of the rigors of late capitalism inherent in both Jamesonian thought and later criticism. This discussion is facilitated through an analysis of Sweden’s Pirate Bay group and its recent efforts to subvert and recode copyright law. Ultimately, I conclude by suggesting that we can conceptualize the Internet as a Utopian tool whose very nature does not encode a specific political ideology or facilitate electronic instantiations of spatial utopia.
Victoria Alexander will read a short story called “The Narrative” about a message without a sender, inspired by complex system studies, information theory and biosemiotics but having nothing to do, literally, with any of them.
Steven J. Oscherwitz will discuss his art focusing on Husserl’s writings on internal-time consciousness and nanotechnology intertwined with some histories of science.
Joseph Duemer will read from his poetry. He has a particular interest in the relationship between science and the arts.
“Biological imperialism” is Alfred Crosby’s term for the animal and environmental dimensions of the history of globalising modernity. He describes how colonists travelling to the Americas and the South Pacific took with them “a scaled-down, simplified version of the biota of Western Europe”; a “grunting, lowing, neighing, crowing, chirping, snarling, buzzing, self-replicating and world-altering avalanche” (Ecological Imperialism, 1986, 88, 194). Yet, rather than simply sculpting their new territories into obedient replicas of home, these human and nonhuman colonists found themselves occupying what Nigel Clark calls “zones of turbulence,” created by the meeting between complex systems. The unpredictable nature of such meetings resulted in states of ferity. As “the first elements to break out of the state of equilibrium” (Clark, “Wild Life,” 1999, 152), feral species provide vivid opportunities to study the ideologies and practices of modernity, both in their ideal state and in their breakdown. If modernity has a code, feral animals are its code-breakers.
There is also a literary history of ferity. Robinson Crusoe’s goats, Gulliver’s Yahoos, Frankenstein’s Creature, the “coming beast” that invades the scientific romances of H.G. Wells, are all manifestations of the peculiarly modern experience of nonhuman ferity. My paper will survey the place of code-breaking feral animals in these texts and in some of their more recent re-workings: Timothy Findley’s Not Wanted on the Voyage (1984), Peter Høeg’s The Woman and the Ape (1996), and Margaret Atwood’s Oryx and Crake (2003).
Evolutionary ideas that emphasized a linear, progressive development of society saturated social thought in England and the United States in the nineteenth century and influenced intellectuals’ understanding of the human/animal relationship. Influenced by Morgan’s, Darwin’s, Tylor’s, and Spencer’s thoughts on the human/animal relationship, writers of American periodical literature in the second half of the nineteenth century viewed the Western domestication of animals—the selective control of animals’ breeding habits and the Western adoption of indoor companion animals, especially the dog—as evidence of societal progress. The domestication and adaptation of animals allowed these writers to differentiate Americans not only from other animals, but also from other cultures that did not domesticate animals or that did not adapt animals to the human environment. When comparing different cultures’ ways of relating to animals, these writers often employed a familiar evolutionary framework to disparage non-Western societies’ failure to domesticate animals, their different ways of relating to animals, their tendency to have “backward” dogs, and their supposed closer affinity to animals. A dissenting viewpoint, also found in the periodical literature, saw disturbing aspects of the human/animal understanding in Western society and suggested that human treatment of animals revealed a darker side of “progress.”
In 1603, Edward Topsell’s Historie of foure-footed beastes boasted descriptions of “the true and lively figure of every beast.” Collected and translated from sixteenth-century naturalist Conrad Gessner, words and images gave shape and form to Topsell’s “figures.” The Historie, arranged alphabetically by animal name, offered information concerning the appearance, habits, and usefulness of each animal and relayed stories about their behaviors and interactions with humans. Woodcut images dramatically filled the pages and played prominent and essential roles in many of the descriptions.
What made a figure “true and lively”? More specifically, what role did images play in Topsell’s descriptions and in what ways did they contribute to the truthfulness and liveliness of the descriptions? My paper aims to examine how Topsell took the Historie’s images seriously as a means of knowing nature. I begin with the assumption that images went beyond supplementing and illustrating words and worked as key tools in understanding natural objects. Topsell provided his readers with a guidebook. Using images and words, he granted readers specified tools to access and experience four-footed beasts and assembled before his audience a particular vision of nature. My paper investigates the role images played in shaping Topsell’s view of nature and the ways images worked to instruct and guide readers to properly see and understand the forms of nature.
Issues that designated panelists will address include gender roles depicted in the novel, the plausibility of the plot of scientific misconduct, and the linking of scientific ambition and deception in a period when public understanding of science is a problem.
Plot Summary (from Publishers Weekly as on www.amazon.com):
Starred Review. In another quiet but powerful novel from Goodman (Kaaterskill Falls), a struggling cancer lab at Boston’s Philpott Institute becomes the stage for its researchers’ personalities and passions, and for the slippery definitions of freedom and responsibility in grant-driven American science. When the once-discredited R-7 virus, the project of playboy postdoc Cliff, seems to reduce cancerous tumors in mice, lab director Sandy Glass insists on publishing the preliminary results immediately, against the advice of his more cautious codirector, Marion Mendelssohn. The research team sees a glorious future ahead, but Robin, Cliff’s resentful ex-girlfriend and co-researcher, suspects that the findings are too good to be true and attempts to prove Cliff’s results are in error. The resulting inquiry spins out of control. With subtle but uncanny effectiveness, Goodman illuminates the inner lives of each character, depicting events from one point of view until another section suddenly throws that perspective into doubt. The result is an episodically paced but extremely engaging novel that reflects the stops and starts of the scientific process, as well as its dependence on the complicated individuals who do the work. In the meantime, she draws tender but unflinching portraits of the characters’ personal lives for a truly humanist novel from the supposedly antiseptic halls of science. (Feb. 28)
Participation will be open to anyone at the conference who is interested in attending. Allegra Goodman’s Intuition will likely prompt a lively discussion as the novel offers an inside look at a scientific lab and examines multiple motivations at play in big scientific endeavors.
Prosody, the suprasegmental or non-lexical aspect of human speech, is cued by fluctuations in pitch, intensity and duration. While these fluctuations are requisite physical attributes of speech, they are consciously controlled and are essential to communication. There is evidence that, even in the absence of an acoustic signal, e.g. while reading, readers impose a prosodic contour on read passages. Therefore, prosody exists as a second information stream encoded in speech, overlaid on the sequences of words in an utterance. Prosody is relatively well understood in terms of its acoustic markers, somewhat understood in terms of its distinct phonological units (accents and phrases) but not well understood in terms of its full communicative functions. While some of these functional applications of prosody, such as using accents to signal contrastive stress (/I didn’t SAY you were an addict./), or intonational phrase breaks to indicate syntactic units (/I don’t think; I know./), have been investigated in detail, the socio-linguistic aspects of prosody, that is, the uses of prosody to communicate nuances of meaning, have not been as well studied, despite the persistent belief that “not just what a person says, but how they say it” is significant. Decoding prosody is further complicated by some evidence that a single meaning might be encoded in several prosodic implementations and that a single prosodic implementation might in fact signal several distinct meanings. One hypothesis is that the prosodic system is inherently ambiguous which serves the communicative role of advancing a position without necessarily committing to it.
In the recent novel Jayber Crow, Wendell Berry fictionalizes the ecological philosophy he developed early in his career in The Unsettling of America. A reviewer of Berry’s novel credits it for “warmth and luminosity” in spite of its “freight” of ideas, while Berry himself warns readers in an opening epigraph to beware of those who would find his novel a “text,” search out a “subtext,” or “explain,” “interpret,” “explicate,” “analyze,” “deconstruct,” or otherwise “understand” it. In part his resistance to such maneuvers simply rings with the fiction writer’s disdain for the death of the author; in part it exemplifies his philosophy itself, wary as he is of the “specialists” who have destroyed subsistence farming and ruined the planet. This paper will explore these comments in another context, however: that of Linda Hutcheon’s recent work A Theory of Adaptation. She reconsiders the derivative implications of adaptation by considering adaptations as both product and process. In an attempt to question a hierarchy of genres, media, and modes of engagement, she examines adaptations not just as products but as processes of creation and reception. She concludes that “there are precious few stories” that have not been “lovingly ripped off.” Texts are “transcodings” and “inherently palimpsestuous,” “haunted at all times by their adapted texts.” Although most adaptations are not the work of the originating author, some are collaborations with the author. Ironically, in this case it is Berry himself who offers us his own critique or interpretation, as well as an illustration of Hutcheon’s conclusions. In one sense of coding, his previous text is an instruction in how to proceed with adaptation and illustrates the creative transformation necessary to meet the needs of a new genre and context. In another sense, the novel is read intertextually, at least by those familiar with his earlier work, as a palimpsest, veil, or code revealing its forebear.
Both works retell as well an earlier story, confirming Hutcheon’s belief that adaptations are not derivative aberrations but central to an evolving cultural tradition. Texts are repetitions with variation and so echo evolutionary processes on the level of nature—of niche construction, Baldwin effects, acquired genomes, and non-equilibrium dynamics as well as Hutcheon’s concerns with the what, who, why, how, where, and when of adaptation.
Out of the intellectual ferment of the early 19th century, modern Slavic culture was born. And at the center of this process, propelling it clear across East Central Europe, towered the spindly, red-haired figure of Jernej Kopitar (1780-1844)—censor and librarian to the Hapsburgs, prolific journalist, and father of modern Slavic philology. The son of a Slovene peasant, admired by Goethe and Jakob Grimm, Kopitar was both a formidable scholar and unlikely celebrity notorious for his eccentricity and caustic wit. More significantly, he was also a thinker caught between two intellectual worlds: the late Enlightenment and early Romanticism.
Nowhere is this uncomfortable position more evident than in Kopitar’s quixotic efforts at Slavic alphabetic reform. Like many thinkers of the time, he believed that the Slavs comprised one nation, speaking dialects of one language. In his Slovene grammar (1809) and a series of articles in the Vienna press, Kopitar argued that this far-flung nation could effect cultural rapprochement by adopting a single alphabet. This was a quintessentially 18th-century project, coming straight out of Herder and Schlözer, and capturing the universalizing tendencies of the Enlightenment; but the fiery Kopitar argued it as a Romantic, alienating his mentors and arousing particularist passions in his disciples. His putative Pan-Slavic alphabet had been intended as a code that could unite “50 million Slavs” against the Germans; by the 1830s, however, it was a catchword for all the reactionary forces in Austria that were conspiring to thwart Slovene (and Czech, and Croat, etc.) aspirations.
Ever since Galileo, modern scientists have claimed to possess a sacred text to which they have found the interpretive key. The trope of the Book of Nature was inherited from a medieval world that believed it to be a complement to, or another version of, sacred scripture itself. Since the Enlightenment, scientists have often used the Book of Nature to trump scripture. Structures of meaning and guides to human conduct are read out of the cosmic order revealed by science. There are also negative readings, which contend that the world reveals the meaninglessness or insignificance of life. The human genome, the idea of dark matter, and the theory of the big bang have all been used as fodder for such assessments.
Few attempts to interpret cosmic order seem to recognize their own status as secularized natural theology. Scientists who read metaphysics or ethics out of ontology fail to explain their own decryption protocols, pretending to a semiotic transparency that nature does not possess. Therefore, it may be time to resurrect a Montaignian skepticism about all forms of natural theology, and offer a code of unknowing which admits, as an interpretive principle, that a shadow falls between physical phenomena and their meaning for human life, a shadow we may not be able to dispel all at once. Hermeneutical modesty, which has already gained ground in feminist Science Studies, may be a key to unlocking the cage of modern anomie.
Francis Bacon’s bilateral ciphers seem to offer us a model for the greater “scientific” project of the “Interpretation of Nature.” The objects of study would appear to the natural philosopher as encoded in a double language; on the one hand, they speak to the fallen intellect of man, while, on the other, they reveal the power of God by playing a role in His Providential plan. Viewed in this way, we can see that the intellectual tool offered by the New Organon ostensibly enables the investigator to act in accordance with the divine motivation that directs the object’s potential and that it does so by suppressing the fallen intellect’s inclinations. While the investigator will recognize the success of his/her inquiries by the applicability of his/her knowledge, there necessarily remains a secretive impulse behind that knowledge, for the divine plan itself can never be understood.
Code, as it presents itself in Bacon, seems to be of such a nature that it can only be ‘cracked’ in translation rather than in the language of its divine creator. The consequence, it appears, is that the truth of the code, linguistic though it may be, ultimately resides outside language itself and within the realm of practical use. By relying on the theory of Walter Benjamin, my paper investigates the consequences that this relation of the objects of nature to code, translation, and truth in Bacon’s New Atlantis and Advancement of Learning may entail for the ontology of the new science.
The world in which we live is made up of complex systems, from the cells in our body to the world econosphere. Complexity theory asks what characteristics allow these systems to function and grow at such an advanced level of interaction and differentiation. Looking particularly at the findings of Santa Fe Institute co-founder and complexity theorist Stuart Kauffman, I will explore the idea that our biosphere—and our culture—expands on the “edge of chaos,” as Kauffman terms it. RNA demonstrates the chaotic antithesis at the heart of complex systems: it thoroughly orchestrates the successful construction and differentiation of cell types, but it is also indifferent in that it fails to prevent spontaneous mutations and the creation of novel structures. These mutations beget competition among cellular structures within the system, allowing the fitness of a species to increase in a classic example of evolution. Do complex systems emerge only on the edge of chaos, away from too thoroughly calculated and safe interactions? I end my discussion by extending these concepts to realms outside of science. For instance, might we say that language or philosophical discourse exists on the edge of chaos?
In Fast, Cheap and Out of Control (1997) documentary filmmaker Errol Morris brings together interviews with robot scientist Rodney Brooks, topiary gardener George Mendonça, wild animal trainer Dave Hoover and expert on mole rat behavior Ray Mendez to create a film that he describes as about “deeply weird animal stories.” Through clever intercutting of scenes and the juxtaposition of sound from one interview with visual footage from another, Morris presents an elegiac meditation on our complex and problematic interactions with non-human species. Brooks creates robotic insects and speaks animatedly about the replacement of carbon-based life with silicon life forms that “are just different ways of living,” while Mendez is fascinated by the insect social structure of the mammalian mole rats, a species that “breaks the rules” and fascinates us with its image of “life that exists irrelevant to yourself.” Mendonça speaks with great affection of his animals sculpted from privet and the particular care each ‘animal’ needs, while Hoover focuses on a romanticized version of his own heroic domination of the wild beasts in the circus, ironically commenting that “the problem with the wild animal act” is that the animals all have individual personalities and desires, “like people.”
Through its interwoven narratives about creating life, controlling life, finding ourselves in the mirror of another social animal, and attempting to interact with those whose embodiment and experiences are alien to us, Fast, Cheap and Out of Control interrogates the problematic intersection of material animals, human scientific practice, and the abstract concept of ‘the animal’ in contemporary culture. The film reveals how difficult we now find it to sort human from animal, natural from artificial and through its use of stock footage from science fiction and jungle adventure movie serials, connects the stories of these particular men to a larger cultural world of fantasies and ideas we have projected onto the idea of non-human life, animal and otherwise. Using Morris’s film as a starting point, this paper will explore the place of animals in contemporary technoculture drawing on Jack Turner’s The Abstract Wild (1996), Gregg Mitman’s Reel Nature (1999), Akira Lippit’s Electric Animal (2000), Nigel Rothfels’s Savages and Beasts (2002) and Donna Haraway’s The Companion Species Manifesto (2003).
The Birds of North America Online (BNAO), brought online in 2004, complements the print version of Birds of North America, which was completed in 2002 after ten years of work. Both texts, published by the Lab of Ornithology at Cornell University, provide users with life histories and descriptions of all of America’s and Canada’s breeding birds. Unlike the print version, BNAO gives users access to sound and video clips of these birds. In addition, the online version allows for more timely revision than the printed text. BNAO, for instance, takes into account the recent creation of new species, such as the Cackling Goose’s elevation from subspecies (of Canada Goose) to species, and the splitting of the Blue Grouse into Dusky Grouse and Sooty Grouse. In fact, this ability to revise archival material may be the most important part of the BNAO, as it reflects the constantly changing ornithological classification system, which is nowhere near as stable as one might think. This paper will examine how the fluidity of web publication connects to the fluidity within the seemingly static binomial classification system of life. The ability to constantly revise the classification of bird species might be a means of reordering what Foucault calls, in The Order of Things, “the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things” (xv). In short, I am interested in how BNAO mutates and is mutated by our understanding of what a species is.
In The Extended Organism: The Physiology of Animal-Built Structures (2000), J. Scott Turner proposes a radically redesigned paradigm for defining what constitutes “life,” pointing out that the structures animals build—from the tiny burrows of earthworms to the Great Barrier Reef—harness and leverage the flow of energy in their natural habitats to their own advantage. These structures, he argues, should therefore be understood as external physiological organs and thus extensions of the animal’s phenotype. Turner even classifies the ant’s system of confining milk-producing aphids within anthills as such an extension, thereby including a second species within the phenotype of the ant. Turner’s persuasive dismantling of biology’s deeply ingrained binary between organism and environment has implications for systems theory and for understanding the relations between humans and the cultural systems they generate: in particular, languages and codes. If the distinction between the system that is “Homo sapiens” and the system that is “English” is an arbitrary distinction, as Turner suggests, then what is the relation between the two? I want to advance the claim that the relation is similar to that between higher animals and parasites, particularly viruses, spirochetes and prions, entities that come “alive” only by reproducing their bio-codes through higher organisms. Michel Serres may be correct to say, “we are all parasites of our language(s),” but perhaps our languages are also parasitic on us. More broadly, we are parasites of our codes, our codes are parasitic on us, and the process of parasitization is the fundamental mechanism by which the “extended structures” of our phenotype grow and propagate.
Severo Sarduy (1937-1993), one of the most innovative and challenging Latin American writers of the twentieth century, remains to date relatively unknown outside the world of Hispanic letters. And yet his varied interests led him to write across genres: novels, essays, poetry, plays, etc., and to produce a body of plastic works worthy of an accomplished painter. Among the numerous theoretical essays he wrote were Escrito sobre un cuerpo (Written On A Body), Barroco (On the Baroque), and “El Barroco y el neobarroco” (“The Baroque and the Neobaroque”). His interest in art coincided, as it did for many painters of the Renaissance, with an interest in science—or to put it another way, with the relation between the painterly and scientific figure, as mediated through language (Barroco). To that end, he traced the rhetorical force of Galileo’s language to his privileging of the circular motion of the heavenly bodies, and that in turn, to his rejection of allegorical and mannerist aesthetics. And in Kepler’s reluctant acceptance of elliptical motion he saw the anticipation of Borromini’s use of the ellipse in the architectural designs for San Carlo alle Quattro Fontane. Never fearful of taking chances, the next step he ventured was to postulate a relation between the pictorial figure of the ellipse and the anamorphic image (e.g., Holbein’s The Ambassadors).
While a great deal of ink has been spilled in reiterating the most obvious things about the Latin American Baroque, few are the critics who have taken the time to go back to the references in Sarduy’s texts to examine: 1) the way in which he particularly understood the art-science tradition of the High Renaissance, Mannerist, and Baroque epochs, 2) the impact that this had on what he wrote and what he painted, and finally, 3) how, as a Latin American writer, he viewed his tradition and himself in a trans-Atlantic context. Big Bang, one of his books of poetry—Mannerist in style—is the ultimate expression of a scientific-aesthetic tradition, which for him continued to live in the “new world.” In short, then, the aim of this paper is to shed some light on this most stylistically innovative, even experimental, yet traditional writer, and his views of art and science through the telescope of language and culture.
Early-19th century French military painting has often been understood in terms of the formalist dynamics of neoclassical History painting; more recent accounts have productively analyzed it in terms of Napoleonic ideology, with depictions of battles used to propagandize a certain fiction of empire. My paper will consider these pictorial “machines” in terms of military tactics and strategy, and set these paintings within the context of the information society that began to develop in late-18th century France. How were military paintings involved in embodiment, the very encoding of bodies? I will focus on Antoine-Jean Gros’ “Battle of Nazareth” (1801) as a paradigmatic case in this regard: the painting’s eschewal of standard topography and its sheer pictorial fragmentation made it an unusual instance of military imagery able to think the problem of coding and information. The irony of the painting, we might say, is that in working to achieve a certain informational repleteness—a form of distributed intelligence—in paint, Gros brought about its very demise. What Gros’s painting participated in was not merely the imaging of the military, but the very militarization of the image in early-19th century France.
Numerous artistic and scientific projects claim to represent reality; however, the relationship of the representation to the thing represented is far from universally consistent. Digital photographs taken through advanced telescopes show us scenes of glowing stars and galaxies, and we suppose they mirror the experience of looking through such an instrument. In other cases, like DNA maps, we do not assume a similarity to the appearance of subvisible objects, but believe that the representation links back through a chain of references to the original source.
To correctly understand and assess the reality of these examples, we must understand a variety of codes for realism. But the differing relationships to reality are not mutually exclusive, and often they are combined in a single representation. The appearance of Hubble Space Telescope images depends as much on image processing choices that refer back to physical properties as on verisimilitude. The artwork of Gail Wight, which positions DNA maps as portraits, requires us to see the abstract constructions as mirrors of the species they represent. Pieces by C5: The Landscape Initiative bring together photographs, aerial views, and GPS maps, juxtaposing and combining a variety of codes for depicting the landscape. These and other examples of artistic and scientific projects that conflate different codes of representing reality raise questions about how we interpret the relationships between things and their representations. Does the combination of different definitions of realism strengthen the correspondence or confuse it? The context of a representation—the closeness of the connection to art or science—may also change our expectations and assessments.
The winter of 1683-1684 stood out in its severity, and many English observers recorded the effects of the extreme cold. John Locke wrote from Amsterdam to describe the winter as the coldest “in the memory of man.” Back in London, John Evelyn registered the combination of festivity and alarm that the frigid weather occasioned in a diary entry made on the 24th of January: “The frost still continuing more & more severe, the Thames before London was planted with booths in formal streets, as in a city, or continual fair.” Evelyn considered human beings susceptible to climate, like other living organisms, and described its effects as a function of physiological mechanisms that have psychological outcomes. According to his model of the human body, heat stimulates the mind (unless it leads to delirious fevers), while cold makes for a robust constitution (unless it causes contractions of the intestine and dulls intelligence). English culture, according to Evelyn’s contemporaries, took shape around the embodied polarities of pyrexia and hypothermia, north and south. A string of harsh winters provided unwelcome evidence that Britain should count itself among the frigid places of the earth, and that London’s claim to a privileged position among the urbane required defense on both literary and scientific fronts.
Stories of strange weather abounded in 1684, many of them focused on the relationship between “prodigious” atmospheric events and disease, the marvelous and the ordinary. In this talk I look at some of the ballads and broadsides published in 1684 that drew attention to the deleterious effects of frigid weather on bodies and spirits. These include “A True Description of Blanket Fair upon the River Thames,” which dampens the carnival atmosphere to warn of the dangers of hypothermia and starvation. I also discuss Thomas Tryon’s Modest Observations on the Present Extraordinary Frost, John Peter’s A Philosophical Account of this Hard Frost (from which I draw my title), and a painting, now on display in the London Museum, titled “A Frost Fair on the Thames at Temple Stairs.”
“Hard Frost: 1684” takes up diaries, popular poetry, painting, and natural philosophy to explore the widespread fear that England had a tenuous hold on its temperate climate, and that a sudden shift could plunge its growing metropolis into another dark age. A temperate climate supposedly made the English more fit for rule and ascendancy than peoples in frigid or torrid climates. Climatological determinism—the view that geography conditions identity, with its implicit case for the superiority or inferiority of particular national cultures—had its roots in a Hippocratic revival that began late in the seventeenth century. Hippocrates’ authority supported speculation about the long-term effects of a neo-boreal trend on English bodies and the English character. Bitterly cold winters provided experiential evidence that Britain should perhaps count itself among the frigid places of the earth; claims to a place among civilized nations now demanded a new theory of medical climatology and new ways of reading the sky for explanations of the British cultural condition.
In this paper, I explore Daniel Defoe’s depiction of a natural world in crisis in his compilation of accounts of the violent storm over southern England in 1703. As I have argued elsewhere, early modern literature, including voluminous non-fictional writings on agriculture, weather, and navigation, reveals complex, dialectical, and even incoherent visions of the natural world that complicate the ways in which individuals perceive “Nature.” Confronted by the effects of devastating storms that ravaged England in the early eighteenth century, Defoe’s A Collection of the Most Remarkable Casualties and Disasters, which Happened in the Late Dreadful Tempest (1704) superimposes naturalistic and theological interpretations of the storm of 26 November 1703 in an effort to understand the significance of the short-term devastation and long-term implications of such violent weather. Defoe’s readers, I suggest, are encouraged to internalize the unpredictability of climate—storms, damaged buildings, blasted harvests, and shipwrecks—as “natural.” Defoe’s commentary on the accounts of the storm thus serves as a springboard to explore the ways in which climatic conditions during the Little Ice Age (c. 1350-1850)—shorter springs, longer winters, and often abrupt and violent shifts in weather patterns—affected both agricultural productivity (at a time when 90 percent of the population lived in rural areas) and the very conception of “Nature” itself.
In the opening paragraphs of Melancholy and Society, his sociological account of melancholy and political estrangement, Wolf Lepenies distinguishes his topic both from individual diagnoses and national characterizations, considering the former but a therapeutic “aid for orientation” and the latter as wholly arbitrary and inconsistent. Instead, Lepenies insists, a sociology of melancholy should not “refrain from incorporating ‘history,’ meaning historical reflection on the genesis of self-evident truths.” Yet from classical times through the eighteenth century, as Clarence J. Glacken and those following in his scholarly footsteps have demonstrated, environmental theories of climate and culture proved a remarkably resilient means of negotiating between material diagnoses of individual pathologies and geographical economies of climatic variables. Posited relations between climate and the mores, laws, and religious beliefs of different peoples from Hippocrates through Montesquieu established a basis for early ideologies of national governance and authority alongside conjectural histories of social and cultural development. The national characterizations Lepenies dismisses as a precondition for the establishment of a sociological method offered precisely the critical perspective on culture, history, and society his aversion to psychoanalysis seeks to engender.
Indeed, climatic accounts of melancholy as the defining attribute of northern peoples informed much of the early French sociological tradition from Condorcet, Destutt de Tracy, Sieyès and other writers associated with the Class of Moral and Political Sciences in the revolutionary French National Institute as well as Mme de Staël, Benjamin Constant, and Auguste Comte. Their constructs of melancholy as a historically and environmentally determined condition not only contest the classical tradition’s emphasis on the constancy of human nature but often make a virtue of a pathology, elevating melancholy as the temperament best suited for sociological reflection. Exploring aspects of these early social theorists helps to recast the defining temperament of European Romanticism as originating in climatic models of sociological hermeneutics.
We could see code as absolute and unambiguous, a signification of ones and zeroes which operates on a logic of computational precision and pure performativity, beyond the structures of writing and speech. However, as Adrian MacKenzie has argued, there is an undecidability, a series of non-computable numbers, which enables this pure computation. As a technology of transmission, it is important to see code as adestinational, subject to interruptions, noise, and thus miscommunication. Even at the level of the zero and the one of machine code, or the materialization of this difference in voltage, the ambiguity and undecidability which informs structures of signification return. Thus far from being the other of code, bugs, viruses, and computer crashes are necessary parts of a coded metaphysics. The radical iterability of these zeroes and ones necessitates that we pay even greater attention to the discreteness and spacing which constitutes code, even if that discreteness and spacing takes place as an acceleration of coding marks.
I want to argue that one of the central places that we see the adestination of code is in the structure of comments, or lines of code which are placed within the script but not processed by the computer. Far from directing itself only to the machine, code necessarily contains an expectation of human interaction. Highlighting this place where language and code cross-over, we can see that code becomes less about machine interaction and more about communicating to another programmer.
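The point about comments can be made concrete with a minimal, hypothetical Python sketch (the function, its name, and its comments are invented for illustration): the interpreter strips every comment before execution, so the annotated lines exist solely for the human reader — the next programmer — while the machine processes only the executable statements.

```python
def checksum(data: bytes) -> int:
    # NOTE to future maintainers: we sum modulo 256 for speed, not
    # robustness -- this line is addressed to you, not to the machine,
    # which discards it entirely before the function ever runs.
    return sum(data) % 256

print(checksum(b"abc"))  # prints 38
```

The comment lines have no effect on what the program computes; removing them leaves the machine's behavior identical, which is precisely the sense in which code "necessarily contains an expectation of human interaction."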
Key texts in molecular biology, infinitist mathematics, and the diverse musical minimalisms borne out of John Cage’s attention to “the frame” (such as La Monte Young’s compositions) seem to share more than just a concern with coding, as theological presuppositions and ad hoc metaphysics mixing codes of rationalism and mysticism in these discourses also produce fascinating transcodings that call to mind the technological and artistic heritage of Pythagoras and Plato. Scientists and artists in this lineage gravitate to versions of order that inhere in Bateson’s coding primer, “Every Schoolboy Knows....” where parsimony is posited as the fundamental presupposition of coding practice (Mind and Nature 23-37). Where Bateson likens coding’s premise to the figure of Occam’s razor, this presentation will instead weave an analogy to the similar but perhaps “lossier” trope of “compression” as it appears in another Greek musical tradition, that of the harmonikoi and of Aristoxenus of Tarentum, and compare the coding practices of their protos chronoi (primary time-lengths, the musician’s equivalent of the geometer’s point) with the meaning and utility of data compression today, including a consideration of so-called pseudocode, reflective/dynamic/object-oriented programming languages, sampling in dj culture, tagging, and codecs such as mp3 and oggvorbis. These coding practices have dramatically reorganized the function and phenomenology of music, programming, and writing in ways neither the Greeks nor the minimalists could anticipate, and this presentation will conclude with an allegorical account of “the listener” in these coding regimes.
The closing allegory will accompany a sound installation. Fragments of mantra, shards of tuned frequencies from an analog coupled oscillator, and free audience participation will provide coding elements, and George Gamow’s diamond code diagram (vis-a-vis Rich Doyle’s rhetorical analysis of Gamow’s codes, cf. On Beyond Living 39-64) will provide metacode. Simple coding elements and fragments will be offered up in advance, in stages, so that interested members of the SLSA community may, by means of an open-access wiki, participate in the sonic coda to the paper presentation (http://protoschronos.pbwiki.com/FrontPage).
In the last few years, discourses on code and programming languages have exploded. Many theorists, programmers and artists continue to ponder the relationship between source code and the programmed, aesthetic object, between surface effect and hidden code, between natural and formal language. From “flickering signification” to notions of code as executable or performative, this paper will revisit theories of code that have attempted to unpack the meaning and functionality of programmed language. What do these theories teach us about code and its increasing impact on our culture? About the interpretation of programmed artworks? Can semiotics assist us in thinking more clearly about the nature of code? While this paper revisits previous theories concerning code and programmed language (in order to question their usefulness) it also addresses code in relation to the linguistic concept of motivation. What can be gleaned from programming languages if examined in terms of arbitrariness and motivation?
Computer art is often associated with computer-generated expressions (digital audio/images in music, video, stage design, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). This paper will analyze and discuss code as the artist’s material. In particular, it will focus on one particular artistic code-praxis: the Live Coding performances of Slub (programming computer music live, visually showing the coding).
The artists Alex McLean and Adrian Ward (aka Slub) along with Geoff Cox declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” (Cox et al. 2004) The paper will argue that this statement formulates a media critique. As Florian Cramer has proposed, the GUI represents a media separation (of text/code and image) causing alienation from the computer’s materiality/text. (Cramer 2003) The paper will then propose that the object of art-oriented programming – in an avant-garde perspective – must be to recuperate an interchangeability of data and processing. How?
The particularity of Live Coding does not rely on the magical expression – but nor does it rely on the code/material/text itself (as one might say is the case in some code-art). It relies on the nature of code to do something – as if it were magic: in the performative aspect of the code. Relying on performance theory (Austin, Carlson), the paper will demonstrate how the computer in the Live Coding sessions is much more than mere mechanical performance. The paper will explain how code itself is staged as performative language (interchanging data and process) and further focus on the performance of code before an audience. Arguing that the performance excludes the audience (esoteric code), the paper will raise the question of whether listening to the code (at a non-textual level) may provide an embodied experience of data-processing. The performance can be regarded as a collective appreciation of the code artist as a musician using code as his/her instrument, interchanging data and process live.
This panel considers the thematic, cultural, and ethical implications of the very tiny (e.g., nano- or micro-) through readings of visual, nonfictional, and literary media, including film, television, scientific and belletristic texts. What do past studies and depictions of nanotechnology, particles, or the microscopic tell us about society today? Where and how do we locate the very tiny in the public imagination? How can viruses, microbes, atoms and nanotechnology be read with and against each other to produce new narratives that decode cultural anxieties and suggest ethical solutions?
Recent shifts in funding protocols of the National Science Foundation have expressed a growing concern within the organization for the social significance of its sponsored research projects. In addition to assessment based on the “intellectual merit” of the research, proposers must also address the “broader impact of the proposed activity,” including any potential benefits to society. The National Nanotechnology Initiative, the preeminent umbrella organization charged with the management of federally funded nanotechnological research and development, has incorporated at a fundamental level within its governing directives a similar interest in the “societal dimensions” of nanotechnological research and development. The “broad implications” of nanotechnological research and development for both NSF and NNI are, as well, entwined with mandates to mix related educational programs with the activities of scientists and engineers. Social scientists and humanities scholars have attempted to occupy these ostensibly ready-made positions and capitalize on the opportunities to effect novel technology development, scientific knowledge creation, their respective public perceptions and the educational experiences of technical researchers-to-be. A primary tool for such interventions has been science fictional works of literature focused on nanotechnology and nanoscience. Using the works of preeminent authors within the sub-genre of nanofiction—including Greg Bear, Neal Stephenson and Kathleen A. Goonan—contemporary strains of science fiction criticism—Darko Suvin, Samuel Delany, Marleen Barr, Carl Freedman—and technoscientific literatures—Carlo Montemagno, The Vicki Colvin Group—this paper will present reflections on and an assessment of the study of societal implications of nanotechnology through nanofiction.
Robert Hooke’s Micrographia (1665) attempts, like geometry, to base itself upon a point, a point that viewed under the microscope soon undercuts and deconstructs this project. The infinite divisibility implied by this failure leads Hooke to a comparison between the vastness of the planetary system and that of the microscopic world. In 1759 Edmund Burke’s treatise on the sublime echoes this comparison. But it is left to postmodern literature to return to the micro-sublime and fully to explore its implications through, as it were, a lens of its own. Passages from Nicholson Baker’s The Mezzanine, Robert Irwin’s The Limits of Vision, and John Barth’s Lost in the Funhouse reveal a micro-sublime that is ludicrous—both ludic and laughter-provoking. This postmodern play throws into question Kant’s notion that the psychological effect of the sublime is to cause reason to rebound with a sense of its infinite powers. Rather the capacity of the imagination—the very thing that is supposedly transcended by reason’s more capacious view in Kant—is put front and center. Finally, Luigi Serafini’s fantastical encyclopedia Codex Serafinianus (1981) puts text itself under the microscope, with surprising results.
In a forthcoming essay, “Quantum Poetics,” Stephanie Strickland outlines issues that inform her digital practice, some of which are: first, the discovery or refinement of time dimensions, from macroscopic “worldliness,” to engagements at the periphery of attention, to “curled-up” hidden possibilities; second, privileging a paradigm for interaction she refers to as “moving through me as I move”; third, cultivation of a flickering attention, directed not only to components but also to emergent levels; fourth, remolding neuro-cognitive capabilities through digital works; and fifth, a sense of the importance of the practice of translation, understood broadly as encompassing acts of “transduction, transposition, transliteration, transcription, transclusion, and the transformation we call morphing.”
From her first electronic poem, True North [hypertext], Stephanie Strickland has anchored her digital work in a visionary poetics for the electronic medium. When she set out to do the Storyspace interface for True North, published in 1998 by Eastgate Systems, she conceived of her poem as arising not from a two-dimensional grid outline of topics and subtopics but rather from the perspective of encountering an object in curved space. She wanted her words to inscribe arcs, suggest structured labyrinths, and offer glimpses of connection in virtual space. In this work we can see signs of the interests that will recur throughout her e-poetry.
In slippingglimpse, each of 10 ocean videos reads a poem text; the poem texts, in turn, each “read” image capture technology; and, completing the loop, video capture reads the water—as chreods, that is, as mathematical patterns by which dynamical systems return to their same flow, persisting through change. Regeneration of the screen returns random words, at random sizes, from the poem text at locations fully determined by the water’s motion, as if its motions were eyes scanning the text, bringing it along.
In one possible view, a slider scrolls the poem text either up or down at varying speeds, or pauses it, in conjunction with high-resolution video, thereby enabling several experiences of co-reading: 1-simultaneous reading and watching; 2-reading in concert with the non-human reader, the water; 3-reading and/or reading yourself reading the water reading; and 4-reader-specific multiple perceptions of movement vs. static text. Two purely video views additionally permit “brink” and decipherment and wholly graphic approaches to the poem’s words.
This work is indebted to Gregory Bateson, René Thom, and C.S. Peirce. The authors share Ollivier Dyens’s views of technological reality and Kenny Goldsmith’s and Dirk Vekemans’s senses of the need to relate to non-human algorithmic potential.
Pursuing one of the ideas outlined in Bacon’s Advancement of Learning (1605), a cohort of educational, religious, linguistic and philosophical reformers began seriously to pursue proposals for a new artificial-philosophical language from the late 1630s onwards. They were inspired by the ideographic accounts of Chinese characters given by returning Jesuit missionaries, and by the need: i) to access and communicate reliable knowledge of the world, and ii) to proselytise the inhabitants of the novo orbe of the Americas. These problems could be addressed together because, after Aristotelian and neo-Aristotelian thought, the human mind was taken to encounter nature in the same way the world over. This encounter took place visually; sight both enabled and vouchsafed the accurate representation of the order of things, whether through languages or other media. My paper will chart the fertile interpenetrations between the worlds of language, science, philosophy, natural theology and missionary religion as it emerged in the distinctive context of the pansophical reform projects that proliferated in England between 1640 and 1660, and which morphed into the modestly-dressed physico-theology of the early Royal Society. Authors considered will include Robert Boyle, Francis Lodwick, William Petty, Cave Beck, George Dalgarno, John Wilkins and Robert Hooke, along with a host of lesser-known intellectual lights.
This paper explores connections between practices of missionary linguistics in colonial America and the transatlantic quest to discover a universal language. Emerging around the time that Roger Williams published his Key into the Language of America (1643), universal language theory promised nothing short of the reversal of Babel through the return to an original language and the compilation of all the languages of the world into a universally recognizable system of characters. I argue that the search for a universal language influenced how Algonquian was written, recorded, and preserved, while missionary linguistics also shaped how the universal language was promoted and theorized from its inception in the teachings of Czech philosopher Johann Comenius to its revival by members of the American Philosophical Society in the 1820s. While numerous scholars write about the shifting image of the Native American as the ideology of biblical savagery gradually became a science of racism, missionary linguistics reflects a complex history of overlap between millennialism and empiricism and between theology and natural philosophy. Missionary linguistics created a sacred and primordial archive that was used by natural philosophers and ethnographers to promote a utopian vision of divine discovery through the recuperation of an original and perfect language.
This material is from Professor Bono’s forthcoming project on “The Word of God and the Languages of Man: Interpreting Nature in Early Modern Science and Medicine. Volume 2, England, 1640-1670.”
Few reputable literature scholars would rely solely on secondary materials like Cliff’s Notes (the literary equivalent of the gamer’s strategy guide) as stand-ins for close readings of primary texts. Yet to perform thorough close readings of playable digital fictions, which increasingly require a diverse range of activities such as cryptanalysis, “twitch” motor responses, and travel to real-world physical spaces as key elements of the narrative, the digital games scholar has little choice but to rely on such aids in order to see the unfolding narrative through to its end.
This paper considers the increasingly murky line between cheating and close reading in light of developments such as alternate reality games, which invite the player to examine the underlying machine code of the text as a requirement of game play, and attempts to examine source code to find “cheat codes” in console games. I argue that any close reading of a digital game must necessarily consider all accessible elements of the game, including the underlying code, for two reasons: First, by examining the code we can facilitate progress through the narrative in its entirety. And, second, this allows for readings that are compatible with the traditional idea that a close reading should present not simply the “best” reading of a text, but all possible readings.
Drawing on contemporary discussions of close reading within new media, I present a methodology for game analysis that attempts to delimit the practical and ethical boundaries of such readings and identify the potential problems presented by altering the essential mechanics of play in the interest of thorough analysis.
This presentation discusses games and cognitive science as they are expressed in a particular live, interactive game called the MINDful Play Environment (MPE). MPE, an acronym for Motion-tracking, INteractive Deliberation, uses motion tracking technology and media objects like video, animation, music, lights, and spoken word to foster intensive physical movement and creative problem-solving and collaboration among users.
MPE is the product of Corporeal Poetics—artists Dene Grigar and Steve Gibson and artist-engineer Will Bauer. It will be a featured exhibit at the Oregon Museum of Science and Industry (OMSI) in the fall of 2007. Created from the engine that produced Gibson’s successful music and light installation, Virtual DJ, MPE creates a virtual reality experience where three players produce a collaborative multimedia art installation—comprised of light, music, spoken word, video, and animation—on the fly in real time, with the help of motion tracking and webcam technologies. Thus, to play the game, players must engage vision, hearing, and touch in purposeful action leading to this goal.
The mindfulness suggested in the game’s title refers to Francisco Varela et al.’s concept of “mindfulness,” or the “embodied everyday experience” whereby “the mind [is led] back from its theories and preoccupations, back from the abstract attitude, to the situation of one’s experience itself.” Cognition, from this perspective, is inextricably linked to “embodied action[,] . . . the kinds of experience that come from having a body with various sensorimotor capacities” as well as the way “individual sensorimotor capacities are themselves embedded in a more encompassing biological, psychological, and cultural context” (173). Citing the way an athlete or musician pulls together mind and body into focused action, they suggest that the practice of mindfulness does not take the person out of the body but rather places attention on the entire aspect of one’s “presence” in order to reconnect the person to “their very experience” of living (25). Such mindfulness has the potential of preparing individuals “to handle . . . mind in personal and interpersonal situations” (22).
The presenter begins with a discussion of mindfulness and its relationship to cognitive science as suggested by Varela et al. She then moves to a demonstration of the MINDful Play Environment, highlighting its structure and the way in which players interact in it. Footage from the OMSI exhibition will be used as documentation. She ends her presentation with speculation about the development of future games involving this focus on sensorimotor and mindful engagement. With the introduction of such highly physical games as Dance Dance Revolution and environments like the Wii, such explorations of play environments, cognition, and embodiment may be of interest for both designers and scientists involved in the development of serious—or mindful—games.
Project Website: http://www.nouspace.net/dene/mpe/mindful.html
Thomas Vander Wal [Roush, 2005] describes the always-on media landscape of personal communication, social networking, entertainment, gaming and news as the “info cloud.” Game play, one facet of the “info cloud,” temporarily monopolizes a player’s time, physical engagement, cognition, and even identity. How best to understand, describe and explain the manner in which people navigate and cross boundaries between the “half-real” worlds of gaming and social networking while maintaining a sense of identity?
Border/boundary theory seeks to explain the transitions and balance between the domains of work, family and “third places” and may be a useful analytical tool to apply to the “info cloud” or gaming. The work of other authors on games [Bateson 1972; Gee 2003; Goffman 1974; Jenkins 2004; Juul 2005; Salen & Zimmerman 2003; etc.] points to the limitations of border/boundary theories as currently formulated to adequately explain the dynamic of immersion in game play or the “info cloud.” Anthony Giddens (1991) looks at an even broader context where “modernity radically alters the nature of day-to-day social life and affects the most personal aspects of our experience.” Giddens’ use of the term Umwelt points to an alternative understanding of the “info cloud.” Borrowed from ethology (Goffman, 1971) the Umwelt stands for “a phenomenal world with which the individual is routinely ‘in touch’ in respect of potential dangers and alarms,” while establishing a risk free zone, a “protective cocoon” of “normalcy.” Adopting further concepts from game studies gives border/boundary theories more powerful tools to analyze the “info cloud.”
In his afterword to his play, Copenhagen, Michael Frayn suggests that Heisenberg’s Uncertainty Principle ought perhaps better be referred to as a principle of “indeterminacy” or “indeterminability.” According to this principle, formulated in 1927, it is only ever possible to know either the path or the location of an electron, never both at the same time. This creates a problem for meaning, since it indisputably disturbs a Platonic model of oppositions as its basis. As science writer Dennis Overbye shows, in the world of the very small, all pairs—waves and particles, position and momentum, energy and time—are incompatible. Because any measurement of one means that the other will be disturbed, one could say about things on the smallest scale that, on the model of Schroedinger’s cat, they must be regarded as being everywhere and nowhere at the same time.
If this is the case, then one could perhaps rename Heisenberg’s discovery the “Undecidability” Principle. In this paper, I want to suggest that the apparent nonsense of the physics of the very small is fortuitous for poststructural thinkers. Central to the philosophy of Derrida and Deleuze, as their more careful commentators have pointed out, is the concept of singularity, which, by positing an undecidability between apparent opposites, rethinks the logic of noncontradiction. Singularity can be understood, I want to suggest, on the model of quantum mechanics, not as part of an alternate universe, but as an additional one, occurring to the side of the one with which we are familiar. By thinking the singular through some of the unlikely tenets of quantum theory, poststructuralism can answer the charge that its contributions, insofar as they are difficult to think, are purely discursive and have no bearing on the physical world.
On 8 January, 1918, Paul Engelmann wrote to Wittgenstein about a troubling observation he had made at their last meeting: “It seemed to me as if you—in contrast to the time you spent in Olmutz, where I had not thought so—had no faith.”
“It is true,” Wittgenstein replied, only “the difference between myself as I am now and as I was. . .is that I am now slightly more decent. By this I mean that I am slightly clearer in my own mind about my lack of decency.” Wittgenstein then proceeds to give an unsettling name to his particular practice of philosophizing: “If you tell me now I have no faith, you are perfectly right, only I did not have it before either. It is plain, isn’t it, that when a man wants, as it were, to invent a machine for becoming decent, such a man has no faith.”
Philosophy, for Wittgenstein, is man-made, a contraption, a device—but one not designed for revealing the truth about the world. It is not, critically, a science, in that it cannot give the appearance of making or stating discoveries. A philosophical proposition is not rescued by subjecting it to objective tests; under logical scrutiny, the whole idea of ‘testing’ falls apart. Wittgenstein’s modest, radical claim was that strictly speaking philosophy is a procedure, which if designed carefully enough, can help a man who chooses to undergo it accomplish some difficult task. Becoming ‘decent,’ for instance.
This proposal is an extension of a trend promoted by a variety of contemporary writers including N. Katherine Hayles in The Cosmic Web, Susan Strehle in Fiction in the Quantum Universe, Leonard Shlain in Art and Physics, Arthur Miller in Einstein/Picasso, and Pierre Francastel in Art and Technology in the Nineteenth and Twentieth Century. Hayles, Miller, and their colleagues promote the idea that artists either consciously incorporate scientific themes into their art or do so subconsciously while focusing on other things such as style or technique.
In contrast, the present proposal reverses that emphasis and looks at science as a way to determine why certain artistic artifacts are considered beautiful while others are not. The proposal examines the traditional aesthetic theory of Kant [Kant’s epistemological/aesthetic metanarrative] and questions whether there exists scientific support for his abstract ideas.
The theme of the conference, “The CODE,” is illustrated by the underlying assumption of the presentation, that is, that the metanarratives of the Enlightenment are valid representations of the Code and, in fact, represent the Enlightenment’s intuitive understanding of unifying scientific law-like structures that are today being demonstrated by scientific observation. The proposal unites the epistemological/aesthetic ideas of Kant with the ideas of postmodern thinkers [at least as they relate to cultural determinism] and rescues the concept of the metanarrative without denying its culturally dependent nature. The metanarrative is rescued by submitting it to objective tests [in this case the tests represented by scientific aesthetics].
The Lovers: two computers, connected by a network cable, exchange “classic romantic poetry.” One infects the other with a virus and the exchange continues until communication breaks down. Sneha Solanki’s installation is just one of many contemporary digital works that play upon the conceit of the biological. Several other new media texts, such as Jason Nelson’s Dreamaphage and Melinda Rackham and Damien Everett’s carrier (becoming symborg), similarly utilize the exchange between the biological and the digital.
For the purposes of this presentation, I am interested in what happens to this play when it is translated into print. Koji Suzuki’s Ring trilogy strategically employs the viral in a number of ways: as viral video, as a deadly biological virus, and in the final twist, as an uncanny feedback loop of the digital and the biological. The multiplicity of the virus is always haunting, resulting in an effect I term the “electro-spectral.” The electro-spectral indicates technologies of reproduction (media, the viral, simulation) that are received culturally as ghosts, disembodiment, death, and other forms of haunting, including traces of the uncanny and articulations of the sublime. In this paper, I propose to explore the ways in which the viral results in slippages in the processes of reproduction not only through its ability to self-propagate, but also in the ways it utilizes electronic media, the female reproductive system, and virtuality in the processes of mutation and becoming.
Whether written or visual, conventional art forms function through linguistic signification and spectatorial illusionism. Complicating artistic and hermeneutic processes, code in art operates through embodied materialism. Most frequently we encounter art through reading and writing, but code evokes the third, more radical file permission by engaging us at the level of execution. Artists may write code, but their artistic product does not describe, it does. When code executes, material change occurs in the world. Bits flip, charges swap, electrons fly. Both Kittler and Hayles have asserted that strident materiality on one hand, and the facility for transubstantiation on the other, are code’s defining features. How, then, can artists use code to make meaning?
While akin to conceptual art and performance art, as an artistic medium code is distinct in its manner of representation. Traditional art – for reading and writing – masks the contingency of its structure by keeping the arbitrary semiotic relationship undisclosed; meaning in new media art – art to be executed – depends upon explicit, non-arbitrary connections between parts. As technologies, these new media artworks would cease to function were this connection severed; as representations, they would cease to convey meaning were it obscured. Analysis of select examples of new media art and new media practitioners’ accounts of the artistic process support this paper’s contention that a primary meaning to emerge from code’s materialism is the tenuousness of connectivity underlying technosocial experience. Rehearsing executions of connectedness exposes in each establishment of functional equivalence the material frailty of functionality.
This panel will consider some of the more nagging questions and persistent problems raised but unresolved by recent scholarship on embodiment (e.g. Hansen, Hayles, Massumi, Munster, Sobchack). Literally meaning “putting into a body from without,” em-bodiment necessarily operates against and through materials that are not its own. Embodiment requires historical contexts for its actualization. Experiences, performances, and concepts of embodiment derive from already historical, marked, contingent bodies. This year’s conference theme “code” – often suggesting the transformation and re-inscription of existing bodies from one medium into another – reminds us that disembodiment occupies a prominent place within articulations of embodiment. For these and other reasons, our panel will consider whether there can be a meaningful notion of embodiment without something “against embodiment.”
Although recent attempts to place George Eliot’s fiction within the political and epistemological contexts of nineteenth-century culture have yielded many valuable new insights, they almost invariably lay stress upon Eliot’s response to medical models of knowledge. Drawing upon recent breakthroughs in biology and physiology, such criticism suggests that the intellectual patterns of her fiction proceed from—and actively partake in—the gradual intensification of bodily knowledge over the course of the century. In this manner, Eliot’s novels are aligned with an incipient regime of biopolitical power: a new form of authority that made the regulation of the human body central to subjectivity itself.
It seems readily apparent that Eliot’s fiction is suffused with biological and physiological analogies to culture and personal identity; and the incipience of a new mode of authority in the period, which made regulation of the reproductive body central to subjugation (as argued famously by Michel Foucault), seems equally incontestable. But the characteristic aspects of the changes that Foucault theorized can only be outlined in a qualified sense within the conventions of mid-Victorian science, particularly in the ways that Eliot understood and articulated those conventions. Accordingly, this paper provides a somewhat counterintuitive reading of Middlemarch (1874). In her great masterpiece of realist fiction, I suggest that Eliot enlists recent discoveries in physics (particularly thermodynamics) in ways which actually unsettle the apparent precedence of biology in the novel. In particular, in the narrative of her scientific hero, Tertius Lydgate (who wishes to “surpass…the limits of physiology”), Eliot uses contemporary physics to emphasize the jagged and inherently uneven route towards scientific knowledge—a lesson from which recent scholarship might ultimately benefit.
French novelist Michel Houellebecq’s controversial third novel The Possibility of an Island performs, among other provocative stunts, a fierce cultural analysis of contemporary sexual relationships, a scathing examination of the current posthuman trajectory and, by novel’s end, a rant against the “nostalgia of desire” for human connection in a digital age. The narrator Daniel, cynical and increasingly in despair, has a gratifying relationship with Isabelle, who works for a magazine dedicated to defining the female body as a nubile cyber-preteen. The relationship ends in their forties when things begin to sag and neither can handle the other’s bodily collapse. Daniel then hooks up with a younger woman, but the older he gets the less he can bear her self-absorption. Invited to join a religious-scientific cult, the Elohimites, he is promised eternal life: after death he will be cloned and his old identity “downloaded,” via artificial neurological circuits, into newly reconstituted bodies. These “neo-humans,” hoping to shake off what Houellebecq sees as humanity’s obsession with sex and its propensity for cruelty and violence, spend their spare time exchanging e-mails, free from suffering if also from pleasure.
I’ll consider Houellebecq’s novel—whose humans are as highly sexed as they are cloned (or dead) by novel’s end—in light of N. Katherine Hayles’ and others’ analyses of posthuman embodiment. Deploying Richard Powers’ (less dystopian) incorporation of neuroscience and Mark Hansen’s and Anna Munster’s materialist arguments concerning digitalized consciousness, I argue that we should rethink Houellebecq’s horrific (when not just existentially fraught) imaginary. He writes that “It is in our relations with other people that we gain a sense of ourselves; it’s that, pretty much, that makes relations with other people unbearable.” Posthuman potentialities confront such anxiety, as it were, head on, but whether there can be a “meaningful notion of embodiment without something ‘against embodiment’” has yet to be deciphered.
In Gilles Deleuze’s philosophical arsenal of imaginative concepts, the most underestimated and unexplored is the notion of incarnation. The use of incarnation by Deleuze is interesting because of his staunch materialist theories; incarnation’s divine and idealist connotations have no place in his philosophy. Rather, the strict sense of in-carnation, the coming, descending into meat, is considered. If neither the religious aspect of the word nor the Platonic model of idealism enters the equation, what is the purpose of this concept of incarnation, coming into meat?
For Deleuze, incarnation is active in the domain of art and aesthetics. He first isolates this notion in his book on A la recherche du temps perdu, Proust and Signs. There, he uses the notion of incarnation to describe how the idea of love can be embodied within literature in a network of signs. Years later, the notion of incarnation comes back into his writings: this time, he uses the term to map out a materiality of art which embodies sensation. In his book on Francis Bacon, Logic of Sensation, Deleuze demonstrates exactly how the word incarnation sheds its divine sense and enters the world of matter through the analysis of crucifixion paintings that depict figurative images writhing from realism into abstraction. When confronted with these images, we start to understand the full importance of incarnation for Deleuze: what is in-carnated, embodied and given a form is the unrepresentable, unintelligible image the mind makes of itself. When the mind cannot fully grasp a concept, chimeras fly over the landscape. In the time of great ocean voyages and world conquest, when the world was not fully explored, monsters were drawn on maps in regions not yet covered by explorations. This is the purpose of Bacon’s Three Studies for a Crucifixion (March 1962) in depicting furies through a hybrid technique of abstraction and figuration: to try and capture what the mind cannot fully represent to itself. Bacon paints these three furies from the perspective of an eye that cannot encompass their full shape, like an eye that wants to see the front but also the back at the very same time. It becomes apparent that what is incarnated is not something divine, but the process of the idea.
A variation on the old philosophical device of analogy, incarnation does not function through exchange but through descent. By exploring Renaissance paintings depicting the crucifixion and comparing them to Bacon’s work, we can map out this downward direction of incarnation on the surface of the paintings. And by comparing visual aesthetics to the notion of love in Proust, we can capture a concept of embodiment that is not simply metaphorical but material insofar as it gives shape to thought.
It is through the tension that arises between love and crucifixion, between Proust and Bacon, between literature and painting that a material aesthetic of incarnation will be fleshed out.
Photography: a visual language that appears to resemble reality, based on observation from a distance.
Within the field of photographic processes, however, there co-exists a very different representation: that of imprint and touch—namely, the photogram. The decipherability of photographs is almost immediate, but photograms work on a different level; they encapsulate the meeting of material and light-sensitive surface, and incorporate the mark of authenticity while producing an image which may not be immediately ‘read’.
The photogram can be said to incorporate a code all of its own: that of making touch visible. As such it functions in the manner of Gilles Deleuze’s ‘fossil’ and Walter Benjamin’s ‘fetish’ through its power to bear witness to a tactile encounter with the original object.
Maurice Blanchot’s observation ‘The game of distance is the game of near and far’ forms the starting point of my enquiry into the fundamental difference between these two modes of representation—in particular, the encoding of the aspect of touch. The sense of sight must have distance in order to function, thereby detaching the Observer from the Observed. Touch, on the other hand, needs closest proximity, physically uniting the Toucher and the Touched.
Through this tactile connection, imaging processes such as the photogram—and by extension the x-ray—challenge the Cartesian hierarchy, creating an order where spatial orientation becomes less important and the notion of haptic visuality is born.
The vocoder, or voice-coder, was built at Bell Telephone Laboratories in the late 1920s as a speech analyzer and synthesizer. Homer Dudley, the principal investigator, intended the vocoder to reduce the amount of bandwidth required for passing speech along telephone lines; it was not ultimately used for that purpose, due to the poor quality of the reconstructed voices. During World War II, vocoder technology was applied to speech encryption at the Labs; this “Project X” has received much attention from historians of cryptology as an origin point for digital communications.
I argue that this form of speech-abstraction is indebted to the body-management strategies of turn-of-the-century disability researchers. Dudley credited Alexander Melville Bell, R.R. Riesz, and Sir Richard Paget as influences on the theory underlying his vocoder—namely that speech could be described in wholly material terms, that all human speech was speech synthesis, that some aspects of the human voice were inherently “telegraphic,” and that “the basic difference of the past, i.e., that of audible versus visible material, is losing much of its significance as new circuits are developed to print the spoken word automatically and also to speak the printed word.” Melville Bell had famously devised a “physiological alphabet” for deaf oral speakers, while Riesz produced a series of laryngeal prosthetics at Western Electric. Paget was President of the British “Deaf and Dumb Society” in the early twentieth century, and created some of the first modern “talkers,” based on a dubious comparison between sign language and the innateness of “mouth gestures.” Through this genealogy, I question the constraints on “universal communication.”
The early history of AI is more engaged with questions of emotion than many commentators have assumed; the amplification and management of the affects is evident in a number of important early AI texts. This paper is part of a larger project that explores the unorthodox relations between the artificial and the affective in early AI.
This paper examines how affect was managed (inhibited) in the work and the research milieu of Walter Pitts. Focusing on his canonical paper with Warren McCulloch on the logical calculus of neural nets, this paper searches for the psychological presumptions that inform the 1943 paper: what kind of theory of mind does the paper perform? What kinds of calculating machines did it engender? I argue that affect has been devalued as an object of inquiry in the 1943 paper, and inhibited as an epistemological force in the research environment that generated this work. This paper pursues the powerful effects of such affective configurations. While overt reference to affectivity is absent from most of Pitts’ writing, the forces of affectivity are still to be found in and around this work. No less powerful for having been avoided, the affects gave shape to how Pitts (and then the rest of us) came to imagine computational bodies and minds.
Post-WWII American and British military and scientific growth created a need for new ways to organize and disseminate technical reports. The Cambridge Language Research Unit, a machine translation working group funded by American military grants, developed a unique theory of language to aid in the automatic translation and retrieval of scientific documents from large international databases. The group envisioned language as a code in which trans-linguistic—that is, universal—scientific concepts are embedded. This image of language led the group’s more ambitious members to re-conceptualize scientific growth and change as inherently linguistic. In other words, a theory of language-as-an-encoding-of-concepts became a theory of scientific-discovery-as-linguistic-change. My paper shows how this idiosyncratic translation of a philosophy of language into a linguistic reconceptualization of science directly influenced Thomas Kuhn, Mary Hesse, and their contributions to the “linguistic turn” of the 1960s.
Originating in a theoretical iteration of the (ontic) irreality of significance (i.e. of the transcendental unity of all cryptological “objects”)—that is: originating in a theoretical iteration of the (grammatologically) centrifugal experience/dissolution (sic) of (regulative) Reason as a specific phantasy (i.e. the (grammatologically) centrifugal experience (sic) of (constitutive) reason’s hingeless—dialectical transformation into an anti-Husserlian essence; the chiasmus (sic) of text text understood as a “medium/content” hierarchy)—this paper’s (speakable) trajectory will pass through a (grammatologically) centripetal perpetual-vanishing of the transcendent norms of media theory per se and—in the midst of the concomitant dis/appearance of all cryptological “objects”—end classical—dialectically in thrall to that perpetual-vanishing’s functional recuperate: that Cipher whose target is the as it were systematically occulted (ontic) irreality of significance; a Cipher then whose ontology must vary inversely as its decipherment (sic): a Cipher ontologically the plastic of its own (grammatologically) centripetal perpetual-vanishing.
Given a definition of ethicality as (that which specifically cannot be thought as) the (grammatologically) centrifugal perpetual-(re)turning of materiality such a Cipher must clearly be accounted a non-ethical (since still specifically normative) appearing “of” ethicality—that is: the normative/non-ethical/ideological—obstructive “in” ethicality. As such it must stand further as materiality’s lone plastic. — Now it is eagerly to be hoped that on this basis a praxical ethics may somehow be generated. This paper will conclude by validating this hope and adumbrating the now-linable research by which alone it may be solved.
The paginated form of the codex book implies a visual division of information that has had important repercussions in the sciences, arts, and literature. This panel proposes to explore the historical implications of the page as form. It will focus on the changing epistemological and aesthetic features of the page in the period following the invention of printing in Europe, emphasizing problems of information storage, organization, and display.
In the seventeenth and eighteenth centuries in Europe, historians experimented with many new forms of information design. In particular, they developed a variety of new graphic interfaces for chronological information. Chief among these were the numerical table and the graphic timeline. The chronological data table, though more than a millennium old, took on new importance after the invention of the printed book. The codex lent itself particularly well to this form of information organization, and after 1500, dozens of new works appeared reinterpreting the old manuscript system. But by the late seventeenth century, historians had grown frustrated with these proliferating tabular compendia. The late seventeenth century saw important efforts to reimagine the printed page through experiments in structure, size, and visual design. The printed page was shrunk, stretched, and manipulated. Eventually, questions were raised about the use of the codex itself as a primary mechanism for representing chronology. By the middle of the eighteenth century the scroll had reemerged as an important competing form in this area of print culture. Ironically, it was the scroll, rather than the codex, which pointed forward in this area of information design.
The pages of Pierre Bayle’s Dictionnaire historique et critique (1697) encode a complex relationship among the spheres of textuality, typography, history, and philosophy. A massive compendium of histories, biographies, commentaries, and philosophical arguments, this work has a multilayered philosophical structure that proceeds by way of citations, remarks on citations, citation of remarks, as well as citation of citation and its relationship to mis-citation. This philosophical and argumentative structure is materialized through a correspondingly variegated mise en page, with multiple layers of marginal notes and footnotes supplementing a structure of folio columnar remarks to a ‘primary’ text that already gestures outside of itself to the maelstrom of texts circulating in the Republic of Letters. In this paper, I propose that attention to the relationship between this typographical instantiation of the Dictionnaire and its philosophical strategies is a productive way to think about the material history of the production and circulation of philosophical discourse at the end of the early modern period and the dawn of the Enlightenment. It offers, that is, a way to think about the nature, structure, and encoded materiality of the philosophical page.
Renaissance emblem books have always been sites where readers have had to negotiate multiple layers of significance and intersecting semiotic codes. In this paper, I am going to explore how one such book—George Wither’s Collection of Emblems—employed two radial devices known as volvelles in order to structure readers’ aleatory encounters with its synoptic contents. By examining how this book was used as both an object and an instrument, we can begin to understand how Renaissance readers were trying to fold the complexity of experience into the technologies of the codex, symbol and printed page.
In this panel, we will discuss how codes borrowed from genetics, chemistry, biology, and mathematics may transform the structure or expand the referential potential of poetry; we will also show how poetry can undo and remake the codes that it integrates. In examining a number of poetic forms—in poems by ourselves and by others—we will speculate on how scientific and mathematical codes are linked to wider systems of representation when they appear in poetry, as well as the ways they connect to broader systems of social critique.
This paper will explore the relationship of Marianne Moore’s poetry to early twentieth-century zoological codes. Because of her own rigorous scientific education and her heavy use of museum materials, Moore’s interest in and knowledge of the methods of zoological systematization were profound. Her insistence on the use of the “type species”—a system of thought and reference about species in which, as Lorraine Daston puts it, “code articles” become “applied metaphysics”—is important to understanding how she figured and remade the image of the animal in relation to the human. Insofar as Moore used the zoological code for “type specimens”—the jerboa rather than a jerboa, for example—to argue that cultural differences between humans should be acknowledged as politically salutary, she manifested a biologically essentialist reading of those differences. But insofar as those same animal types were complicated by their representation as individualized, cyborgian models of efficiency and beauty, Moore remade zoological codes as a heuristic stay against the homogenizing forces of capitalism.
Through precise and shifting mathematical codes, Inger Christensen’s IT expands out from the syllable into a world as complex as it is on the verge of dissolution. Published in Denmark in 1969 and embraced equally by political protesters and politicians, IT feels especially important today. From the perspective of code-tracing and making poems, my essay considers ways in which syllables collide and accumulate, according to math and to chance. Can the path be traced from “it” to epic? How does one make a moving force, and then how tightly must the pieces fit to keep creation afloat?
Paranoia is a condition of assuming the world is rigidly encoded; the paranoid loses his sense of humor, since the coding of humor is temporary, flexible, and social. The code which cannot be broken is the most intriguing of all; that which cannot be read becomes its own decoding, hence its meaning. I intend to consider in order to reject first the flaccid, uninteresting, intentional use of borrowed mystery in poetry, imitating the cachet of the mysterious look and sound of terms and formulae, then consider the use of coded elements which remain unreadable and yet “mean” by that very fact. I intend to look at some of my own writing as well as poems by Jena Osman and Cole Swensen. The etymology of code (codex, caudex, trunk of a tree, wooden tablet, book, code of laws), figures also in this thinking.
My poem “Desequencer” takes the sequence of the human genome for its determining principle, substituting for “form” the term “code.” Just as the information contained in any DNA sequence is—in its abstracted, sequential form—incapable of specifying the complicated processes of transcription and expression that produce proteins, “Desequencer” expresses (or translates) given DNA sequences according to shifting, even capricious procedures. In this, the poem’s argument with ideological uses of genetic science figures its own creativity as analogous to the minimum degree of agency within a deterministic structure, whether this structure is biological, discursive, psychoanalytic or economic. In addition, as much as the poem is an argument with biological determinism, it is in dialogue with certain strands of structuralism and post-structuralism that, in their most unsubtle forms, imagine the subject as an ensemble of aftereffects produced by a “symbolic order” (Lacan) or “ideological state apparatuses” (Althusser). In the last thirty years, reception of these ideas by artists and writers occurred alongside a turn to procedural or process-oriented art and writing—in language writing, minimalist sculpture, structuralist film, conceptual art, OuLiPo—that aimed to materialize and then deconstruct these obscured symbolic orders. More recently, an expressivist turn among poets indebted to these earlier modes—Lisa Robertson, Kasey Mohammad, and Erin Mouré, for example—has shifted energies away from the activity of “laying bare” these codes to the invention of novel ways of actualizing them: that is, the elaboration of a micropoetics. My paper will historicize this shift and its philosophical stakes while discussing “Desequencer.”
Jeremy Narby, who holds a doctorate in anthropology from Stanford University, works as a director of Amazonian projects. His books, originally written in French, have been translated into English: The Cosmic Serpent: DNA and the Origins of Knowledge (1998), Shamans Through Time: 500 Years on the Path to Knowledge (2001), and Intelligence in Nature (2005).
Narby claims to have discovered a new signification of the genetic code in the hallucinations of shamans—and in his own—under ayahuasca, a psychoactive infusion widely used in Amazonia and especially linked with shamanic religions. According to him, these shamans could, by this hallucinatory way, obtain a universal bio-molecular knowledge that reaches the structure of DNA. Shamans would access an intelligence, which they say is nature’s, and which would give them information about the pharmacology of plants in close correspondence with molecular biology. One of the most important images during the hallucination is the “Cosmic Serpent,” which for the author means more than a resemblance to the double helix of DNA. Drawing a link between molecular biology and the knowledge of shamanism through hallucination, the anthropologist underlines analogies and correspondences between DNA and the “animated essences” common to all forms of life which appear in this hallucinated way of knowledge. Between hypothesis and extrapolation, between anthropology, neurology and molecular biology, he contributes to the imaginary of the genetic code, with a finalist and vitalist conception of it.
But this extrapolation linking mythological shamanic conceptions and the genetic code has become influential in the contemporary imaginary, as in the numerous internet sites devoted to the theme, or in literature with the novel Babylon Babies (Paris: Gallimard, 1999) by the French “posthuman” writer Maurice G. Dantec. Part of the novel rewrites Narby’s idea of a correspondence between the “Book of Life” of the genetic code, the twisted serpents and shamanic knowledge through hallucination. The success of the novel contributed to the spread of Narby’s ideas.
Like the link described in Lily E. Kay’s book between the genetic code and the I Ching, Jeremy Narby’s hypothesis belongs to the numerous extrapolations that constitute the contemporary imaginary of the genetic code.
Eighteenth-century preformation theories seem so naïve in comparison with contemporary biology, especially when we consider the sophistication of the genetic code. But one should not be too quick to dismiss eighteenth-century proto-biology. The eighteenth century was a period rich with puzzle-solving activity in reproduction and generation. Amid this scientific pluralism, the work of the Swiss naturalist Charles Bonnet (1720-93) comes closest to articulating the new direction biology was to take. This may seem contradictory in light of his description of preformation, a theory whereby all potential forms are contained within a single germ. Yet as Thomas Hankins rightly pointed out, his revival of preformation strangely prefigures our contemporary theories in genetics.
Seeing beyond what microscopes revealed, Charles Bonnet theorized a secret mechanism lying at the heart of reproduction. Like a lock’s secret mechanism, the puzzle of reproduction could never be seen, yet was essential to the reproduction of species. Bonnet clearly identified this secret mechanism as a necessary abstraction, a necessary code.
My talk contextualizes Bonnet’s work within the puzzle-solving activity of eighteenth-century theories of reproduction. My close reading of Bonnet’s work reveals not only how his secret code prefigures biology, as Thomas Hankins’s helpful rereading of Bonnet has shown, but also how Bonnet applied a similar theory to the field of psychology, where the code now revealed desire. Indeed Bonnet’s genius lay in his ability to locate and translate the code, to abstract and interpret from what he saw.
Gregor Mendel’s historic discovery of dominant and recessive traits led to the formation of modern genetics. Within this scientific paradigm, visible evidence was no longer proof positive of DNA structure and its physical expression. Rather, genetic influences could pass unseen and unknown from one generation to the next, emerging as patterns within larger hereditary networks. My paper explores the recessive trait as a key formation for the science of genetics and, by extension, for the systems of social organization and control that have arisen around this coded vision of life. Within genetics, the recessive has been an important nexus for the study of race and populations, with ramifications in fields ranging from medicine to political theory to eugenics. The recessive trait is framed as a dangerous secret within the genetic code that must be detected, deciphered and deleted for the well-being of both individual and social bodies. As such, this concept of the recessive serves as a useful site at which to uncover and re-evaluate the relation between the private and the public. Just as the secret is posed as the limit of the public and as that which is thus constitutive of private life, recessive traits are secrets encoded in the fabric of life itself. Mediated by and inscribed in the body, they illustrate the problems of knowledge within the social body surrounding the play between the visible and the invisible, the normal and the pathological, and the contingent and the controlled.
“How does the body see itself?” is the question posed by Stelarc, a performance artist who allows his own body to be manipulated and controlled by machine. Stelarc’s work, like much art focusing on the postmodern, extended, decentralized body, focuses on the extensions—on external technology and how our bodies interact with it—rather than on our bodies in isolation as systems of biological relationships. But the biological body itself has been decentralized and expanded, and is thus increasingly incomprehensible to us. New scientific knowledge forces us to think about the body in new ways, but they are ways to which most of us don’t have access. DNA sequences, for instance, are a meaningless code to us as thinking beings; we cannot recognize an individual by looking at their code. By contrast, descriptive anatomy of the medieval and early modern periods analyzed the body literally as a microcosmic echo of the larger universe. Pre-modern anatomy contained narrative code about the body’s functions that allowed us to situate ourselves within the context of the macrocosm, itself allegorical in nature. It is this understanding that has had to give way to scientific understanding of biological functions. On this shifting ground, symbolic science—transcendental anatomy, so to speak—has been relegated to superstition or to poetic imagery; but that very shift has made the literary tradition of medieval natural history a deep and rich vein upon which to draw, to comment on our modern relationships to our body and to the larger world.
In the early months of 1946 the German Surrealist artist Max Ernst (1891-1976) and his new wife, the American painter Dorothea Tanning, settled into a new home in Sedona, Arizona. During the following six years Ernst created a series of tiny landscape paintings that he called “microbes.” Ernst’s time in the desert, and the way he investigated his surroundings through these small but potent artworks, are the focus of the paper I would like to deliver at the 21st Annual SLSA Conference. Within the minuscule format of his “microbes” Ernst manages to encode the specifics of the desert—its vast scale and vistas, its particular colors and textures, its abstract patterns. My paper seeks to investigate the codes embedded within this distinctive body of work: translations from large to small, from three dimensions to two, from real to artistic vision, and from nature to culture.
In my presentation I will argue that Max Ernst was, in many ways, uniquely qualified to appreciate, absorb and reflect the culture and landscape of the American Southwest. From his early interest in collecting Kachina figures and Northwest tribal art (an interest shared with many Surrealist artists), to his continued pursuit of capturing reality through abstraction in his paintings, Ernst showed a long-lasting interest in the spaces, places and peoples of the American Southwest. Through an investigation of Ernst’s “microbe” paintings I would like to delve into the ways that he scientifically reco(r)ded and artistically represented the desert landscapes of the American Southwest.
Is our ecosphere being altered by Genetically Modified Organisms built for profit margins without authentic oversight or risk assessment? If the technology for genome sculpting of new-style humans becomes a possibility, what role, if any, will imagination play in shaping our future kindred? What can we know about animal sentience and non-human awareness? How are artists taking these factors into account as they try to express themselves through living collage? As new biological comprehension sprouts new technological processes, what are the overt and covert roles of creativity in the decisions about which traits get embedded into whose new bodies? These are today’s major issues emanating from the intersection of Art and Biology.
Full Paper: http://www.ciac.ca/magazine/archives/no_23/en/dossier.htm
Bioart in Question: http://www.ciac.ca/magazine/archives/no_23/en/entrevue.htm
Video Document: Teaching Transgenic Embryology as Art, University of Leiden 2007: http://www.we-make-money-not-art.com/archives/cat_labs.php
In this panel each of the participants will discuss a particular form of reframing the genetic code, according to the different projects, but all based on cases in which (visual) artistic practices, a visualising of the code, plays a major role. Partly because of the different backgrounds of the panellists (art history, philosophy, medical biology, microbiology), translations and interpretations between systems of signification are bound to differentiate the meaning (or epistemological importance) of the code and of codes, sometimes in a multipartite way. This panel explores that field of continuous ‘re-writing’, ‘re-playing’, ‘re-imagining’, of juggling metaphors, and embodied information.
Cor van der Weele is a molecular geneticist and philosopher who has worked extensively on metaphors. Anne Kienhuis is a molecular biologist and, since February 2007, the scientific manager of The Center for Arts & Genomics (Leiden). For this panel they will outline the theoretical framework and hypotheses of their research, titled Imagining Genomics: introducing visuality in the genomics debate. The first hypothesis is that changing boundaries between traditional fields (art, science, philosophy, education) are associated with changing relations between words and images. The second hypothesis, which builds on various theoretical leads, is that artistic imagery can contribute to the quality of public moral debate in genomics precisely because of the specificity of the visual: it can overcome dichotomies that tend to be very persistent in (verbal) philosophy and ethics. The emphasis will be on the various ways in which moral agendas are affected by visual art. One theoretical perspective is an approach in cognitive psychology called ‘dual coding theory’ (Paivio; Sadoski and Paivio), from which we learn that words and images address us differently, representing different (sequential and non-sequential) systems. Alongside the analysis of existing practices, the project will also develop an experimental case of its own: a collaboration between an artist and a scientist, undertaken not only to study the process involved, but also to study the workings and effects on the (moral) debate of the resulting exhibition (to be held at the natural science museum Naturalis in Leiden).
As a medical biologist and philosopher, Ellen ter Gast is particularly interested in gut responses to the visual images that surround biotechnology. Those responses dwell in the extremes of monstrosity and beauty, specifically in the case she is working on: the boundary work done by mouse biotechnologists and the images that emerge from it. If our moral judgement about human and mouse biotechnology is preceded by an aesthetic judgement, does this also imply that aesthetics is a legitimate way to make a moral assessment of the future of human biotechnology? Is this purely a matter of taste, or can something of general moral relevance be derived from the aesthetic nature of our moral judgements?
Art historian Danielle Hofmans is working on the specific artistic traditions from which (so-called) bioartists work. Most sci-art research pays little attention to that background, assuming that bioartists create a new in-between, or superseding, field in which artistic traditions have no significance at all. Most artists themselves are not eager to pursue this question. Until now, it has proved difficult to establish which concepts, metaphors, theories or methodologies are ‘useful’ for describing and understanding the practice of bioartists, as most research on this topic homes in on the science involved. Instead of asking how scientifically apt the artistic work is, Danielle Hofmans understands the work as a specific re-coding in the field of modern and contemporary art.
Miriam van Rijsingen will discuss the spatial instances of the genetic code in visual art. As an art historian she is especially interested in performativity in art and the embodied knowledge ‘attached’ to it. Visualisations of the genetic code, in images and material installations such as we encounter in interactive artworks, provide a spatial environment in which the genetic code becomes embodied. The main question is: how should we understand that space/environment in relation to the code? One of the ultimate questions under consideration is whether and how embodied knowledge about the genetic code can bounce back to science.
In 1917 Margaret Sanger launched The Birth Control Review, which for the next twenty years would publish articles and stories, often with a eugenicist slant, about contraception and women’s sexuality. Starting with the Review’s fiction, my paper seeks to trace a genealogy of how juridical codes aimed at (white) women’s reproductive capacities shifted, beginning with the emergence of the Black Codes in the South and then the failure of American Reconstruction. (Interestingly, before abortion was outlawed, it was mostly white, middle-class women who sought abortions.) In the aftermath of the Civil War, a shift had to occur to discipline white women’s bodies not to reproduce with freed black men, men who were now legally recognized as fully human. From 1865 to 1876, as the Freedmen’s Bureau was instituting policies to aid emancipated blacks and failing miserably within a politically contentious U.S., the American Medical Association and social reformers were striving to outlaw any form of abortion and women’s access to contraceptives. My paper argues that abortion became criminalized at precisely the moment when the state began to fear what white women might generate if their bodies were not controlled and confined. This turn required a shift in the norms (laws) that encode gendered bodies, particularly those codes embedded in reproductive options, which until the late nineteenth century contained no abortion taboos. While Sanger’s eugenicist birth control politics seem to rub jarringly against some of the African American fiction she published, the two discourses both demonstrate how technologies of sex, technologies that shape the forming bourgeois subject, are intricately connected to technologies of race and to laws that encode blood and nation.
This paper seeks to crack the code of familial language in American politics. American politicians obsessively explain and justify their policy decisions in terms of language about family. This paper explores the way that ideas about family interact with citizens’ political predispositions, creating the environment for political elites’ family talk to either succeed or fail.
The paper shows that it is possible to identify a sort of neo-authoritarianism in American attitudes. This neo-authoritarianism is associated with the very popular idea that having and raising children is the best way to have a fulfilling or meaningful life. The paper examines the extent to which people agree that raising and caring for children is what makes life meaningful, and the political attitudes associated with this belief. The paper shows that most Americans consider having children crucial to leading a fulfilling life. The paper goes on to show that this belief is associated with lower feelings of social trust, less warmth toward racial outgroups, less critical attitudes toward the elites running major American institutions, and lower levels of political participation, even when controls are introduced for ideology, party identification, and a host of demographic variables. This cluster of attitudes bears a strong family resemblance to the original “authoritarian personality” conceived of by Adorno et al. The idea that having children makes life meaningful, however, does not sound authoritarian to the ear, and turns out to be largely distinct from authoritarianism when looked at empirically.
These panels will examine the contemporary state of Gaia theory discourse from two primary angles. The first panel will investigate theoretical developments in Gaian science: its links to systems science, its status in the mainstream geoscientific academy, and its contributions to the climate-change debate. The second panel will put Gaia theory into wider cultural perspective, by drawing out its rhetorical resources in several millennia of Western literature and science, and by marking its incentive for creative artistic responses. We hope to underscore the vitality of Gaian science—the challenges it poses and encounters at the cutting edge of our complex posthuman nonmodernity.
James Lovelock, the founder of the Gaia construct in modern biogeochemistry, has used eight metaphors other than Gaia for the whole of nature. Five of these additional images are older than Homer: the music of the spheres, the fabric of life, and nature as a tree, a machine, and a great flowing spring, all of which are common in European scientific parlance. These other holistic metaphors have never been attacked in critical comments on Lovelock’s work, and they enhance rather than discourage acceptance of the Gaia image by mainline scientists because they are part of standard scientific language and ideology; yet they do not motivate significant hypothesis-making for Lovelock. Literary evidence suggests that Lovelock’s Gaia is descended not from the Hellenistic image of Mother Nature but from the ancient scientific macrocosmic–microcosmic analogy combined with the image of the active globe. The latter is helpful to mainline scientists because it is a standard trope in modern biogeochemistry (and science fiction), whereas the macrocosm seems vitalistic both to the mainline scientists who reject it and to ecofeminists, new age religionists and other modern vitalists. Ironically, the macrocosmic analogy did not seem vitalistic to Lovelock, because of his medical research experience. Thus most of Lovelock’s rhetorical choices of whole-earth images have probably significantly aided his appeal to mainline scientists, but they are not central to his scientific imagination, which is dominated by Gaia: not, in his mind, a personification, but a version of the geocosm, the self-managing planet.
I will discuss my own path to Gaia, and explain how Gaia Theory has come to inform my musical compositions. First I will situate Gaia Theory as a new manifestation of an old subterranean thread of history, from Lucretius to Leonardo, in which art and science were closely connected. I will extend that thread to Lovelock in our own time, and discuss how the new science could be used as the basis for a new aesthetics. I will then play excerpts from my oratorio Gaian Variations, and describe how the new science of Lovelock and Margulis influenced the work on several different levels.
How do we see ourselves as defined through biological codes (genomic science)? How do we re-define and read ourselves through literary codes (narrative structures)? In The Human Condition, Hannah Arendt considers the nature of life in terms of a relation between the biological and man-made aspects of human existence. She draws attention to the potentially catastrophic consequences of increasing human scientific and technical knowledge and asserts that the question of what to do with this knowledge is not only scientific, but also political. According to Arendt, the human condition is also distinctly literary: “the realm of human affairs...consists of the web of human relationships wherever men live together...[this] web ‘produces’ stories with or without intention” (Arendt, 184).
My presentation first considers Arendt’s theory as a framework especially suited to reading Faustian narratives, which employ literary form to posit the relation between the biological and the manmade, think through advantages and limits of progress and invention, and consider the impact of technology on the “web of human relationships.” I then focus on Margaret Atwood’s vision of a post-apocalyptic world brought about by a Faustian figure in Oryx and Crake in order to think through how genomic science structures the way we see ourselves in the present and how we imagine our future. I argue that Oryx and Crake is a text that works through a “bioliterary” code to theorize the balance—and the consequences of imbalance—between the biological, technological and literary elements of the human condition.
Popular science writers face a challenge when trying to re-articulate or decode the language of science so that it becomes accessible to a non-specialized audience. In this paper I explore the work of several popular natural science writers, including David Quammen, Natalie Angier, and Jonathan Weiner, all of whom work to familiarize readers with subjects that may seem alien or frightening (beastly nature, disintegrating ecosystems), or familiar but still mysterious (the female human body). I draw on Jeanne Fahnestock’s rhetorical analysis of science writing, which examines the “genre shifts” that occur when scientific writing is translated for a non-specialized audience. Fahnestock notes that Aristotle distinguished three types of persuasive speech: forensic, deliberative, and epideictic. Scientific papers argue for the validity of the observations they report, and while they retain elements of the other two modes, they remain primarily forensic. Popular science writing is forced into a genre shift, becoming epideictic (i.e. celebratory), owing both to the nature of scientific knowledge and language and to the knowledge base the writer shares with his or her non-specialist audience. Angier, Quammen and Weiner investigate science in ways that are both accurate and lyrical, bringing the reader to a better understanding of nature through a celebration of the science investigating it.
The homology Gregory Bateson draws between the processes of learning and species evolution in Mind and Nature specifies six criteria for mental processes. Among them is the assertion that within both types of “mind,” “the effects of difference are to be regarded as transforms (i.e., coded versions) of the difference which preceded them.” That perceptual processes in particular are, as a rule, initiated by the registration of external difference is an assumption inherent to the current understanding of sensory processing. If the transformed effects of stimuli difference amount to a kind of encoding at the threshold of mind, decoding them is a mirror (in the sense of reverse) dynamic in which transformed differences are newly distinguished within another systemic whole. This paper interprets the post-retinal neurological processes that initiate the visual system’s creative rendition of the world as a sort of deictic emergence that deploys strategies common to lyric poetry, strategies that, according to critics such as Jonathan Culler, define the genre.
“A poem is a small machine made of words,” said William Carlos Williams. A poem, says Cecilia Vicuña, “is an animal, sinking its mouth in the spring.”
A cybernetic animal mediates these poetics: the same animal digitally populates advertising and films, is sampled in electronic music, emerges in the ‘biomimicry’ of design, gets designed in the biotech lab, and is encoded into our language as poetic structure.
If cybernetics offers a lingua franca for interdisciplinary theorists seeking to connect science with popular culture and literature, the animal becomes a cybernetic commonplace, especially as we near Cenozoic extinction.
I argue that poetry, read from the standpoint of a ‘cyborg’ ontology that is not predatory or given to reductive equivalence, illuminates the troubled boundaries between animal, machine and human. As instances of cybernetic ‘autopoiesis,’ letters, syllables, words and phrases can emerge as animals.
Asked to ‘try her hand’ at an ‘Oriole,’ Emily Dickinson sent her correspondent a ‘Humming Bird’—one of her most difficult compositions. Even while their poetics privilege inorganic models, Marianne Moore, Lorine Niedecker, Francis Ponge, Ronald Johnson and Christopher Dewdney write animals into poems, restoring autotelic movements and intensities to language.
Like the digital crabs that swarm a commercial, or the insects droning at a rave, these animal rhythms are structured but not organized by the mechanics of inscription. Within the very fabric of their extinction, animals trace the affect that cannot be subsumed to human purpose. Only as machines, speaking or reading animals, are we moved to save them.
My paper explores the interplay between influences of cohesion and expansion in contemporary poems: the factors that congeal them and those that render them porous to multiple interpretations. As a third, pivotal factor I consider the periodicity between these centripetal and centrifugal elements. In addition to their relation to language and signification, these rhythms often also reflect the biological process of breath.
The model of autopoiesis introduced by biologists Humberto Maturana and Francisco Varela distinguishes two complementary sets of processes that constitute life: those through which an organism maintains its organizational integrity and those through which it exchanges with its environment, for instance to receive sustenance. In approaching a poem from the perspective of autopoiesis, I am interested in the balance of centripetal and centrifugal elements: those that are auto-referential and those porous to multiple interpretations. I take the word “germinal” from what contemporary poet Gustaf Sobin calls the “germinal circulation of letters” (“Testament” 14-15). As seed germination creates biological form, the interconnections between words and images create the forms we call poems. Their organization is dense, recursive, or magnetic enough that they coalesce as a poem. At the same time, their suggestiveness generates multiple resonances that seep, pulse, or explode outwards, inviting connection with multiple discourses and readers.
In some poems, the fluctuations between centrifugal and centripetal pulls reflect a tempo related to breathing. This biological periodicity provides an underlying shared context that can add to a poem’s compelling qualities.
This paper examines the ways in which medical and popular texts discursively fragmented and estranged bodies by reading them as electrical. In contrast to readings that insist that the body-electricity analogy familiarized electrical power to the American public, I contend that nineteenth-century constructions of bioelectricity ultimately dis-integrate the body into dynamic networks of localized, electrically-coded points.
Even doctors like George Miller Beard (1839-1883), who were skeptical of the electricity/body nexuses that became available in nineteenth-century urban America, probed and mapped patients’ bodies with electrical apparatuses in order to reveal their secret negative and positive encoding. Thus, in medical and popular literature, the abstracted body of the potential medical patient became coded as a site of invisible, often frightening power, that could only be decoded, and thereby managed, by the electrically-literate expert.
Spurred by John Hunter’s dissections of fish in pursuit of their “electric organs,” by Joseph Priestley’s invention of the “eudiometer” to test for the “goodness of the air,” and, of course, by Franklin’s work on electricity and gravitational pull, interest in “electric medicine” steadily expanded in late eighteenth-century America and Europe. From this came “animal magnetism,” the theory, popularized by and closely associated with Franz Anton Mesmer, that blocked electrical flows within the body caused suffering and could be dispersed by a skilled magnetist. In 1792, Wollstonecraft cited mesmerism as one of the “instances of the folly which the ignorance of women generates” to which she devotes chapter 13 of her Vindication of the Rights of Woman. In a significant departure from the Vindication’s predominant reliance on reason as the measure of a genderless humanity and as the grounds for the critique of women’s socialization to dependence, cunning, and decorative beauty, Wollstonecraft denounces mesmerism not as irrational but as impious. “Do you then believe that these magnetizers, who, by hocus pocus tricks, pretend to work a miracle, are delegated by God, or assisted by the solver of all these kind of difficulties—the devil?,” she writes. Mesmerism, claiming to heal the unrepentant, is “little short of blasphemy.” Wollstonecraft’s faith-based approach to mesmerism, and its place in one of the key documents of Enlightenment feminism, suggests that mesmerism may have owed its popularity in part to the ways it enabled the simultaneous valorization of faith and skepticism as valuably progressive. I argue here that Wollstonecraft’s description of mesmerism as a blasphemous pursuit of unearned health pushes us to look more closely at the normatively progressive and secular time that undergirds her pervasive deprecation in the Vindication of women, Catholics, kings, and even animals as culpably, unnaturally “childish” in their credulity.
Relatedly, close attention to the specific shape of Wollstonecraft’s denunciation of mesmerism suggests the need to revisit prominent recent studies of mesmerism, from both the history of science and literary studies, that note the ways mesmerism inspired elite fear through its associations with the French Revolution, the common people, anti-professionalism, and democracy. Are believers in mesmerism unduly, or inadequately, credulous? Is belief in science necessarily opposed to belief in God, and are the utopian temporalities of each simply linear? Wollstonecraft’s take on mesmerism helps us approach these questions and then, more tentatively, to complicate recent histories of skepticism by Richard Dawkins, Sam Harris, and Jennifer Hecht.
Edith Wharton has long been celebrated as an anatomist of the upper-class social worlds she depicts in her fiction, with their elaborate sets of codes, rituals, hierarchies, and kinship systems. The sense of a recondite, constricting, even baffling, symbolic order composed entirely of code is memorably driven home in The Age of Innocence, where “Society” is termed “a kind of hieroglyphic world” made up of “a set of arbitrary signs.” By contrast, little attention has been paid to Wharton’s evident fascination with modern technologies of communication: born into a telegraphic world, she lived to hear Hitler’s speeches on the radio, and to see the BBC establish its first television network (she died only a few months after Turing’s epoch-making paper laid the groundwork for the modern digital computer). Wharton’s lifetime thus spanned a heroic age of technological innovation and penetration into social life, and her novels and stories constitute, among other things, nuanced explorations of the place of the new electric media in the modern world. I propose to discuss the relationship between social and media networks in Wharton’s fiction, focusing particularly upon the trope of coding in The House of Mirth, The Custom of the Country, and The Age of Innocence. I will consider, for example, the affiliation in her work between society and the electric telegraph (itself associated with arbitrary codes) which helps to maintain its order, as well as the quixotic desire on the part of her protagonists for a medium putatively unmediated by codes.