NEW SPLASHINGS IN THE OLD POND: THE COHESIBILITY OF HUMANITIES COMPUTING

Abstract

The question of academic legitimacy for humanities computing is argued from an heuristic, experimental perspective. The basic approach is to privilege what happens in actual practice, to observe the commonalities and generalize from them to a picture of the interrelations between the field and the non-technical disciplines of the humanities. These generalizations also lead to discovery of intellectual kinships with several traditional fields of study: philosophy and history (including the history and philosophy of science), ethnography, sociology, literary criticism and so on. The essay argues that in these kinships are to be found the basic questions with which we may build a computing that is of as well as in the humanities.

The old pond;
A frog jumps in –
The sound of water.
(Bashô)

According to an old story, recounted disapprovingly by R. H. Blyth in his discussion of this famous haiku, the second and third lines came first – Bashô, hearing the sound in his garden, responded thus to his teacher's question about the origins of Buddhist law.[1] The 17-syllable form demanded completion, so Bashô added »The old pond«. Thus a Zen student's clever response became a poem of inexhaustible depth.

And an analogy with which I begin. I am suggesting that our fundamental response to the question of our legitimacy, which animates this issue of Computerphilologie, is to point to the manifestly vigorous activity that we call ›humanities computing‹ – the scholarly splashings about which we study – and to identify it with the origins of our Lebensform. This activity is, undeniably, a primary fact. What, then, do we make of it?

This question of legitimacy is curious. Why, after all these many years of such manifest work in so many disciplines, is it still being asked? Is it the right question to be asking? In academic and scholarly terms legitimacy means disciplinarity, but ›discipline‹ has two rather different senses we need to disentangle.

Institutionally what we mean by the word is of course bound up with the academic structure of departments, institutes, centres and the like. Thus academic legitimacy means in essence justifiable as a core activity of the institution, on a par with history, philology, philosophy, computer science and the like. Especially now, given the pressures on academic institutions to keep up with the times and simultaneously to do more with less, we need not wonder why the question is being asked. How, then, do we justify the allocation of scarce academic appointments and resources to humanities computing?

Clearly the argument at this level needs to be in institutional terms, preferably backed by institutional strategies. I give these no more attention here, since they do not survive translation across local boundaries. Even so, I observe that a receptive audience for this argument is most likely to be found at senior levels of administration. A senior administrator has the opportunity to observe different forms of research, thus to see commonalities among them, as we can. He or she is in a position to appreciate an argument for an extra-departmental, interdisciplinary humanities computing, indeed that such an arrangement offers coherence, synergy and efficiency in the deployment of human resources. He or she has the power to act and may be looking for opportunities to improve the working conditions and reputation of the academic staff. A properly institutionalized humanities computing offers such an opportunity, which in the hands of someone with administrative imagination – alas, a rare gift – can be a scholarly and professional cornucopia.

In making the argument, citing precedents, notably those within the same or an admired academic culture, may help. It will also help to understand, and so be able to defuse, common motivations for denying the field academic status. Technophobia is perhaps less of a problem than it once was, but the technological determinism at its root – which it shares with its equally problematic and still vigorous opposite, technophilia – remains dangerous. I have already referred to the myopia of disciplinarity, from which comes the quasi-Marxist argument that humanities computing will wither away into the practice of the established disciplines. We know this to be wrongheaded, but the reasons still need to be articulated: few if any active scholars outside humanities computing are able to keep up with the pace of technical and imaginative changes in our field, and such scholars, in no position to observe relevant activities across the disciplines, will miss profoundly beneficial opportunities. There are of course other problems – perhaps all the sins are involved. Envy and sloth, for example. Be that as it may, these are only cautionary remarks.

We notoriously ask, »Is humanities computing a discipline?« It might be more honest to ask, »Should humanities computing constitute a department at this institution?«, since many if not most existing departments of learning would hardly qualify under any single, rigorous definition of ›discipline‹. But for an academic institution worth the name, the question must ultimately get down to the scholarly sense of the word. However pragmatic the case, it must have a solid intellectual foundation.

I turn to the Oxford English Dictionary, which notes that

Etymologically, ›discipline‹, as pertaining to the disciple or scholar, is antithetical to ›doctrine‹, the property of the doctor or teacher; hence, in the history of the words, ›doctrine‹ is more concerned with abstract theory, and ›discipline‹ with practice or exercise.

An etymology is of course no argument, but we get much closer to one by paying attention to the old sense of discipline as practice – again, as what happens. The philosopher and historian of science Ian Hacking has shown that theory-dominated philosophy deprives experimental practice of its native integrity, reducing it to a subordinate, purely supportive role – and that contrariwise »experiment has a life of its own«.[2] Similarly, to ask of any disciplinary practice that it justify itself on theoretical grounds – particularly when the theory (of disciplinarity) remains unstated and perhaps unstatable – is to reduce that practice to a poor thing, at best the servant of one or more established interests. If, as Jonathan Culler argues, the nervous quest for theoretical coherence is profoundly mistaken in English studies,[3] misconstruing its vigorous disagreements as a sign of morbidity, how much more so in a practical field such as ours.

Let us rather take up the historical question of what happens when scholars across the disciplines of the humanities apply computing to the artifacts of study. Let us describe what we see, then try to make sense of it. The following is a brief sketch that also serves as an expanded commentary on Mapping the Field, an ongoing project to think visually about its developing state.[4] The diagram from that project is included here as Figure 1.

Immediately we are pulled back from the individual departments of learning (in truth, a liberation) to survey the uses to which our common instrument is put. As I suggested earlier, we are thus in a position, very much unlike the conventional scholar, to see the commonalities, both actual and potential. What are these? They are not artifacts, questions or theoretical approaches. Rather they are shared technological methods, for example concording, which may usefully be applied to any discursive text in any research context where the how of language is at issue. It is relevant to linguistics, literature, history, religious studies, musicology, a number of the social sciences and so on. To use a very old-fashioned term, we see scholarly activity as data-processing. We see that computational methods vary not by subject but mostly by data-type: discursive (›running‹) text; tabular (›chunky‹) alphanumeric data; images; and sound, both digitized aural data and transcribed music.
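The concording method just mentioned can be sketched in a few lines. What follows is a minimal, illustrative keyword-in-context (KWIC) listing – the function and variable names are invented for this sketch and do not belong to any particular concordancing tool.

```python
# A minimal sketch of concording: a keyword-in-context (KWIC) listing
# built from any discursive text. Names here are illustrative only.

def kwic(text, keyword, width=3):
    """Return each occurrence of `keyword` with up to `width` words
    of context on either side."""
    words = text.lower().split()
    lines = []
    for i, w in enumerate(words):
        if w.strip('.,;:!?') == keyword.lower():
            left = ' '.join(words[max(0, i - width):i])
            right = ' '.join(words[i + 1:i + 1 + width])
            lines.append(f"{left} [{w}] {right}")
    return lines

sample = "The old pond; a frog jumps in - the sound of water. The pond is old."
for line in kwic(sample, "pond"):
    print(line)
```

Trivial as it is, the sketch shows why the method travels so well across disciplines: nothing in it depends on what the text is about, only on how its language runs.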

But such commonalities and the methodological commons they define are only the raw material for an academic field. Having sketched out the commons and its interactions with the several departments of learning, we must go on to ask several further questions. An agenda for scholarly research in humanities computing (which I will not propose here) arises out of these.

What kind of activity are we observing overall, across the data-types and areas of application? The fact that the computer is essentially a modelling machine,[5] facilitating endless play with representations of knowledge, suggests that exploration and experiment provide a better answer than we tend to get from computer science, which offers ›knowledge representation‹, or KR, as it is known in the trade. (KR is a subfield of artificial intelligence that in its strong form claims eventually to produce a complete representation of all human knowledge in computationally tractable form.[6]) In a sense scholars have always worked with representations or models, that is constructs that shape knowledge of a subject in a particular way, become the basis for further research, are challenged, modified and eventually abandoned; they may be called ›schools‹ or ›theories‹, but the result is the same. But modelling per se, in the present-participial sense, was not a recognizable activity in the humanities until the computer made it trivially easy to alter a given model, hence powerfully to model. This power inflects representation with impermanence and thus shifts attention away from the notion of a settled proxy to the idea of an heuristic process.

Whether we think of ourselves as building models or making representations, the medium requires computational tractability, which means absolute consistency and total explicitness. These two brutal imperatives throw into stark relief all non-algorithmic knowledge – or whatever it is that we call the computationally unsayable. Since in the humanities we privilege the uniqueness of the cultural artifact over generalizing theory, the fundamental point of the entire practice is to raise the question of how we know what we know (rather than that we know it, although this also happens).

Thus, although we may treat ›model‹ and ›representation‹ as more or less synonymous, we are better off thinking of this practice as modelling because the present-participial force of the word is closer to what computers are good at. It reminds us simultaneously both of the impermanence and the fictional nature of what we do. The term has the additional advantage of alerting us to a wealth of quite helpful literature in the history and philosophy of science, indeed to a strong kinship with science studies, but more about that shortly.

Modelling occurs as a first-order activity when we construct a manipulable representation from the data; the construction itself raises the epistemological question. Thus, for example, designing a database will characteristically force certain interpretative decisions and so highlight them as possible research questions. Modelling also occurs as a second-order activity when a given representation is accepted for use; manipulations of it result in various models of the modelled data. Both first- and second-order modelling may be dictated by discipline-specific research problems, but not always. Some modelling in the humanities, as in the sciences, is open-ended, experimental. How do we understand what goes on then?
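The point about first-order modelling can be made concrete with a deliberately simple, hypothetical example. Even the smallest record structure – here a sketch for a corpus of letters, with field names invented for the illustration – forces interpretative commitments that, once made explicit, become possible research questions.

```python
# A hypothetical illustration of first-order modelling: a minimal record
# structure for a corpus of letters. Every field embodies an interpretative
# decision; all names and choices here are invented for the example.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Letter:
    sender: str                   # decision: one sender only? what of joint letters?
    recipient: str                # decision: the addressee, or the actual reader(s)?
    date: Optional[str]           # decision: how to record 'spring 1845?' at all
    date_uncertain: bool = False  # an explicit flag keeps the uncertainty visible
                                  # rather than silently normalizing it away

letter = Letter(sender="A", recipient="B", date="1845-04", date_uncertain=True)
print(letter.date_uncertain)
```

The demand for absolute consistency and total explicitness shows itself immediately: the schema cannot simply leave a dubious date vague, as a prose description could; it must decide how to say, computationally, that it does not quite know.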

Jerome McGann, famously borrowing a phrase from Lisa Samuels, has described the experimental side of modelling, with marvellous precision, as »imagining what we don't know«.[7] Note well: imagining, not discovering. The metaphor of discovery, Ian Hacking has argued, is urged upon us by a theory-dominated perspective, hence the proposition that we merely uncover that which has been objectively there, »part of God's handiwork« since the beginning.[8] In contrast the perspective of experiment leads us to talk about knowledge-making instruments and their role in realizing hypothetical entities by helping us manipulate them.[9] In other words, what the literary critic, taking the imagination very seriously, might call the imaginative function of these instruments.

Here it is perhaps a good idea for me to insist that I am not eliding humanities computing into experimental science. In the Anglophone tradition in which I am writing, there is no neutral idea of a Wissenschaft that the sciences and humanities share. We need to be very cautious about attributing »scientific« qualities to any humanistic study because of the distractingly honourific sense of the English word, which John Searle points to.[10] Rather I mention experimental science (and other fields) in order to suggest by example that our involvement in computing across the humanities naturally leads to fundamental disciplinary kinships, from which we have much to learn, whose questions we need to make our questions. These kinships, I will suggest further, are the way we will come to establish a computing that is of as well as in (though not subsumed by) the humanities.

McGann's use of the imaginative faculty opens further doors, indicating other, equally essential kinships. If what we don't know becomes ours by imagining the future, then what we once knew but know no longer, or not quite in the same way, we recover by imagining the past. I have argued elsewhere at length that in order properly to refurbish our intellectual forms, such as the lexicon or commentary, we need to imagine them as they were known at the time they were made – before for example every reference unavoidably called hypertext to mind.[11] The long historiographical debate beginning with von Ranke and Droysen and continuing through Collingwood and Finley is thus directly relevant to us.[12] We need it, as Michael Mahoney suggests, to write a history of computing, that is to understand the computer as an object of as well as in humanistic study.[13]

The present (that which is now and here but different) is imagined much like the past but with living realities: ethnographically, on Greg Dening's »beaches of the mind«, in hearing what is not said, because, to the other, it goes without saying.[14] This is not (safely) exotic – »Foreignness does not start at the water's edge but at the skin's.«[15] Nor is it primarily a matter of textbook technique but of ›imagining what we don't know‹ that others do. Mahoney points out that to read our computational artifacts means learning to think as the makers did or do, with things rather than with words. Computational craft-work, potentially as much a kind of humanities scholarship as essays and books, stands to be passed over unless we can develop this kind of imagination. (Both intellectually and administratively we cannot afford the loss of such scholarly output.) Humanist scholars who become designers of artifacts, as increasingly will happen, need this kind of imagination too. Important new areas, such as visualization, send us to art historians and cognitive psychologists to learn about »visual thinking«, as Rudolf Arnheim has called it.[16] Eugene Ferguson has pointed out that engineers from the Renaissance into the 19th century were trained as artists;[17] perhaps we should look into such training. Indeed, how can we grasp, for example, the importance of the KWIC concordance without the visual imagination of things?

A practice that so prominently engages with tools in order to pry into other things also needs to account for what happens when we use them. The most helpful work along such lines comes from philosophy: Michael Polanyi's doctrine of »tacit knowledge« and Martin Heidegger's Geworfenheit (thrownness), with the related ideas of das Zuhandene (the ready-to-hand) and das Vorhandene (the present-at-hand).[18] Polanyi argues that we attend from our tools in order that we may attend to the objects on which we work. Hence the knowledge we have of the things we study is always partially tacit, never entirely objective. Indeed, as Teodor Shanin has argued, models are like this, a hybrid mixture of subject and object;[19] they are, in other words, embodied ideas. Yet modelling by nature means also detaching from the tool ready-to-hand, so that it becomes present-at-hand and can be worked on. In Heidegger this occurs in a Zusammenbruch (breakdown), when something goes wrong – when, we might say, the model fails and our attention shifts back to it. Terry Winograd and Fernando Flores have sketched out such a Heideggerian scheme to explain how humans and computers interact.[20] The metaphor of prosthetics is perhaps less useful to us, as it is teleological rather than experimental and would appear to assume a more or less permanent incorporation.[21]

In any event, here are several pieces of the puzzle before us. Help may be expected from the philosophy of imagination – a newly rehabilitated topic,[22] about which, as I suggested earlier, literary criticism and the literature it studies have much to say.

An argument for an academic humanities computing must of course deal with its infrastructural strangeness: to be involved in all the disciplines, as I argue we must, serving them collegially while simultaneously being our own masters, requires a new way of thinking about interdepartmental relations. Hence we are directed to the scholarship of historians and social scientists, especially concerning problems of collaborative work. Among the most useful of examples is the historian Peter Galison's study Image and Logic: A Material Culture of Microphysics.[23] He proposes the anthropological/linguistic metaphor of the ›trading zone‹ to explain how specialists from the mutually incomprehending fields of physics, chemistry, mathematics, engineering et cetera have collaborated, for example, on the Manhattan Project. What happened, he proposes, is that they invented proto-languages or ›pidgins‹ in which specialists could assign local, sharable meaning to intellectual objects and so agree about them while simultaneously preserving the global meanings they had in their cultures of origin. I extend Galison's metaphor by introducing into the scenario a merchant-trader (or, more precisely, anthropologist-cum-merchant-trader) who moves among the disparate cultures, seeing opportunities, fostering trade – and meanwhile studying the natives for his or her own purposes. I propose that we think of ourselves as that merchant-trader.

The list of kinships, all indispensable to our disciplinary self-awareness, goes on, but I will stop here. Some kinships doubtless remain to be discovered. The rough map provided as Figure 1 suggests others but is obviously also incomplete; it is already in need of numerous revisions. Comments are welcome – indeed, they are the point. In this as in so many other things, our commitment is not to right answers or correct maps but to the communal, conversational process, which has no final product but with care and hard work may map out a reliable trajectory. The map as provided already helps with two of our goals, however: to provoke thought toward the design of a PhD in humanities computing, so that the next generation of computing humanists is better served; and to give at a glance answers to most of the questions to which this essay responds.

As I have implied, however, neither its omissions nor those of the essay thus far address the question of what should not be part of humanities computing. There are certainly activities of the juvenile period we need to put or give away, chiefly ›desktop support‹ and other survivals of the computing centre help-desk. These are not a serious worry, however. They become obviously not what we are about the more vigorously we pursue the expanding intellectual horizon. At the same time, I do regard the impulse to create a separate departmentalized discipline (which we might cacophonously name ›humanistic informatics‹), with its flight to some theoretical high-ground, as profound error. In their uses of computing, the disciplines of the humanities furnish us with unending opportunities for intellectual field-work as well as mind-expanding collaboration, and the good work we do there, in collegial service, yields invaluable friendships.

Let us put our minds to the cohesibility of our uncertainly bounded field. Let us identify the old questions to which it gives new room and search out our intellectual kinships wherever they may be.

Willard McCarty (London)

Prof. Dr. Willard McCarty
Centre for Computing in the Humanities
King's College London
Strand
London WC2R 2LS
U.K.
willard.mccarty@kcl.ac.uk


(16 October 2002)
[1] Reginald Horace Blyth: Haiku. Tokyo: Hokuseido 1949, vol. 1, pp. 277-279.
[2] Ian Hacking: Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge: Cambridge University Press 1983, p. 150.
[3] Jonathan Culler: Framing the Sign: Criticism and its Institutions. Oxford: Basil Blackwell 1988, pp. 41-56.
[4] Willard McCarty/Harold Short: Mapping the Field. See <http://www.kcl.ac.uk/humanities/cch/allc/reports/map/mapping.html> (16.8.02).
[5] James H. Fetzer: The Role of Models in Computer Science. In: The Monist 82 (1999), pp. 20-36.
[6] John F. Sowa: Knowledge Representation: Logical, Philosophical, and Computational Foundations. Pacific Grove CA: Brooks, Cole 2000.
[7] Jerome McGann: Radiant Textuality: Literature after the World Wide Web. New York: Palgrave 2001, pp. 105-160.
[8] Ian Hacking: Representing and Intervening, p. 225f. (footnote 2).
[9] Ibid., Part B.
[10] John Searle: Minds, Brains and Science. 1984 Reith Lectures. London: Penguin 1984, p. 11.
[11] Willard McCarty: A Network with a Thousand Entrances: Commentary in an Electronic Age? In: Roy K. Gibson/Christina Shuttleworth Kraus (Eds.): The Classical Commentary: Histories, Practices, Theory. Leiden: E. J. Brill 2002, pp. 359-402.
[12] For von Ranke see Anthony Grafton: The Footnote. A Curious History. London: Faber & Faber 1997, chap. 2-3. Otherwise: Günter Birtsch/Jörn Rüsen (Eds.): Johann Gustav Droysen. Texte zur Geschichtstheorie. Göttingen: Vandenhoeck & Ruprecht 1972; Jan van der Dussen (Ed.): R. G. Collingwood. The Idea of History. Rev. Ed. Oxford: Oxford University Press 1993; M. I. Finley: The Use and Abuse of History. London: Pimlico 2000.
[13] Michael S. Mahoney: Issues in the History of Computing. In: Thomas J. Bergin/Rick G. Gibson (Eds.): History of Programming Languages II. New York: ACM Press 1996, pp. 772-781; see also <http://www.princeton.edu/~mike/computing.html>. (16.8.02)
[14] Greg Dening: Readings/Writings. Melbourne: Melbourne University Press 1998, p. 85ff. and passim.
[15] Clifford Geertz: Available Light. Anthropological Reflections on Philosophical Topics. Princeton: Princeton University Press 2000, p. 76 and passim.
[16] Rudolf Arnheim: Visual Thinking. Berkeley: University of California Press 1969.
[17] Eugene S. Ferguson: The Mind's Eye. Nonverbal Thought in Technology. In: Science 197 (26 August 1977), pp. 827-836.
[18] Michael Polanyi: The Tacit Dimension [1966]. Repr. Gloucester, MA: Peter Smith 1983. Martin Heidegger: Sein und Zeit. 18. Aufl. Tübingen: Max Niemeyer 2001.
[19] Teodor Shanin: Models and Thought. In: T. S. (Ed.): The Rules of the Game. Cross-disciplinary Essays on Models in Scholarly Thought. London: Tavistock 1972, pp. 1-22.
[20] Terry Winograd/Fernando Flores: Understanding Computers and Cognition. A New Foundation for Design. Boston: Addison-Wesley 1986.
[21] See, for example, the argument of evolution through tool-use in Merlin Donald: Origins of the Modern Mind. Three Stages in the Evolution of Culture and Cognition. Cambridge, MA: Harvard University Press 1991.
[22] Nigel J. T. Thomas: Imagination. In: Chris Eliasmith (Ed.): Dictionary of Philosophy of Mind. 2001. <http://www.artsci.wustl.edu/~philos/MindDict/imagination.html>. (16.8.02) See also his forthcoming article Mental Imagery, Philosophical Issues About. In: The Encyclopedia of Cognitive Science. London: Macmillan 2002. See also the site <http://www.calstatela.edu/faculty/nthomas/mipia.htm>. (16.8.02)
[23] Peter Galison: Image and Logic. A Material Culture of Microphysics. Chicago: University of Chicago Press 1997.