There are four elements to successful writing instruction using computers: the equipment, the software, the pedagogy, and the teacher. These four form a chain with four possible weak links. If the computers will not handle the proper software, if the software will not support the proper pedagogy, if the pedagogy demands something the computer cannot deliver or resists the powerful things the computer can deliver, or if the teacher does not have his or her heart in it all, then the whole effort collapses. Yet of these four elements, the one that seems most at the center of good computer-based writing instruction is the software. Here the choices the teacher makes are most clear-cut and can prove most disastrous.
The reason for giving software such importance is that of the
four elements of successful computer-based writing instruction,
software is at once the most ideological and the most unforgiving.
The equipment itself, the computers, is largely evolving toward
similar capabilities--windowing, mice, networks--no matter what
the manufacturer or model. Teachers and pedagogy, on the other
hand, can adjust in midstream, responding quickly to failed situations
or sudden new possibilities. Computer software, however, no matter
what flexibility it may claim or what ability to accept "user
definition" or modifying parameters, can never escape the
instructional attitudes and even the ideology of its programmers.
The range of attitudes and ideology demonstrated by current instructional software is considerable and not at all obvious. Drill and practice software, for instance, which accepts only narrowly defined responses, is sending the message that language can be assumed to be formulaic and binary, right or wrong, and that any student's idiolect which allows syntactic and pragmatic choices that the program won't accept is dysfunctional. To be sure, many teachers present such a binary attitude all the time, but students understand all human judgments to be, to some degree, opinion and cushion themselves against any particular "teacher's pet" rights and wrongs. The computer, however, especially when it rejects specific input, projects the universal ethos of science itself. Many, if not most, students do not understand that computers themselves are functions of opinion and that software is as erratic, biased, and myopic as many human beings and much less able to revise its own pronouncements according to contextual considerations.
But an ideology can be implicit in software in more subtle ways than simply in projecting a simple-minded binary right or wrong attitude toward discourse. A new form of collaborative instruction, based on the network capabilities of microcomputers, allows students to send their writing to classmates in a variety of ways, which in turn allow a variety of peer responses to a text. Software written early in the development of this pedagogy (about 1985), following what might be considered the "language lab" model, often gave one of the workstations in the classroom a greater technological capability than the others. The person seated at this workstation, presumably the instructor, could monitor at will whatever was happening on any other monitor, and could actually put text onto the screen of another monitor, or even turn off or freeze the action on selected computers. The analogy here was, of course, that the instructor should operate as an editor on-the-fly and be able to view and correct any student's text during the composing act.
Few people saw anything horrific in the control that this arrangement gave the teacher; in fact, many saw the arrangement as compensating to some degree for the reduced control that an instructor normally experiences in a computer-based classroom. If the students were going to be "doing their own thing" at their monitors, then how else could the teacher ensure that the proper kind and amount of effort went into the writing, if not by electronically peering over the student's shoulder? Some people, however, perceived a contradiction in the idea of imposing such an electronic authority over a presumably networked, collaborative situation. A teacher who sits at a console and can, with a few keystrokes, review and even change the work of any student in any part of the classroom is far more intrusive and intimidating than that same teacher who wanders about a traditional classroom trying to peer at a student's work over a hunched shoulder and a hand spread wide for privacy. Later software written to support peer responses over a network made sure that no workstation in the room had any special authority over any other workstation.
Even software that apparently contains none of the dictatorial aspects of the two examples above can nevertheless be stifling in its operating assumptions. A form of instructional software developed in the early 1980s to counter the widely recognized debilitating effects of closed-ended drill and practice software was "invention heuristics" software. Based on work done by Hugh Burns (1979), this software provided students with a series of prompts or questions regarding the student's topic. The student would respond to each question, and at the end of a session, the student would have a fairly large amount of text addressing aspects of the topic, much more text than the student would probably have produced without the incremental prompting. The prompts themselves could reflect a particular heuristic scheme or view of reality (e.g., Aristotle's topoi or Kenneth Burke's dramatistic pentad) or simply an attempt to elicit ideas metaphorically ("imagine your topic is a machine; describe its design and function"). There was nothing particularly clever about the computer-based interactive questionnaire, but computers provided a mechanism that supported it in ways that pen and paper could not, although a number of similar pen-and-paper methods have appeared over the years ("cubing," etc.).
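The mechanism of such software can be sketched simply. The following is a minimal illustration of the interactive-questionnaire idea, not a reconstruction of Burns's actual program; the prompts and function names are invented stand-ins, and only the accumulate-through-prompting mechanism is the point.

```python
# Hypothetical prompts, loosely in the spirit of the metaphorical
# elicitations the text describes; not an actual heuristic scheme.
PROMPTS = [
    "State your topic in one sentence.",
    "Imagine your topic is a machine; describe its design and function.",
    "Who disagrees with you about this topic, and why?",
]

def run_session(answer_fn, prompts=PROMPTS):
    """Put each prompt to the student and accumulate the responses.

    `answer_fn` stands in for keyboard input. The accumulated pairs are
    the session's yield: more prose, addressing more aspects of the
    topic, than most students would produce unprompted.
    """
    responses = []
    for prompt in prompts:
        responses.append((prompt, answer_fn(prompt)))
    return responses

session = run_session(lambda prompt: "a few sentences of student response")
print(len(session))  # → 3: one accumulated entry per prompt
```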
The problem was, again, that the computer tended to imbue whatever it presented with a scientific finality. Many students who probably would have accepted invention schemes based upon the tagmemic matrix of Richard Young, Alton Becker, and Kenneth Pike or the warrants and claims of Stephen Toulmin felt an uncomfortable element of compulsion when responding to computer versions of the same heuristic. Instructors who responded positively to the cleverness or efficiency of a particular software action that they themselves had chosen to use often didn't recognize that those students whom they required to use the action reacted strongly to the computer itself, giving the computer a validity of presentation (and hence an authority and implicit threat) that the instructor couldn't fathom.
The problem demonstrated in these examples is the major problem with instructional software itself: All too often, the instructor who selects and uses particular software is responding eagerly to the actions the software promises, but only as the instructor would imagine those actions occurring in a noncomputer environment. Although computers often add interesting levels of action that are simply unimaginable before being experienced, they also often add an unanticipated validity to processes (and implied values) that would be, from the point of view of the programmer, simply choices made from competing alternatives.
Seen this way, instructional software is not merely a set of instructions
that drive computers (and sometimes users) but a subtle platform
of belief which can, by manipulating a student's operating environment,
carry implicit messages regarding behavior or viewpoints that
remain invisible even to writing professionals trained to recognize
ideology in written texts. This is not to say, of course, that
software is inherently a bad thing or that computers are inherently
manipulative in a negative sense, although some make such claims.
It is in the nature of humanists to harbor deep suspicions regarding
technology, and especially regarding a technology that has often
made tenuous and even silly claims to reproducing and improving
human intelligence. But if all aspects of social existence display
various rhetorics--conscious and unconscious structures of belief
and influence--then all is equally ideological. Our task is not
to seek ideologically neutral texts or teachers or mechanisms,
nor simply to reject texts, teachers, and mechanisms out of hand, but
rather to recognize the beliefs and perspectives all must contain.
If necessary, we must also balance those beliefs and perspectives
against others clearly perceived and rationally defined.
Those who consider the future of education often worry that the humanizing influence of teachers themselves will disappear beneath an irresistible juggernaut of systems and technologies and machinery. Such a future is a real possibility, even a likely one, when those who fear it, those who best understand the dangers, do everything they can to fight change. But industrialization and electronic and communication technologies have proven too compelling to society at large for there to be any chance of turning back the clock. Nor can teachers in the humanities effectively counsel "going slow," for the pace of change will be determined by a complexity of forces that simply overwhelms those few voices. Better than to resist or retard change would be to understand change, to learn to understand the technologies that are either already in our classrooms or at least at the door, and to ensure that such technologies are selected and implemented according to our informed beliefs and values. The key to our using the computer in writing instruction without losing our identity as writing instructors lies in the software we choose.
Word processing remains the principal use for computers in English
departments, and yet a room full of computers can do much more
for instructors and students than simply provide high-powered
writing tools. The problem is that effective computer-based pedagogies
often require a considerable shift in thinking from traditional
instruction, a shift so great that most instructors cannot, intuitively,
imagine useful instructional purposes for computers aside from
word processing, or the uses they can imagine and attempt to implement
fail spectacularly because the computer is being asked to do things
it doesn't do well. But to use microcomputers only for word processing
wastes money and equipment as well as instructional possibilities.
The Task-Analysis Fallacy
One of the major traps for those who might be examining the use of computers as pedagogical tools can be seen in what I call the "task-analysis fallacy," or the tendency of just about everybody in the early days of instructional computing to assume that the best computers could do was to take what had always been done and do it better, faster, and cheaper. In other words, the early developers would analyze work processes or instructional tasks and write software that would facilitate those processes or tasks. Hence, writing instruction ended up with a lot of crude software in the early 1980s that tried to grade papers, mark errors, tutor students, test deficiencies, or simulate/model expert behaviors.
The task-analysis fallacy arose out of two mistaken ideas about
computer capabilities, one too optimistic and the other too pessimistic.
The first was the idea that computers could duplicate human activity
and could replace much of what human instructors were doing, especially
in regard to evaluating writing. This might be called an "artificial
intelligence" use of the computer for instructional purposes,
and ties in with computer-based style-checking and self-paced
tutoring. The second mistaken idea was that computers could only
replace the agents of current processes, that computers themselves
constituted no possibility of real change or major challenge to
the processes themselves or to an understanding of what writing
instruction is and should be. Actually, the more one uses computers
for instruction, the more one understands one's own precomputer
methods of instruction. The best aspect of trying to use computers
in significant ways in the classroom is that it forces the instructor
to reconsider what she or he has been doing all along. It makes
the familiar strange, and when teachers look at their last 5 or
10 years of teaching from the perspective of a stranger, they
can see the cracks and fissures and youthful assumptions that
have become inherent and invisible over the years. In this way,
the mere attempt to use computers imaginatively can open up the
question of what writing instruction itself is and what heretofore
invisible alternatives the writing instructor has.
The Promise and Reality of Artificial Intelligence
Yet, in the early 1980s, the attraction of artificial intelligence tended to overwhelm other possibilities of computer use. The computer was presumed to function as a semi-intelligent assistant to the beleaguered teacher, hence the early, ubiquitous term computer-assisted instruction, or CAI. That the computer might assume some of the "low-level" burdens of the writing instructor--objective testing, drill, grading, and remedial tutoring--proved to be both easily understood by instructors and devilishly seductive. The artificial intelligence use of the computer ultimately failed, however, and severely disillusioned a number of experimentally minded teachers along the way.
The problem lay in the computer's inability to employ natural language. Computers can process text only in the most superficial of senses; computers cannot grasp the meaning in text and therefore are helpless in evaluating the rhetorical elements that modern composition studies hold to be the most important in producing effective writing: audience, purpose, tone, and context. Even in evaluating what was once presumed to be the most rule-bound element of writing, grammar and usage, computers have shown themselves to be limited. But it is still hard for those who see language syntax as analogous to mathematics to realize that correctness is not so much a function of rule as it is a function of writer-reader context and all the complex ambiguities attendant on context. Style checkers, or programs that presumably catch grammar and usage errors in writing, don't really catch errors. They hold up yellow flags whenever the crude, rule-based trapping mechanisms of the software are triggered by the occurrence of one or two conditions that the programmer has determined might indicate an error. For some such programs, at certain prescribed levels of diligence, the word affect will always trigger a warning, for affect is eternally confused with effect in writing and the odds are that a fair proportion of uses of affect will prove erroneous.
For many writers, the warning that affect could possibly be an improper substitution for effect is useful. But for the vast majority of student writers, who haven't the slightest idea of how to mentally test the propriety of affect or effect, the warning is not a yellow flag, but a flashing red beacon and a klaxon blaring out the ancient truth that writing itself is just a minefield with a 99% chance of error, one of those revenges on youth that adults seem to delight in. The computer, instead of being the liberator that professional writers have universally discovered it to be, has become one more tyrant in the classroom, just waiting for somebody to slip. When style checkers do succeed, they succeed in spite of themselves, usually at the hands of strong teachers with long-time computer experience and a realistic understanding of how computer software can flag an error. Far more often, inexperienced teachers or those with unrealistic ideas of what computers can do (whether unrealistically optimistic or pessimistic) botch the use of style checkers, experience confusion in the classroom, and--as always--blame the computer itself.
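The "trapping mechanism" described above can be made concrete in a few lines. This is a minimal sketch of the rule-based flagging such checkers perform; the trigger words and warning messages are hypothetical illustrations, not drawn from any actual product.

```python
# Hypothetical trigger list: every occurrence of these words raises a
# warning, correct usage or not -- a yellow flag, not a judgment of error.
TRIGGER_WORDS = {
    "affect": "'affect' is often confused with 'effect'; check which you mean",
    "effect": "'effect' is often confused with 'affect'; check which you mean",
    "irregardless": "nonstandard; consider 'regardless'",
}

def flag_warnings(text):
    """Return (word, position, message) for every trigger word found.

    Note that the mechanism has no sense of context: it cannot tell a
    proper use of 'affect' from an improper one, so it flags them all.
    """
    warnings = []
    for position, token in enumerate(text.lower().split()):
        word = token.strip('.,;:!?"()')
        if word in TRIGGER_WORDS:
            warnings.append((word, position, TRIGGER_WORDS[word]))
    return warnings

sample = "The drug did not affect the outcome, but its effect on morale was real."
for word, position, message in flag_warnings(sample):
    print(f"word {position}: {message}")
```

Both uses in the sample sentence are correct, yet both are flagged: exactly the behavior that turns the yellow flag into a flashing red beacon for the student who cannot test the warning mentally.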
Another widespread use of the presumed artificial intelligence capability of the computer was that of self-paced tutoring. In the huge majority of cases, self-paced tutoring software represents nothing more than text flashed in front of a user, usually with a sort of "hypertext" process that allows the user to jump to this or that explanation, quiz, or tutoring process. The tutoring process is most often just an explanation (i.e., " 'to affect' means to influence; 'to effect' means to cause as a result, with the following exceptions. . ."), followed by a test question of some kind. If the student chooses properly, then the program goes on to another explanation and mini-test. If the student chooses improperly, then the program branches back into a repeat of a previous explanation, or better still, a previous explanation reworded so as to appear fresh. The advantage of this sort of process over a traditional workbook is obvious: Students aren't moved along a linear track of ingestion. There is, presumably, some ability to match progress through the material to what the student can successfully regurgitate. More specifically, the program can test assimilation at small increments, matching the pace of the presentation to the demonstrated pace of ingestion.
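The advance-or-branch-back logic just described is simple enough to sketch directly. The lesson content and field names below are invented for illustration; only the mechanism, matching the pace of presentation to the demonstrated pace of ingestion, is the point.

```python
# Hypothetical two-unit lesson: each unit carries an explanation, a
# reworded repeat for a second pass, and the expected mini-test answer.
LESSON = [
    {
        "explain": "'To affect' means to influence; 'to effect' means to bring about.",
        "reworded": "Use 'affect' for influence and 'effect' (as a verb) for causing.",
        "answer": "effect",
    },
    {
        "explain": "A claim needs a warrant connecting it to its evidence.",
        "reworded": "The warrant is the bridge from your evidence to your claim.",
        "answer": "warrant",
    },
]

def run_tutor(lesson, responses):
    """Walk the lesson against a canned list of student responses.

    A wrong response branches back to a reworded repeat of the current
    unit; a right response advances to the next unit.
    """
    responses = iter(responses)
    transcript = []
    unit, repeat = 0, False
    while unit < len(lesson):
        item = lesson[unit]
        transcript.append(item["reworded"] if repeat else item["explain"])
        if next(responses) == item["answer"]:
            unit, repeat = unit + 1, False
        else:
            repeat = True
    return transcript

shown = run_tutor(LESSON, ["affect", "effect", "warrant"])
print(len(shown))  # → 3: explanation, reworded repeat, next explanation
```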
Computerized self-paced tutoring has been successful in other areas of instruction, notably in those that stress memorization of facts or rules or those that employ computer simulations of physical events. But the facts of language use have been shown, through years of linguistic study, to be not all that factual, and the complexities of language use belie any attempts to simulate real discourse on a machine which has no personal history and cannot interpret contextual elements with anything near the sophistication necessary.
Few people today challenge the theoretical use of the computer as a teaching or tutoring assistant, for the robotic possibilities of computers remain too strong in the popular imagination, and conversely, the notion of the computer completely transforming writing instruction into an entirely unrecognizable form seems too threatening. Much of the complaint that has accrued over computer-assisted instruction and the use of computers as grading or tutoring devices ("too mechanical," "inaccurate," "cold-blooded") has little to do with the abilities or inabilities of computers themselves, but has much to do with the limited assumptions and expectations regarding writing instruction which generate software that seems almost bound to disappoint.
If the predicted gains in natural language capability on computers
had come about, the artificial intelligence use of computers as
surrogate teachers and tutors might have moved beyond the crude
software of the early 1980s into the often envisioned teaching
machine that would do all the dirty work for the overburdened
teacher. Despite more than 30 years of effort, artificial intelligence
has foundered on the problem of implanting into computers the kinds
of experience that provide a linguistic context capable of
handling the extraordinary contextualism of natural language.
All was not lost, however, for those who supported the use of
computers in writing instruction. By the mid 1980s, several important
technical advances, such as the rapid increase in computing power,
more-powerful programming languages, and cheap local area networks
(LANs), were allowing computers to do more tasks more imaginatively
and were encouraging instructors to look at computers for writing
instruction as something more than severely limited human beings.
The Cognitivist Approach and Personal Writing Environments
Some software developers considered the success of word processing and concluded that if software could be written that successfully supported the writing act, then software could be written to support the act of thinking itself. In general, such software attempted to define the principal components of good thinking and then create software actions which would externalize in computer-constructed environments some of its more burdensome components. An analogy would be the abacus or the mechanical or electronic calculator. Another way of thinking about such individual writing environment software is that it tried to construct on the screen an intermediate step between the ineffable workings of the brain and their highly structured manifestation in writing.
The simplest kind of individual writing environment software comprised programs that supported various forms of outlining. This software included mechanisms which made listing and then categorizing and then adjusting elements of a topic or proposition relatively easy. The user simply followed some scheme of entry which didn't require any prior organizing and then let the computer assist in adjusting and expanding the elements. By allowing the user to work incrementally and then minimizing the pain of eventual assembly, organization, and expansion, individual writing environment software relieved the writer of much of the toil of manipulating ideas mentally or manually with note cards and clumsy outlines. Other software, mentioned above as invention heuristics software or the interactive questionnaire, used prompts or questions to direct the user's input. The prompts could be rather general, like the "who, what, where, when, and how" of journalism, or they could reflect the inquiry principles of sophisticated epistemological schemes such as Toulmin's forms of argumentation, Burke's dramatistic pentad, or the tagmemic matrix of Young, Becker, and Pike. Again, as in outlining software, large and complex ideas are accumulated from incremental entries. A third form of individual writing environment software sought to bring together all the elements of serious composition, including easily manipulable graphic representations of the "nodes" and "links" of how minds represent knowledge, means of accessing databanks and relevant sources, idea matrices, preliminary notes and drafts, even peer or teacher comments, all displayed in various windows on the screen for easy cross-reference.
The presumption of such individual writing environment programs is that while long-term memory is capable of storing all the elements necessary for most writing, short-term memory has a difficult time managing these elements in a way that allows easy cross-indexing and the most effective appropriation of relevant materials. If thinking is indeed dependent on categorization, then individual writing environment programs can give the writer useful sets of electronic pigeonholes and can provide an organizing principle for collating and rationalizing the material. The main problem with individual writing environment programs is that the more flexible the ability to categorize, the greater the difficulty in managing the software and the more frustrated the writing student (and instructor). The tighter and more directed the organizational principles of the program, and therefore the easier it is to use, the greater the danger that the student's mind is being forced in directions it does not want to go. The solution to the problem of too great or too narrow a flexibility would be to let students try a variety of programs and then let each choose the program that satisfies each person's cognitive disposition, but this is hardly practical. Programs that seek to externalize some of the functions required of short-term memory may in fact be forcing certain students into composing processes alien to them, no matter how natural and intuitively correct such processes may have seemed to the programmer and to the enthusiastic instructor who bought the software.
Many instructors who rejected early instructional support software
did so for precisely this reason, that most such software appeared
to systemize cognitive processes in ways which indeed privileged
one kind of composing at the expense of other kinds of composing.
Fortunately for those who, again, persisted in their enthusiasm
for computer-supported writing instruction, technology once more
rode to the rescue by providing an entirely new computer-based
classroom action, one which promised to satisfy the objections
of those who recognized the reductive nature of computer-based
drill and practice, style-checking, tutoring, or individual writing environments.
Social Construction and Electronic Networks
By the mid 1980s, technical advances were allowing even inexpensive microcomputers to be networked to each other. Networks seemed, to those familiar with post-structuralist linguistic assumptions and the effects of community on discourse, to support the new instructional paradigm of collaborative instruction. An entirely new kind of software for writing classes emerged, software to help writers find readers over a network. This "computer-mediated communication" use of computers, in contrast to the artificial intelligence use, didn't attempt to imitate human capabilities, by pointing out errors or trying to tutor students, but acted instead as a means of moving information in text, over wires, to other people sitting either across the room or across the country. Such networked computers were "text telephones." Text not only could be sent over wires, but it could be stored in various ways, cut and pasted, formatted and reformulated, and--most importantly for writing instruction--pondered. The presumption behind using networks for writing instruction was that in order for instructors to promote writing as a rhetorically significant act, one employing writers and readers as co-constructors of knowledge, the instructor had to use writers and readers in a continuing concert of activity. Therefore the instructor must move written text quickly, cheaply, and manageably within the confines of the classroom.
Students using computer-mediated communication and "groupware" spend most of their time interacting with other students, usually in three distinct ways. First, students engage in communal brainstorming by informally exchanging ideas and issues using e-mail and electronic discussion. Following this invention or prewriting stage, they distribute drafts of documents to other individuals, groups, or to the entire class. Finally, they react, in text, to those drafts, and send the reactions or critiques back to the writers and sometimes on to others in the class in order not only to provide criticism for the writer but to model critiquing skills. The groupware (software which directs the "text traffic") manages the movement of the messages, comments, drafts, critiques, and other bits and pieces of shared text in such a highly social situation. What results from the instructional use of computer-mediated communication and groupware is not so much a "teaching" by anybody but more the creation of a socially oriented learning environment promoting less a passive reception of knowledge and more an active generation of skills. Groupware, to be sure, represents a quantum leap from software developed through the old task-analysis process, and it can create a frightening paradigm shift for the classroom teacher. The teacher does not present rules and guidelines for class assimilation, or spend much time testing that assimilation, but rather helps individual students solve problems that arise as a result of the insistent demands of writing and reading, give and take.
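The "text traffic" a groupware layer manages can be sketched as a simple routing structure. Everything below is a hypothetical illustration (the roster, group names, and class methods are invented), assuming the three destinations the text describes: an individual, a group, or the entire class.

```python
from collections import defaultdict

class TextTraffic:
    """Minimal sketch of groupware routing: deliver shared text to an
    individual, a named group, or the whole class."""

    def __init__(self, roster, groups):
        self.roster = set(roster)
        self.groups = groups                 # group name -> set of members
        self.inboxes = defaultdict(list)     # recipient -> [(sender, text)]

    def send(self, sender, to, text):
        if to == "class":
            recipients = self.roster - {sender}
        elif to in self.groups:
            recipients = self.groups[to] - {sender}
        else:
            recipients = {to}                # a single classmate
        for recipient in recipients:
            self.inboxes[recipient].append((sender, text))
        return sorted(recipients)

net = TextTraffic(
    roster=["ana", "ben", "cho", "dee"],
    groups={"peer1": {"ana", "ben"}},
)
print(net.send("ana", "peer1", "draft one, please critique"))  # → ['ben']
print(net.send("ben", "class", "my response to the draft"))
```

The instructor here directs no traffic at all; the routing itself sustains the brainstorm-draft-critique cycle the paragraph describes.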
Such network pedagogy challenges the notion that writing is either strictly formal or strictly personal. "Formalists" argue that classroom time is better spent by the instructor's presenting and the students' learning the forms of language, and "expressivists" argue that network-based collaborative approaches are too group-oriented and deny the writer individual space for personal discovery and insight. Students themselves seem both frightened and intrigued by the prospect of having many readers and of reading the attempts of their peers, but seldom do they report at the end of a term the feeling that the intense sharing of text has somehow challenged their individuality or privacy. The resentments felt toward network-based pedagogies usually come from teachers who have had little experience with networks and groupware and who make assumptions about the nature of writing slanted toward the belletristic, the aesthetic, and the psychological.
Some instructors use networks to promote actual group writing, or single documents having multiple authorship. But far more people use networks simply as communal brainstorming tools and as means of publishing drafts throughout a classroom for peer feedback. The accusation that such classroom publication creates a groupthink implies that a similar practice among professional writers, distributing a work-in-progress to one's peers for prepublication reactions, also promotes groupthink or somehow stifles individuality. Classroom distribution of drafts over a network often stimulates a practical individuality by allowing the writer to have a solid knowledge of how the writer's thesis and supporting ideas and tone actually affect a number of readers. Supporters of network-based pedagogies claim that such first-hand experience of an audience promotes individuality, or a sense of creative dissonance between the writer and the reader by exposing the stereotypical presumptions of writers who have never been read and responded to by their peers.
Nevertheless, there remains among some of those who enthusiastically
support computer-based writing instruction the feeling that the
extensive sharing and critiquing of student texts over networks
subverts a characteristic that many feel to be absolutely basic
to writing itself: the necessary privacy of the writing act.
To these people, writing is not primarily social, nor is rhetorical
power developed best by experiencing reader feedback. The extreme
openness of student writing in a networked collaborative writing
class, they feel, tends to inhibit the natural expressive desires
of the solitary writer, a belief about writing that should be
recognizable to those steeped in belletristic notions of creativity.
Hypertext: Navigating a New Literacy
For those who strongly support computer-based instruction but feel that networks may improperly challenge the necessary privacy of the writer, technology once again rides to the rescue. In the last five years, a rather spectacular category of software has captured the attention of those who primarily emphasize the mental writing environment of the individual writer, but software employing a design which seems less prescriptive in defining the writing process than previous computer-based thinking tools. This software supports hypertext, a term coined by Ted Nelson in the mid 1960s to refer to nonsequential writing, or writing that allows the reader to choose words or terms in the text and pass through those words or terms into expansions or digressions. In other words, a word or term displayed on a computer screen is linked electronically to a secondary piece of text; by selecting that word or term, the reader can immediately bring the secondary text to the screen. This process can be repeated over and over, allowing the reader to progress through the document not sequentially--one paragraph or page always leading to the next in line--but through a series of choices the reader makes by opening up what are in effect windows to further text. In sophisticated instances of hypertext, the path of words that the reader follows is determined by the words or key terms the reader decides to "open up," not the path the writer has dictated from first to last, left to right, top to bottom.
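The node-and-link structure beneath such a document can be sketched in a few lines. The passages, the bracket notation for linked terms, and the function below are invented for illustration; only the reader-chosen path is the point.

```python
# Hypothetical hypertext: named passages whose bracketed terms open
# onto further passages.
NODES = {
    "start": "Rhetoric shapes every [technology] we bring into the classroom.",
    "technology": "By technology we mean networks, [software], and machines.",
    "software": "Software carries the attitudes of its programmers.",
}

def read_path(nodes, choices, entry="start"):
    """Follow a reader's choices through the document.

    Each choice names a bracketed term the reader decides to 'open up';
    the resulting path is the reader's own, not a sequence the writer
    dictated from first to last.
    """
    path = [entry]
    current = entry
    for choice in choices:
        if f"[{choice}]" in nodes[current]:
            path.append(choice)
            current = choice
    return path

print(read_path(NODES, ["technology", "software"]))
# → ['start', 'technology', 'software']
# A different reader who opens nothing reads only the entry node.
```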
In effect, a hypertext document will probably contain many more words (or constituent screens or texts) than a reader will choose to read (or even realize exist), and the order in which facts or ideas are read can be idiosyncratic and presumably uncontrolled by the writer. The power of hypertext is that it seems to reflect the hyper-connective nature of thought itself and seems to avoid, to some degree, the straitjacket of linearity imposed by the conventions of script. Rather than blithely accepting linearity or sequence as the master principle of linking ideas, hypertext documents can provide their links with a much greater significance and make the nature of the links (or the way constituent texts can be navigated) a powerfully informing though nontextual element of the document. Thus, hypertext presents the possibility of modeling nonhierarchical internal processes in an external medium, and by so doing escapes the old suspicion that much of what we do in classroom instruction works against the way that people naturally learn and think.
Unfortunately, much of what has been done in the name of hypertext is in reality the old drill-and-practice and self-paced tutoring handled in a fancy new way. Hundreds of hypertext applications have been written that succumb to the task-analysis fallacy, pasting computer-based methods onto precomputer instructional suppositions. Hypertext does present the opportunity for revolutionary instruction, however, and a few hypertext developers are producing fascinating applications. Especially important, I think, are applications that use hypertext principles as brainstorming tools. Not only can ideas be listed or framed in hierarchies (i.e., the indented categories of traditional outlines), but ideas or concepts can be linked visually in complex and overlapping ways and the links themselves can be extensively defined and described. Sophisticated hypertext authoring programs can allow students to create nodes and links (ideas and relationships) in a freer representational environment than most of the individual writing environment programs described above, allowing the external manifestation of thought to resemble more closely the (presumed) chaotic neural network activity of the brain. Hypertext used in this way can more easily bridge the gap between the terribly complex and messy operations of thought and the rigid ranks of formal prose.
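The node-and-link brainstorming described above can be modeled simply. This is a hedged illustration under my own assumptions, not the data model of any actual authoring program: ideas are nodes, and the relationships between them are links that can themselves be labeled. Unlike a traditional outline, the resulting graph need not be a hierarchy; it can contain overlaps and cycles:

```python
# Invented example ideas and relationships for illustration only.
ideas = {"memory", "metaphor", "revision", "audience"}
links = [
    ("memory", "shapes", "metaphor"),
    ("metaphor", "complicates", "revision"),
    ("audience", "motivates", "revision"),
    ("revision", "feeds back into", "memory"),  # a cycle: no single root
]

def relations_of(idea):
    """List every labeled link touching an idea, in either direction."""
    return [(a, label, b) for (a, label, b) in links if idea in (a, b)]

for triple in relations_of("revision"):
    print(triple)
```

Notice that "revision" participates in three relationships at once, something an indented outline, which forces each idea under exactly one parent, cannot represent.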
A subset of hypertext programs, interactive fiction, allows the reader to choose how to navigate through the elements of a story. Beginning, middle, and end become irrelevant, as do sequential cause and effect, exposition, and closure. How one proceeds through the novel depends on the choices the reader makes along the way. Some interactive fiction not only allows the reader to choose an alternative narrative order or point of view, but actually encourages the reader to add new words, not only to rewrite the text in the post-structuralist sense of providing a unique reading, but literally to contribute new sentences, paragraphs, and pages to the story. The most pedagogically interesting aspect of hypertexts, whether they are meant as documents to be read only or as authoring tools that encourage students to write original hypertexts, is that they challenge the students' presumptions about text and writing itself. Even if students must return to writing traditional, sequential text, as is almost always the case, they have been most dramatically made aware that the text and writing they are familiar with are in fact the results of convention, and that not only the words, or the syntax and choices of usage, but even the sequential nature of text itself is simply one possibility among alternatives. Hypertext software therefore encourages flexibility in drafting and revision without presenting heuristic rules or set procedures.
Individual writing environment programs reinforce a traditional, almost classical concept of how ideas should be generated and organized, but hypertext writing environment programs present a much more radical concept of idea invention and organization, one not to everyone's taste. Traditional concepts of good writing have always emphasized completeness, unity, satisfying closure, all ensured by the rhetorical skill and authority of the writer. Hypertext documents, on the other hand, privilege the choices the reader makes, and by so doing subvert the writer's ability to provide unity and closure. In hypertext writing, the writer is not channeling the reader through the writer's process of discovery and conviction, but is instead stimulating the reader to the reader's own discoveries and convictions. Those who promote the use of hypertext software for writing instruction feel that it is the latter process which better encourages student writers to break free of the thick, static, unengaging prose of most writing classrooms. Others feel that the apparently arbitrary nature of how hypertexts are read, and the insignificance accorded closure or controlled exposition, condemns hypertext to being a fascinating mental toy. A third, more adventurous, group feels that hypertext is not simply an inventive means of teaching traditional writing, but rather that it is a new medium for text that will eventually exist as the principal means by which words and ideas are written, stored, and read. For these people, hypertext is not simply a way of reflecting upon current text forms, but a new and better text form itself, a new technology of expression, one which should be taught to prepare our students for the future.
Simple hypertext (if hypertext could ever be called simple) has
been combined with other presentational media, such as CD-ROM
players, videotape players, and even networks, to create an extension
of hypertext called hypermedia. Hypermedia allows the
hypertext choices of the user to be put to the screen or projected
for public viewing as data from local CD-ROM and networked databases,
electronic snapshots, animated graphics, and even videotape.
Hypermedia, as a reading or presentational device, appears to
threaten the sequential nature of text less than hypertext, and
indeed resembles more a user-friendly means of assembling various
pictures, sounds, and text for what is usually thought of as audio-visual
presentation. Hypermedia used as an authoring tool, however,
allows students to write essays which include, wherever the author
wishes to place them, sounds and pictures and data as a sort of
active footnote. A student writing an essay about the civil rights
era, for instance, could include an icon at the end of a sentence
which, if selected by the reader, would bring to the screen a
videotape sequence from a speech given by Martin Luther King.
Another essay might allow the reader to click on an icon embedded
in an essay describing the American Stock Exchange and view the
up-to-the-minute Dow Jones average figures streaming across the
screen. As hypermedia increases in sophistication, the distinction
between written text and the texts implicit in every other technological
means of expression may blur, and in this sense hypermedia may
in the end represent as radical a challenge to what people think
of as writing as does hypertext. As I have argued elsewhere in
this document, no matter how disturbing the evolving nature of
text as influenced by electronic technology may be, writing instructors
ignore such an evolution at their own risk. It will happen, with
or without the approval and guidance of our discipline.
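The "active footnote" described above can be sketched as a simple binding of embedded icons to media resources. This is an illustration under my own assumptions, not the behavior of any real hypermedia authoring system; the icon labels and file names (including the King speech clip) are hypothetical:

```python
# Hypothetical bindings: an icon embedded in the essay's text maps to a
# media resource that is displayed only when the reader selects it.
footnotes = {
    "[icon-1]": {"kind": "video", "source": "king_speech.mov"},     # invented name
    "[icon-2]": {"kind": "live-data", "source": "dow_jones_feed"},  # invented name
}

def activate(icon):
    """Simulate the reader selecting an embedded icon."""
    resource = footnotes[icon]
    return f"displaying {resource['kind']} from {resource['source']}"

print(activate("[icon-1]"))  # displaying video from king_speech.mov
```

The design point is that the essay's prose remains a sequential text; the media are latent in it, surfacing only at the reader's request, which is why hypermedia threatens the sequence of text less than hypertext proper does.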
We writing instructors who write instructional software like to think that it is our idea, our programming code, our hypertext stack that will penetrate to the heart of the universal instructional problem and turn failed students around. We fervently want students' success in our own classes, and we want it in that extension of our classes, our software. But writing is not a mistake to be corrected, something broken to be fixed, a gap to be filled, or a wrong to be righted. Writing is a skill that comes out of need and practice and attempts and reactions that are experienced only by those who can taste the possibility, however slight, of victory. No software, no "method" ensures that.
But some software performs better than others. I would reserve the imprimatur of good software for that which makes the student confront the single most important problem for writers: How can I make the reader keep reading? Software that "fixes" grammar or spelling or lack of organization is negligible, in my opinion. On the other hand, software that makes a student recognize that words, even after they are written, can be beaten, pounded, and kicked into vastly different shapes and to different effects, is good software. Software which makes a student recognize that, in writing, nothing is inherently right or wrong, that beginnings can be endings, endings can be beginnings, middles can be all over the place, and the only success lies in making the reader come alive while reading, is good software.
Within that requirement exists a variety of software: software
which helps writers to invent, which supports instructor and peer
feedback, which supports data collection and collating for research
documents, which allows network distribution of texts for collaborative
instruction, which supports the instructor's development of hypermedia
presentations and courseware, and software which challenges the
student's notion of text itself. All software decisions must
be tested against the instructor's own self-conscious and informed
understanding of the goals of writing instruction.
Fred Kemp is Assistant Professor of English at
Texas Tech University in Lubbock, Texas.