Jane Zeni Flinn
Since 1984, the Gateway Writing Project (GWP) at the University of Missouri-St. Louis has been researching the role of computers in teaching writing. As project director, I worked with elementary and secondary teachers on case studies and on collaborative research. We wanted to know how students used electronic writing tools, and especially how they approached revision. We needed to capture the whole composing process--including the false starts, the accidental deletes, and the pauses for rereading. With pen-and-paper composing, we could simply ask writers to save their drafts and make all changes with a single strikeout in contrasting pen. With word processing, however, all in-process changes would vanish in electronic amnesia.
I therefore commissioned the development of software to record every keystroke and then replay a composing session on the monitor for the writer to view and discuss. The procedure was borrowed from a current project at the University of Minnesota (Bridwell and Duin, 1985), but the software and hardware were adapted for pre-college students. COMPTRACE, as we call our program, was designed by programmer John Oberschelp as a modification to the MILLIKEN WORD PROCESSOR (Thomas & Thomas, 1985) for the Apple IIe (see our companion article). COMPTRACE proved highly successful in our work with writers much younger than those in Minnesota. It served two quite different functions: recording everything in a text helped trace the revision of writing; replaying a composing session for an interview helped trace the revision of thinking.
COMPTRACE was field-tested with two sixth-grade teachers whose classes had regular access to computers. Both teachers had first participated in the GWP through the 1980 summer institute in the writing process at the University of Missouri-St. Louis. They returned to the university in summer 1984 for an additional institute on teaching writing with computers.
These teachers were part of a larger research team of GWP teacher/consultants who logged observations of their own students, collected folders of writing in various stages of completion, and met monthly to share insights, problems, and solutions. Our sample, then, included only experienced teachers who were writing specialists and were aware of the computer's potential as a writing tool. This was a deliberate bias. We did not view the computer as a "treatment" or as a magical writing machine. We saw it as a support to a good composition program. As project director, I visited classrooms throughout the school year, gathering field notes and interviews. The sixth-grade teachers invited four writers of various ability levels to assist in the research. These children became the subjects of case studies of the writing process.
At the start of the project, I explained to the case study subjects that we needed their help in understanding how they went about writing and revising with the computer. Word processing was a new tool for student writers, and while I thought it was a good one, I did not know exactly what its benefits might be nor what new problems students might encounter. Would they help by keeping all their drafts and printouts in a folder, sharing some of them, and
telling me the story of how they developed a particular piece of work? I added that I had a special word-processing disk which could record as they wrote and then give an "instant replay" of their writing--just like a replay on TV of a sports event. All four students quickly agreed to cooperate (and obtained parental permission).
During the course of the year, these four students wrote stories, poems, letters, and descriptions while COMPTRACE recorded their keystrokes. An important feature of the program is that it does not in any way interfere with the process it records. For this reason, it is appropriate even for young and inexperienced writers. We made separate copies of the modified word-processing program for each case study writer, each copy marked with a student's name. When our subjects wrote, they booted up their own programs and inserted their own file disks as usual. An adult needed to be present only at the end of the composing session, to switch disks and type in the commands to save the keystrokes. The keystroke file holds up to 30 minutes of composing--more time than most students were able to write at the computer in any class period.
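The recording scheme can be pictured with a short sketch. This is purely illustrative, in present-day Python: the real COMPTRACE was a modification to an Apple IIe word processor, and the class, method names, and sample keystrokes below are all invented for the sake of the illustration.

```python
# Hypothetical sketch of the recording idea, NOT the actual COMPTRACE
# internals: keystrokes are appended to a timestamped log until the
# session's fixed time budget (the article's 30-minute cap) runs out.

MAX_TENTHS = 30 * 60 * 10   # 30 minutes, measured in tenths of a second


class KeystrokeRecorder:
    def __init__(self, max_tenths=MAX_TENTHS):
        self.max_tenths = max_tenths
        self.log = []   # list of (elapsed tenths, key) pairs

    def record(self, elapsed_tenths, key):
        """Store a keystroke unless the session budget is exhausted.
        Returns True while still recording."""
        if elapsed_tenths > self.max_tenths:
            return False
        self.log.append((elapsed_tenths, key))
        return True

    def replay(self):
        """Yield keystrokes in order -- the basis of the 'instant replay'."""
        yield from self.log


rec = KeystrokeRecorder()
rec.record(3, "H")
rec.record(5, "a")
print(list(rec.replay()))   # [(3, 'H'), (5, 'a')]
```

The essential design point survives the translation: recording is a pure append, so it cannot alter or slow the writing it captures, and replay simply walks the same log back in order.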
Using COMPTRACE, then, does not disrupt the normal activities or schedule of the writing-workshop classroom. Unlike laboratory research with thinking-aloud protocols (Swarts, Flower and Hayes, 1984), COMPTRACE does not require taking students away from the supportive context in which they are learning to write. Keystroke records can even be made while two writers compose a collaborative draft.
After their composing sessions, the case study writers met with me individually to view and discuss the replays. Sometimes this happened just after the children finished writing; more often they returned during lunch, recess, or the next day's class for interviewing. Scardamalia and Bereiter (1983) point out that instead of "subjects," case study children should be viewed as "co-investigators," people who have access to important information. They also suggest that children have some concrete task to discuss in an interview--rather than such general questions as "How do you decide what to write?" We found that sixth-graders, while watching their words replay on the screen, became quite articulate subjects for case-study interviews.
COMPTRACE proved highly effective in guiding our sixth-graders to recall their own cognitive decisions. The blinking cursor and the
scrolling text showed the focus of the writer's attention as well as the order in which changes had been made. While watching the display, children could recall and share the story of their own writing experience. These interviews were then taped and used for "retrospective protocol analysis" (Bridwell and Duin, 1985), a close study of the writer's thinking and decision-making process.
An equally helpful application of COMPTRACE is the ability to save and reproduce an exact record of the revision process at the computer. COMPTRACE has a "List" command that produces a printout of every keystroke the writer makes in a composing session. Each letter is printed, and symbols take the place of commands such as "Control" or "Delete." These data run vertically down the page, with the timing in tenths of a second listed next to each keystroke. The time records show where in a text the writer paused and for how long, simplifying the kind of research done by Matsuhashi (1981) and others on pausal patterns.
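The pausal analysis that the "List" printout enables can be sketched as follows. This is a hypothetical reconstruction: COMPTRACE's actual file layout is not documented here, so the log format, the command symbols, and the pause threshold below are all assumptions made for illustration.

```python
# Illustrative only: an invented (elapsed tenths, key) log in the same
# spirit as the "List" printout -- one keystroke per line, letters printed
# literally, commands shown as symbols.

KEYSTROKES = [
    (3, "H"), (5, "a"), (7, "r"), (9, "r"), (11, "y"),
    (58, " "),            # long gap: the writer paused to reread
    (60, "i"), (62, "s"),
    (130, "<DELETE>"),    # commands appear as symbols, as in the printout
    (133, "w"),
]


def pauses(keystrokes, threshold_tenths=20):
    """Return (position, pause length in seconds) for every gap between
    consecutive keystrokes at or above the threshold -- the kind of
    pausal pattern Matsuhashi (1981) tabulated by hand."""
    found = []
    for i in range(1, len(keystrokes)):
        gap = keystrokes[i][0] - keystrokes[i - 1][0]
        if gap >= threshold_tenths:
            found.append((i, gap / 10.0))
    return found


print(pauses(KEYSTROKES))   # [(5, 4.7), (8, 6.8)]
```

Because each keystroke carries its own timestamp, locating pauses is a single linear pass over the log, where the same analysis from videotape requires frame-by-frame viewing.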
I used the printed keystroke records to supplement sets of printouts, often marked with penned-in revisions. These data produced a complete picture of the changes writers made in the text.
These changes were analyzed with the revision coding system designed by Bridwell (1979) and later revised for research on composing with computers (Bridwell, Sirc & Brooke, 1985). Bridwell classifies text changes by level (surface, word, intra-sentence, sentence, multi-sentence) and by operation (such as addition, deletion, substitution). I developed a coding sheet listing these categories and conferred with Bridwell on the interpretation of ambiguous cases and on decision rules.
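The two-dimensional coding sheet can be tallied mechanically once each change has been coded by hand. The sketch below uses the category names from Bridwell's typology as the article reports them; the coded records themselves are hypothetical examples, not data from the study.

```python
# A sketch of tallying Bridwell's level-by-operation revision codes.
# Category names follow the article; the sample records are invented.

from collections import Counter

LEVELS = ("surface", "word", "intra-sentence", "sentence", "multi-sentence")
OPERATIONS = ("addition", "deletion", "substitution")

# Each coded revision is a (level, operation) pair, e.g.:
coded_revisions = [
    ("surface", "substitution"),        # thogt -> thought
    ("word", "addition"),               # e.g. adding "dim" before "shadows"
    ("intra-sentence", "substitution"), # replacing a phrase or clause
    ("word", "addition"),
]


def tally(revisions):
    """Count revisions by level, mirroring the per-level totals reported
    for the case study writers (e.g. '14 at the word level')."""
    for level, operation in revisions:
        assert level in LEVELS and operation in OPERATIONS, "unknown code"
    return Counter(level for level, _ in revisions)


print(tally(coded_revisions))   # counts by level
```

A second `Counter` over the operations would give the other axis of the typology; keeping the two tallies separate matches how the per-level and per-operation figures are reported later in the article.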
To appreciate how COMPTRACE supported our research requires a closer look at the data it generates and at the kind of analysis applied to these data. Although GWP teachers encouraged students to choose their own topic and genre, this report deals with a more structured assignment designed for the research. Students were asked to revise and improve a short story based on a draft actually written by another sixth-grader.
To develop this revision task, we had altered the original draft by planting flaws in mechanics (twelve errors), wording (ten dull or
redundant words), sentence structure (one fragment and one run-on), and organization (weak introduction, weak conclusion). Our methods were indebted to the work of Calkins (1980) with third graders and Flower et al. (1986) with college writers. We scored the papers by comparing the revisions with the original to see which flaws each student recognized, which they succeeded in repairing, and which unneeded or erroneous changes they made.
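The flaw-by-flaw comparison was done by hand in the study, but its logic can be sketched in a few lines. Everything concrete below is an assumption for illustration: the representation of a flaw as a (bad, good) spelling pair, the sample flaw list, and the sample student text are all invented, and a word-presence check is only a crude stand-in for a human rater reading in context.

```python
# Hypothetical sketch of scoring a revision against planted spelling
# flaws. The flaw list and sample text are invented; real scoring was
# done by human raters comparing full texts.

PLANTED_SPELLING_FLAWS = [
    ("wich", "which"),
    ("thogt", "thought"),
    ("bankrobbers", "bank-robbers"),
]


def score_spelling(revised_text, flaws=PLANTED_SPELLING_FLAWS):
    """Return (repaired, remaining) counts for the planted flaws:
    repaired = bad form gone and good form present;
    remaining = bad form still in the text."""
    words = revised_text.lower().split()
    repaired = sum(1 for bad, good in flaws
                   if bad not in words and good in words)
    remaining = sum(1 for bad, _ in flaws if bad in words)
    return repaired, remaining


# A writer who fixed "thogt" but left "wich" untouched:
sample = "behind the treehouse in the park wich isn't far he thought"
print(score_spelling(sample))   # (1, 1)
```

The third category the article mentions, unneeded or erroneous changes, would need a fuller text comparison (e.g. a diff against the original) rather than a flaw checklist.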
Papers were also scored holistically for overall quality following the procedures recommended by White (1985). The lowest score (1) was reserved for the original story, or for "revisions" that added as many problems as they corrected. The top score (4) was awarded for "good development of story" along with "good editing of mechanical errors." Finally, we compared the data from COMPTRACE with the holistic and error-analysis scores in an attempt to relate process and product. We could see, for example, the kinds of revisions a writer used to develop a top-rated paper from a low-rated draft. In February, after five months of regular experience with word processing, each student in our sixth grade groups attempted the revision task. Writers called up this text, revised for 30 minutes, saved, and printed out their work. The draft, as the students received it, read:
This is a story about Harry, a timid person who is easily scared. One night at midnight, Harry saw some shadows behind the treehouse in the park wich isn't far from the city bank. He couldn't here what they were saying, so he ran for the police because he thogt they might be bankrobbers but he stopped himself and said to himself, "wait I'll see what they are doing first" then he also said, "forget it, man, Im chicken," and ran for the police once more. When he got back with the police, he found out that his friends were waiting to surprise him. They really wanted it to be a surprise. Cause it was his birthday. Now the time was 1:05 A.M. the exact time he was born.
The two central case study writers were a very talented girl ("Mary") and a very low-skilled boy ("Bob"). Both were members of the same class of 15 heterogeneous sixth-graders who spent eight hours per week with a writing-project-trained teacher. In this classroom, two Apple IIe computers and one inexpensive printer sat on a long table against the blackboard; another table held a couple of old manual typewriters, which were often used when the computers were occupied. The mechanical writing tools complemented the teacher's instructional style: an open classroom with a variety of learning centers. This focal class was a natural laboratory for studying how children learn to write when the computer is available on a daily basis as a writing tool, not as a curiosity or a game machine.
As might be expected, the two case study writers approached the revision task in contrasting ways and produced texts that differed dramatically. Using COMPTRACE for interviews with Mary and Bob created a picture of the thinking processes that led to their contrasting revisions. The portraits of each case study writer which follow will discuss first the text, then the interview, and finally the student's model of "good writing" (Flinn, 1986, May) that is inferred from the data.
Mary was a fluent writer and an avid reader. Her California Achievement Test scores placed her in the 97th percentile in Language, the 92nd percentile in Reading (national norms). Her writing folder showed a willingness to draft and revise, often five or six times, before completing a piece to her satisfaction. I was visiting her class and watched from the sidelines when Mary called up "Harry" on her monitor. She first scrolled quickly through the file, and then began to edit. She fixed a few surface errors on the way, but spent most of her time revising style, content, and detail. Here is the 160-word story she completed in 30 minutes:
Harry is a timid person who is easily frightened by little things. One breezy evening at midnight, Harry saw some dim shadows behind the creepy treehouse in the park. The city bank is just around the corner from there. He couldn't exactly make out what they were saying, so he quickly started on his way to the police department. He thought it was mysterious and they might be bank-robbers but he suddenly stopped and said quietly to himself, "Wait I'll see what they are planning first," but then he added, "No I can't I would chicken out before I had a chance," and darted after the police once more.
When he finally got back he dragged the police to where he had seen the robbers, but instead he found his friends waiting to surprise him. He was baffled until they reminded him that it was his birthday. Because now the time was 1:05 A.M. the exact time he was born.
Both raters gave Mary's revised text the top holistic score (4) for improvement in content as well as form. I then analyzed her revision process with Bridwell, Sirc, and Brooke's (1985) typology, using the printed COMPTRACE records. The data show that she made just 5 changes at the surface level, 14 at the word level (mostly additions), 13 at the intra-sentence level (half of them substitutions of phrases or clauses), and 4 at the sentence level. Her focus of attention on the word and intra-sentence levels is a pattern Bridwell (1979) found typical of competent twelfth-grade writing. In addition, the error analysis showed that Mary eliminated 9 of the 12 planted mechanical flaws, most of them in the process of rewriting larger units of text.
Mary then discussed her revision with me while viewing the COMPTRACE replay. Asked about her first impressions of "Harry," she began in generalities: "Well, some of it was good, but then other parts of it. . . something was wrong." COMPTRACE showed that Mary had first changed thogt to thought, and she confirmed that the misspellings were obvious targets: "I just noticed that right away." Prompted to elaborate on her more global concerns, Mary added that the story was "sort of bad description," and the opening lines rambled and "carried on." The lead was, in fact, the next place she moved her cursor.
Her first attempt deleted the bland opening clause and added descriptive words: "Harry is a timid person who is easily scared. One creepy night at midnight, Harry saw some dim shadows behind the treehouse in the park which. . ." As she struggled with the opening scene, first it was a creepy midnight and an unsteady treehouse, then a moonlit midnight, and finally a breezy evening and a creepy treehouse. She commented, "I took out 'creepy' because I thought it would sound better by 'treehouse'."
COMPTRACE shows that Mary revised for mechanics, detail, and sentence structure during several complete runs through the text. "I decided to read through it again, and then I saw some more things to change," she explained.
For most revisions, she gave reasons that suggested a well-developed "problem representation" (Flower et al., 1986). For example, she changed doing to planning: "Well, 'doing' could be jumping up and down, but 'planning'--they're making plans to get in, so you're going to try to stop them." She wrote He couldn't exactly make out what they were saying instead of couldn't hear "because I think it sounded like he was straining to hear, but he couldn't." She added vivid verbs, darted and dragged, explaining that "The police would probably think it was a prank." Then she noticed a vague pronoun in dragged the police to where he had seen them, and substituted robbers, commenting, "Well 'them' could also be the police." She added a stronger transition, explaining that Wait, I'll see what they're planning first made Harry sound "daring," so she needed a but when he was afraid to follow the strangers. She inserted: He was baffled until they reminded him that it was his birthday, because "You really wouldn't remember that it's your birthday if it's like 1:00 in the morning." She then deleted two short, redundant sentences because "He was surprised, you know.... That was enough."
Mary's understanding of her own writing decisions is quite explicit. Yet even with my probing, she did not justify her revisions with textbook rules or terms. Instead, she gave a rhetorical diagnosis: she explained decisions in terms of speaker, subject, audience, or purpose.
This mode of discourse was used consistently by the writing project teachers in this study to discuss student, as well as professional, writing.
By Mary's own assessment, her final version was "pretty good." But if she could do it again, she would delete the whole lead up to One breezy evening. Her revision process shows a recursive appraisal of both local and global issues; she was always ready to discard the latest text for something better.
Mary's revision of the "Harry" draft suggests that she worked from a rather consistent model of "good writing." Most of her 27 word- and intra-sentence-level changes served to add detail. Throughout the year, her papers and interviews confirmed this love of description. Sometimes she crafted with precise, vivid words (ran to darted); other times, she chose pretty but overused descriptors. In either case, her motto might be stated in this way: "Good writing is descriptive." The computer supported Mary's progress toward this goal. At the keyboard, she could play with word-level substitutions and intra-sentence additions until she created a pleasing description.
Mary's classmate Bob presents a contrasting picture. With California Achievement Test scores in the 10th percentile in Reading and the 11th percentile in Language, Bob had experienced mainly failure and frustration in writing. It was his infatuation with computers, rather than a love of writing, that led him to participate enthusiastically in the case studies.
I watched as Bob worked with the "Harry" task on his screen. Unlike Mary, he directed most of his attention toward surface and word-level changes. Bob's finished story was the same length as the original, 127 words. It incorporated a total of 12 changes:
This is a story about Harry, a wimpy person, who is easily scared. One dark gloomy night at 12:00 am, Harry saw some shadows behind the big tree in the park, wich isn't far from the city bank. He couldn't here what they were saying, so he ran for the police because he thoght they might be bankrobbers but he stopped and said to himself, "wait till I see what they are doing first," then he also said, "forget it" and ran for the police once more. When he got back with the police, he found out that his friends were waiting to surprise him. They wished they could have surprised him because it was his birthday. Now the time was 1:05 A.M. the time he was born.
Bob's "Harry" revision earned a "2" on the 4-point holistic scale. This score reflects minimal improvement in content and only partial correction of mechanics. With the help of the keystroke records, I was able to understand Bob's revision process more clearly. The Bridwell typology shows that he focused on the surface level, where 6 of his 12 changes occurred. In addition, he made 1 change at the word level, 3 at the intra-sentence level, and 2 at the sentence level when he substituted and combined the short ending sentences. Although most of his attention was devoted to surface features, his corrections were not always accurate; he fixed just 2 of the 12 planted mechanical flaws.
Bob's performance suggests a typical, low-skilled pattern. Calkins (1980) called it refining rather than revising for meaning. Sommers (1980) and Bridwell (1979) saw it among older basic writers who focused on surface correctness but failed to improve either mechanics or overall quality.
He also objected to some of the sentences: "They stopped, I mean the sentences were too short. They should have made a runon sentence for a couple of them." Some other sentences "should have periods. . . to break them up, 'cause you don't want big sentences but you also don't want 20 sentences." Bob's comments suggest a concept of the sentence based on length rather than on syntax, leading to what Flower et al. (1986) call "maxim-based revision." This rough maxim led Bob to correct the sentence fragment, but he was one of the few students who did not, in fact, insert any periods to break up the run-on.
Bob's revision process showed his usual struggle with spelling. While typing he had asked me how to spell "gloomy" (his screen showed gluemy). I referred him to the dictionary, and he succeeded in manipulating the cursor to correct it. Later, watching COMPTRACE, he explained that he hadn't noticed many spelling mistakes at first. As he saw other students working, he looked more closely at the text and saw the errors.
Bob tended to offer vague explanations of his writing decisions, unlike Mary's explicit diagnoses. I asked, "Why did you get rid of Man, Im Chicken?" He replied, "I just didn't like it. I think it was just too long." Again, I asked, "Instead of midnight you made it 12:00 a.m.--how come?" Bob hesitated: "Um, I don't know, it just seemed like it was too long.... 'Cause right here it said one night at midnight. I don't think that sounded too nice because you were using night two times.... I put 'twelve a.m.' because it sounded different." Here Bob started with a loose maxim, but when prompted by the replay and the interview, he arrived at an explicit stylistic diagnosis: redundancy.
Bob could also justify his sentence combining: They wished they could have surprised him because it was his birthday. The arrival of the police would have spoiled the plan for the surprise party, he explained, making it just a "wish." "I changed that sentence almost all the way," he added proudly.
Most often, however, Bob found revision frustrating:
Interviewer: "When you're reading through like this [pointing to the scrolling text], what's going on in your head? What are you looking for?"

Bob: "I'm just trying to search.... You read through it, and when you can't find anything, you get furious. I feel like I'm just gonna punch, just punch a hole in the wall."

I: "Are you furious because you know there's something there and you can't see it?"

B: "Yeah. . . sometimes you'll read that word about 16 times--and then you finally find out that it's spelled wrong."
Bob's goal in revising "Harry" was simply to "fix up mistakes." When asked if he could have improved anything else, however, he volunteered, "This wasn't too long of a story. I think he. . . should have wrote a little bit more instead of ending it so quickly." He could identify with the author's problem: "I know I do that a lot too, you know. I'll get tired of writing a story, so I'll kind of end it real quick."
How would he develop a tale? Bob suggested changing the main character's personality: "I might say that he didn't go to get the police because he was a scared person. He was kind of nosy." I responded, "So he wouldn't have been wimpy anymore, he would have been brave?" Bob agreed with a grin.
If Mary knew "good writing" by description, Bob seemed to know it by length. His maxim for correct sentences was based on length--not too short (fragment) and not too long (run-on). He deleted details if they made the lines "too long." Finally, he judged the merit of a story by its development, faulting this one for ending too quickly. Bob's other revision concern was spelling. All his drafts showed a high percentage of changes attacking misspellings. Bob's personal model of good writing might thus be stated: "Good writing has the right length and the right spelling."
The computer reinforced Bob's goals in revision. The computer let him add and delete material easily, adjusting length without recopying the whole paper. It let him fix spelling errors neatly. It gave him the romance of a technological process along with the discipline of a written product.
Software such as COMPTRACE gives the researcher a vivid, close-up view of the composing process. The keystroke records, along with a set of marked-up printouts, can show the revision of text. The composing session replay, case study interview, and taped retrospective protocol can show the revision of thought.
Although COMPTRACE was designed as a research tool, the cued interview may enhance learning as well. Current theories of cognitive development support this notion. Daiute (1985) explains that
talking to others is gradually internalized in the skilled writer's "inner dialogue" of planning, revising, and self-monitoring. If students become more aware of their writing decisions, they can learn to manage their own writing processes more effectively.
Several GWP teachers have suggested that the program has potential for classroom instruction. A composing session could be recorded as usual.
Then, with the help of a large video monitor, COMPTRACE could be replayed for an entire writing class.
The teacher could show how a piece of writing came to be, commenting on the strategies used by the writer. Students could see words set down tentatively and then rejected, only to be replaced again. They could see paragraphs expanded, then split, then clarified by moving a line or adding a topic sentence. It is a maxim of teaching writing that we "show, don't tell." With a computer and software like COMPTRACE, teachers can show the writing process in action.
Replaying students' composing sessions is a promising research method which can help writers at any level of development see and articulate the composing process. Because the software is so unobtrusive, it is well-suited to case studies and even to ethnographic research in natural classroom contexts. Researchers can use other kinds of technology to record, to replay, and to list composing data. Each method, however, has certain disadvantages.
Thinking-aloud protocols can, for example, record a running commentary on the composing process--but they may also disrupt that process, whereas COMPTRACE is completely transparent. Researchers at the University of California-Los Angeles and at New York University have developed another way to replay the composing process by recording the monitor display directly on a videocassette (Gerrard, Cullen, & Cohen, 1986; Grossman, 1986). This procedure has the advantage of working with any software and providing a convenient method of replay--but it does not list or time the keystrokes, a feature which can be useful in analyzing individual students' composing patterns. The listing and timing data can, similarly, be gathered by other methods--by direct observation or with a videotape (Matsuhashi, 1981; Perl, 1979)--but the computer is much more efficient in keeping track of text production.
If all three functions of recording, replaying, and listing are needed, a program such as COMPTRACE seems ideal.
Jane Zeni Flinn teaches at the University of Missouri-St. Louis and directs the Gateway Writing Project.
Bridwell, L. S. (1979). Revising processes in twelfth grade students' transactional writing. Dissertation Abstracts International, 40, 5765A. (UM No. 80-10, 570)
Bridwell, L. S., & Duin, A. H. (1985). Looking in depth at writers: Computers as writing medium and research tool. In J. L. Collins & E. A. Sommers (Eds.), Writing on-line (pp. 76-82). Upper Montclair, NJ: Boynton/Cook.
Bridwell, L. S., Sirc, G., & Brooke, R. (1985). Revising and computing: Case studies of student writers. In S. Freedman (Ed.), The acquisition of written language: Response and revision (pp. 172-194). Norwood, NJ: Ablex.
Calkins, L. M. (1980). Children's rewriting strategies. Research in the Teaching of English, 14, 330-341.
Daiute, C. (1985). Do writers talk to themselves? In S. Freedman (Ed.), The acquisition of written language: Response and revision (pp. 133-159). Norwood, NJ: Ablex.
Flinn, J. Z. (1986). Composing, computers, and contexts: Case studies of revision among sixth-graders in National Writing Project classrooms. Dissertation Abstracts International, 46. (UM No. 86-02, 959)
Flinn, J. Z. (1986, May). The role of instruction in revising with computers. Paper presented at the Conference on Computers and Writing, University of Pittsburgh.
Flower, L., Hayes, J., Carey, L., Schriver, K., & Stratman, J. (1986). Detection, diagnosis, and the strategies of revision. College Composition and Communication, 37, 16-55.
Gerrard, L., Cullen, R., & Cohen, M. (1986, May). WANDAH: Learning to teach with and evaluate computerized writing aids. Presentation at the Conference on Computers and Writing, University of Pittsburgh.
Grossman, A. (1986, May). What role can a word processor play in the writing of learning disabled secondary students? An ethnographic inquiry. Paper presented at the Conference on Computers and Writing, University of Pittsburgh.
Matsuhashi, A. (1981). Pausing and planning: The tempo of written discourse production. Research in the Teaching of English, 15, 113-134.
Oberschelp, J. (1985). COMPTRACE. [Software to accompany the MILLIKEN WORD PROCESSOR, developed with the permission of Milliken Publishers, St. Louis.]
Perl, S. (1979). The composing processes of unskilled college writers. Research in the Teaching of English, 13, 317-336.
Scardamalia, M., & Bereiter, C. (1983). The child as co-investigator: Helping children gain insight into their own mental processes. In S. Paris, G. Olson, & H. Stevenson (Eds.), Learning and motivation in the classroom (pp. 61-82). Hillsdale, NJ: Erlbaum.
Sommers, N. (1980). Revision strategies of student writers and experienced adult writers. College Composition and Communication, 31, 378-388.
Swarts, H., Flower, L., & Hayes, J. (1984). Designing protocol studies of the writing process. In R. Beach & L. Bridwell (Eds.), New directions in composition research (pp. 53-71). New York: Guilford Press.
Thomas, O., & Thomas, I. (1985). THE MILLIKEN WRITING WORKSHOP [Word processing program and related tools. Designed by Iota, Inc., and programmed by John Oberschelp for Milliken Publishing Co., St. Louis.]
White, E. M. (1985). Teaching and assessing writing. San Francisco: Jossey-Bass.

AUTHOR NOTES
The research reported in this paper was performed under grants from the National Writing Project and the Fund for the Improvement of Postsecondary Education.
The sixth-grade teacher who collaborated on the project and who taught the two case study writers is Margaret Ryan of St. Jerome's School in Bissell Hills, a suburb of St. Louis, MO.
Milliken Publishing Company, St. Louis, allowed programmer John Oberschelp to modify their word processor for the purpose of this research.