COMPUTERS and COMPOSITION 8(1), November 1990, pages 69-79
Today, the computer has been accepted widely in the writing classroom, and researchers have given us a better understanding of its impact on writing. However, results of reported studies have been inconsistent. One area in which the research results have conflicted is revision. Collier (1983) indicated that writers revised more and wrote more using a computer. Lutz (1987) found that computer users "made more changes, at lower linguistic levels; they moved in smaller chunks from one change to the next, and they moved more frequently" (p. 407). In contrast, however, Daiute (1986) suggested that students revised less on the computer, while Beck and Stibravy (1988) found that they revised about the same amount. Hawisher (1987) found that a word-processing system is neither more nor less effective than pen and paper as a revising tool, and that there is no relationship between the amount of revision a student does and the quality of the document produced.
A potentially important factor in the relationship between word processing and revising has been proposed by Haas and Hayes (1986b) and Egan, Bowers and Gomez (1982); these researchers have suggested that students cannot read the screen as easily as they read hard copy, making macro changes more cumbersome. Furthermore, Lyman (1984) reported that word-processed text can consist of lists of correct sentences, rather than cogently argued texts, and Lutz speculated that the process of scrolling may lead to tinkering that can hurt, rather than help, a document.
While one can disagree about the effects of the computer on revising, perhaps more critical is the research on the effects of word processing on writing quality. Most studies have shown that word processing has no significant effect on writing quality (Collier, 1983; Hawisher, 1987; Haas & Hayes, 1986b).
Certainly, these studies and others (Hawisher, 1989) have shed light on the impact of the microcomputer on writers. However, according to Hawisher, there have been no published studies that examine the effect of the Macintosh on writing processes; the focus has been on mainframes and on IBM and Apple II microcomputers. Haas and Hayes (1986a) have shown that the hardware students use can affect their writing behaviors and effectiveness. (Students working at advanced workstations with a large screen and a mouse produced better text than those students working on an IBM PC.) Therefore, it is plausible that the Macintosh, with its mouse and pull-down menus but with a small screen, will affect writing behavior and quality.
In addition, current research does not examine computer-literate novice writers. Because Gilfoil (1982) has suggested that users may require some twenty hours to become comfortable with a word-processing system, a student population that has owned computers for more than four years may use the Macintosh in significantly different ways than novice writers who are inexperienced computer users.
This paper reports on a study examining writing on the Apple Macintosh
and on paper by upper-level students who are novice writers but
computer-literate. To gain a better understanding of writing behaviors
using the Macintosh versus using pen and paper, we sought to answer
two questions: 1) Do these writers revise differently, in terms
of the number and types of revisions, when using the Macintosh?
2) Do they produce higher-quality texts on the Macintosh? In addition,
we sought to determine whether this population would produce longer
texts on the computer than they did with paper and pencil techniques,
as previous studies (see Hawisher, 1989, for a review) showed with other student populations.
Twenty students in an undergraduate technical writing course at Drexel University participated in the study. All had owned and used Macintoshes for their entire undergraduate careers (at least three years) at Drexel University. All had completed the three required first-year writing courses and had at least one cooperative-education work experience. As a demographic questionnaire completed by all subjects indicated, all had used their Macintoshes for programming and work in their majors. The data for three students were eliminated from the analyses because these individuals were unable to attend all the writing and revising sessions. Of the seventeen students completing the study, fifteen were electrical engineering majors (or electrical engineering dual majors), one was an architectural/civil engineering major, and one was a biology major. Five were seniors, ten were juniors, and two were pre-juniors (Drexel is a five-year cooperative education university).
Subjects wrote two essays, a week apart, during regular class meetings: first, a mechanism description; and, second, a process description. These essays fulfilled course requirements and were written as described in the course syllabus and text. For the writing, the subjects were randomly assigned to one of two groups: for the first assignment, one group wrote on paper while the other group wrote on the Macintosh. For the second assignment, writing conditions were reversed. All Macintosh writing was done using MACWRITE, the word-processing program that was supplied to our subjects when they received their Macintoshes as first-year students. During the class meeting before the experiment began, subjects were instructed in the two forms of description (as part of their technical-writing course requirements). The subjects were told nothing about the experiment; they knew only that they would write in-class descriptions and that their essays would be graded for course credit by the course instructor.
For each description, both groups spent one class period (fifty minutes) writing a draft; two days later, they spent another fifty minutes revising their drafts. For the revising session, each subject writing on the Macintosh was provided with a printout of his or her draft. In addition, all subjects were provided with task instructions, experimental instructions, and blank sheets of paper; when writing on the Macintosh, subjects were given disks with the necessary software. All subjects had access to their textbook and a dictionary.
For the mechanism description, all subjects described a Bic stick ball point pen with green ink. For the process description, subjects were given a choice of four topics: 1) how a microscope slide is prepared, 2) how a four-stroke internal-combustion engine works, 3) how a personal computer operates, and 4) how a Drexel student pre-registers for next term's classes. For both descriptions, subjects were instructed to indicate where they would include graphics, and to tell what kind of graphic each would be. Subjects did not have to provide the actual graphics.
Finally, after completing the task assignment, all subjects completed
a demographic and attitudinal questionnaire covering their typical
writing behaviors, their experience with and attitudes toward
writing and the Macintosh, and their reactions to the experimental conditions.
After the experiment was completed, the handwritten descriptions were typed on disk. Two experienced teachers of technical writing were trained as raters and then independently completed four blind ratings of the 34 randomly ordered final drafts: a holistic rating and ratings of the content, organization, and style. For the holistic rating, raters evaluated each text as a completed document, where all elements of text structure, organization, content, and style contributed to an appropriate response to the writing prompt. For the three trait evaluations, the raters focused more specifically. For content, they considered the quantity and quality of technical detail; for organization, they looked at adherence to the course textbook's specification of an introduction/body/conclusion pattern; and for style, they examined coherence, sentence structure, and mechanics. For all the ratings, an 8-point scale was used. Raters were advised to evaluate each paper as falling into one of three categories: superior (6-8 points); acceptable (3-5 points); and weak (0-2 points). In those essays where the two raters' scores were more than one point apart, a third rater resolved the difference. Inter-rater reliability was 0.82 (Pearson's r).
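Inter-rater reliability of this kind is the Pearson product-moment correlation between the two raters' scores over the same set of essays. The following is a minimal sketch of the computation; the rating data shown are hypothetical, not the study's:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical holistic ratings from two raters on the same essays (0-8 scale)
rater_a = [7, 5, 4, 6, 2, 8, 3, 5]
rater_b = [6, 5, 3, 7, 2, 7, 4, 5]
reliability = pearson_r(rater_a, rater_b)
```

A value near 1.0, such as the study's 0.82, indicates that the two raters ordered the essays similarly even when their point scores differed slightly.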
Other dependent measures included essay length and the nature and quantity of the between-draft revisions, on the basis of a taxonomy developed by Faigley and Witte (1981). In their taxonomy, Faigley and Witte define two major categories of revision, each of which is subdivided into two subcategories: first, surface changes, consisting of formal (S1) and meaning-preserving (S2) changes; and second, text-base changes, consisting of microstructure (T1) and macrostructure (T2) changes. Formal (S1) changes include spelling, tense, number, modality, abbreviation, punctuation, and format. The S2, T1, and T2 changes fall into six categories: addition, deletion, substitution, permutation, distribution, and consolidation.
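The Faigley and Witte scheme amounts to coding each between-draft revision with a category label and tallying the labels. The sketch below illustrates that bookkeeping; the category labels follow the taxonomy as described above, but the sample codings are hypothetical, not drawn from the study's data:

```python
from collections import Counter

# Faigley & Witte (1981): two surface subcategories (S1 formal,
# S2 meaning-preserving) and two text-base subcategories
# (T1 microstructure, T2 macrostructure).
CATEGORIES = {"S1", "S2", "T1", "T2"}

# S2, T1, and T2 changes are further typed by one of six operations.
OPERATIONS = {"addition", "deletion", "substitution",
              "permutation", "distribution", "consolidation"}

def tally(revisions):
    """Count coded revisions per taxonomy category, validating labels."""
    counts = Counter()
    for category, operation in revisions:
        assert category in CATEGORIES
        # S1 changes are typed differently (spelling, tense, format, etc.)
        assert category == "S1" or operation in OPERATIONS
        counts[category] += 1
    return counts

# Hypothetical coded revisions from one between-draft comparison
coded = [("S1", "punctuation"), ("S2", "addition"),
         ("T1", "substitution"), ("S2", "deletion"), ("T2", "addition")]
counts = tally(coded)
```

Per-subject counts of this form, summed within each category, are the dependent measures compared across the two writing conditions.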
Because we could not record the within-draft revisions made while
students were composing their drafts on the Macintosh, we did
not analyze the within-draft revisions on the handwritten texts.
(Such changes primarily included false starts, spelling errors,
and word substitutions. Few substantive changes were made within
the handwritten drafts beyond the word level.) Writers used a different color pen for the revising session, enabling us to capture the between-draft changes.
Table 1 shows the key results, according to the writing condition,
for all essays.
[Table 1. Key results by writing condition (Macintosh vs. pen and paper), with p values; ratings on an 8-point scale.]
The analyses of the between-draft revisions indicate that writers tended to make more revisions when working by hand than they did when working on computers. While subjects made more formal (S1) changes when working on the Macintosh (20.64 vs. 16.78, p<.05), they made significantly more microstructure (T1) and macrostructure (T2) revisions as well as total number of revisions when working by hand. For T1, we found 19.94 (by hand) vs. 13.99 (on computer) revisions; for T2, 7.1 (by hand) vs. 4.06 (on computer); and for total revisions, 91.75 (by hand) vs. 81.05 (on computer) (in all cases, p<.05). We could posit that writers saw little need to change text already typed.
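The article reports significance levels without naming the test; for a within-subjects design such as this one, where each subject wrote in both conditions, a paired-samples t statistic is the conventional choice. A minimal sketch of that computation, assuming hypothetical per-subject revision counts:

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom for two
    within-subject measurement lists of equal length."""
    d = [a - b for a, b in zip(x, y)]          # per-subject differences
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    # The p value comes from the t distribution with n - 1 df.
    return t, n - 1

# Hypothetical per-subject macrostructure revision counts
by_hand = [5, 6, 7, 9]
on_mac = [4, 5, 5, 8]
t, df = paired_t(by_hand, on_mac)
```

Because the differences are taken subject by subject, the test controls for individual variation in how much each writer revises overall.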
When we examined the revision subcategories, we found that subjects made significantly more punctuation revisions between drafts when writing by hand than they did writing on computers (3.4 vs. 1.45, p<.05). However, subjects made significantly more format revisions when writing on the Macintosh than they did when writing by hand: 4.69 vs. 1.6 (p<.01). Although subjects made more meaning-preserving (S2) additions on the Macintosh (14.59 vs. 12.03, p<.05), they made more T2 additions when working by hand (3.4 vs. 1.92, p<.05). Finally, subjects made more deletions and substitutions in all three categories (S2, T1, and T2, p<.05) when revising by hand than they did when revising on computers.
The quality of the content of all papers produced on the Macintosh
was judged superior to the quality of papers produced by hand
(5.36 vs. 4.36 on the 8-point scale), a result that approaches
significance (p<.1). For the process descriptions alone, the
quality of the content was significantly better for papers
written on the Macintosh than for papers written by hand
(6.5 vs. 4.2, p<.05). None of the other rating measures
indicated significant differences for condition. Table 2 shows
the key results, by writing condition, for the process description
alone. The content difference for condition means that, on the
Macintosh, writers produced descriptions that were more complete
and covered more of the necessary facts than those they wrote
by hand. Note that this does not apply
to the overall (holistic) quality of the writing, in which case
condition had no significant effect.
Table 3 shows key results according to topic differences. In terms of topic, the process descriptions were superior in content to the mechanism descriptions (5.21 vs. 4.59, p<.05). This difference is reflected in the number of revisions made. Subjects made a greater number of S1 and S2 revisions in writing the process descriptions than in writing the mechanism descriptions: the total S2 changes (addition, deletion, substitution, permutation, distribution, and consolidation) were significantly greater in number (26 vs. 17.9 revisions, p<.05); the total S1 changes (spelling, tense, number, modality, abbreviation, punctuation, and format) tended towards significance (10.7 vs. 7.8, p<.1). However, we note that these changes are all meaning-preserving. No differences were found in the substantive meaning-changing categories.
[Tables 2 and 3. Key results for the process description by condition, and for all essays by topic: essay length in words; ratings on an 8-point scale.]
When we examined essay length, we found that, when subjects wrote on the Macintosh, they produced significantly longer essays than when they wrote by hand (521.75 words vs. 448.44 words, p<.01, Table 1), confirming findings of many studies. In addition, the process descriptions were significantly longer when produced on the Macintosh than they were when produced by hand (551.75 vs. 430.78 words, p<.01, Table 2). Coincidentally, however, the two types of description (independent of condition) were almost identical in length: mechanism 480.7 words, process 487.7 words (Table 3).
Our results tend to confirm the findings of earlier studies of the effect of microcomputers on writing: the Macintosh by itself does not significantly improve the quality of writing.
Some key results do emerge. First, the fact that the descriptions written on the Macintosh were significantly longer than those written by hand tends to confirm Hawisher's (1989) report that writing on a word-processing system is physically easier and faster than writing by hand, and that the writer, therefore, produces a longer text. Indeed, according to their questionnaires, our students feel quite strongly that the Macintosh makes writing easier for them and also feel that they write more quickly with it. In addition to the essays being longer, the quality and quantity of the content (the details they provide in their descriptions) of these essays are better than they are in hand-written texts, a result that tends towards significance. We found no significant difference in holistic quality for condition; this may indicate that the computer is not a significant factor in improved quality or that the computer's effect on quality is complex and not yet fully understood. In either case, further research is needed.
Second, subjects made a greater number of format revisions in the Macintosh descriptions than they did in the descriptions written by hand: most of these revisions were related to layout and headings. This finding suggests that writers may not have a clear sense of the layout of their text when they are working without a printed copy. Indeed, Haas and Hayes (1986b) report that experienced writers need a printout to get a "sense of the text." This suggestion is also supported by the greater number of meaning-changing revisions made when working by hand than when working on computers. The visual representation on the paper seems to lead to an increased willingness to make major changes.
Third, we found more substitutions and deletions between drafts in the hand condition than in the computer condition. One possible explanation for this finding may be that subjects saw less need to make, or were less willing to make, major changes to text they had already typed on the Macintosh than the texts written by hand. Indeed, none of the subjects made more than token S2 (meaning-preserving) revisions on the hard copy printout of their first drafts from the Macintosh. The few such revisions they made were on the screen. The general lack of hand revisions when working on the computer may reflect the writing behaviors of novice computer-literate writers. Such writers are quite familiar with the Macintosh and use it regularly. They may do all their work on the Macintosh with little done by hand. Haas (1987), for instance, reports that the student writers she was examining read hard copy printouts primarily (88% of all instances) to check formatting or to proofread. A second explanation for this difference between the conditions is that the writers could have made such substitutions and deletions while composing their first drafts on the Macintosh. As we did not record within-draft changes (explained earlier), we do not have a record of any such potential changes.
Fourth, the greater number of S2 additions on the Macintosh may reflect the advantage of cleaner copy on the screen: students could see more clearly the need for meaning-preserving additions when reading on the computer than when reading page copy.
Although the two topics yielded almost identical length papers, subjects provided better content in the process descriptions than in the mechanism descriptions. One possible explanation for this is that the process description, because it has a built-in chronological structure, is somewhat easier to write than a mechanism description, which requires a spatial organization (cf. Bridgeman & Carlson 1984; Langer, 1984). Another possible explanation is that the subjects felt more comfortable with the process topics--10 of the 17 subjects wrote about the process of pre-registration at their own university--than with the mechanism topic (the description of the ballpoint pen). Additionally, they were able to choose from a list of process topics. However, these hypotheses do not explain the fact that the process descriptions were not significantly better organized than the mechanism descriptions. Interestingly, the questionnaire indicates that a majority of the subjects preferred writing the mechanism description, despite their resulting weaker performance.
Overall, the results do not support the hypothesis that
the Macintosh descriptions would be better written than the handwritten
ones (although our subjects believe that they write better on
the Macintosh). These results tend to suggest that merely knowing
how to use the Macintosh will not inevitably improve writing.
Other factors may have greater impact; for instance, despite our
subjects' complete familiarity with the Macintosh's word-processing
software (13 of the subjects indicated that they had substantial
word-processing experience on the Macintosh, and all others felt
comfortable with MACWRITE), they had never
received formal instruction in using the Macintosh to improve
their writing skills. Perhaps the fact that the Macintosh is relatively
simple to use has given rise to the mistaken notion that writers
will automatically know how to exploit the tool's potential. But
the microcomputer itself does not lead to good or poor performance;
training in word processing as well as in writing may be the necessary
factor in improving writing skills.
Alexander Friedlander is an Assistant
Professor of Humanities and Communications at Drexel University.
Mike Markel is an Associate Professor
of Humanities and Communications at Drexel University.
Balestri, D. P. (1988, February). Softcopy and hard: Wordprocessing
and writing process. Academic Computing, 14-17, 41-45.
Beck, C. E., and Stibravy, J. A. (1988). The effect of
word processors on writing quality. Technical Communication,
Bridgeman, B. & Carlson, S. (1984). Survey of academic
writing tasks. Written Communication, 1, 247-280.
Bridwell, L. S. & Duin, A. (1985). Looking in depth
at writers. In J. Collins and E. Sommers (Eds.), Writing on-line:
Using computers in the teaching of writing (pp. 115-121). Upper
Montclair, NJ: Boynton/Cook.
Bridwell, L. S., Sirc, G., & Brooke, R. (1985). Revising
and computing: Case studies of student writers. In S. Freedman
(Ed.), The acquisition of written language: Revision and response
(pp. 172-194). Norwood, NJ: Ablex.
Collier, R. M. (1983). The word processor and revision
strategies. College Composition and Communication, 34,
Daiute, C. (1983). The computer as stylus and audience.
College Composition and Communication, 34, 134-35.
Daiute, C. (1986). Physical and cognitive factors in revising:
Insights from studies with computers. Research in the Teaching
of English, 20, 141-159.
Egan, D. E., Bowers, C., & Gomez, L. M. (1982). Learner
characteristics that predict success in using a text-editor tutorial.
In Proceedings of the Human Factors in Computer Systems Conference
(pp. 337-340). Gaithersburg, MD: Institute for Computer Sciences and Technology.
Faigley, L. & Witte, S. (1981). Analyzing revision.
College Composition and Communication, 32, 400-414.
Gilfoil, D. M. (1982). Warming up to computers: A study
of cognitive and affective interaction over time. In Proceedings
of the Human Factors in Computer Systems Conference, (pp.
245-250). Gaithersburg, MD: Institute for Computer Sciences and Technology.
Haas, C. (1987). "Seeing it on the screen isn't
really seeing it": Reading problems of writers using word
processing. Pittsburgh: Carnegie Mellon University Center
for Educational Computing in English.
Haas, C. & Hayes, J. (1986a). Pen and paper vs.
the machine: Writers composing in hard copy and computer conditions.
Pittsburgh: Carnegie Mellon University Communications Design Center.
Haas, C. & Hayes, J. (1986b). "What did I just
say?": Reading problems in writing with the machine. Research
in the Teaching of English, 20, 22-35.
Hawisher, G. E. (1987). The effects of word processing
on the revision strategies of college freshmen. Research in
the Teaching of English, 21, 145-159.
Hawisher, G. E. (1989). Research and recommendations for
computers and composition. In G. E. Hawisher & C. L. Selfe
(Eds.), Critical perspectives on computers and composition
instruction (pp. 44-69). New York: Teachers College Press.
Langer, J. (1984). The effects of available information
on responses to school writing tasks. Research in the Teaching
of English, 18, 27-44.
Lutz, J. A. (1987). A study of professional and experienced
writers revising and editing at the computer and with pen and
paper. Research in the Teaching of English, 21, 398-421.
Lyman, P. (1984). Reading, writing and word processing:
Toward a phenomenology of the computer age. Qualitative Sociology,
Pufahl, J. (1983). Response to Richard M. Collier. College
Composition and Communication, 35, 91-93.