Gail E. Hawisher
Since 1981, several studies have appeared that attempt to assess the effects of word processing on the strategies of various groups of writers. To realize the inroads that have been made in this research, we need only consider the following projects: (a) Collier's (1982, 1983) case studies of four nursing students; (b) Bridwell, Johnson, and Brehe's (in press) case studies of eight published writers; (c) Daiute's (1984, 1985, 1986) work with junior high students; (d) Bridwell, Sirc, and Brooke's (1985) case-study investigation of five college students in upper-division writing classes; and (e) Lutz's (1983) seven case studies of experienced and professional writers. Despite emphasizing students, their writing, and computers, these and other studies are decidedly different in research design, in method of data collection, in variables examined, and in the analysis of data. Yet they are frequently cited as though they were comparable. We read, for example, that in Collier's (1983) study
"revision tended to stay on the surface level, with no significant changes on the global level" and that "Gerrard (1982) obtained similar results with forty-four students" (Etchison, 1986, p. 4). But there is no mention that Collier used a classification scheme for revisions or that Gerrard's inquiry relied on self-reports--or that both these early studies focused on students revising prewritten texts rather than revising texts composed with word processing.
The purpose of this paper is not to criticize pioneering research in our field but, instead, to review studies of writing and word processing since 1981. After examining twenty-four studies, I then suggest issues and guidelines for future research so that we can continue to learn about the effects of word processing on writers and their products.
Two criteria were used in selecting these studies (see Table 1): first, the research needed to examine more than students' attitudes toward a computer; and, second, it needed to concentrate exclusively on the effects of word processing rather than other kinds of software programs. Drill and practice courseware or stylistic programs, such as WRITER'S WORKBENCH or GRAMMATIK, may affect writers and their texts, but they were not the focus of this overview.
The first element of the research setting concerned the methodology. Many of the twenty-four studies reported here are case studies, which can reveal a wealth of detail
[Table 1 appears on pages 8-9]
through extensive description of the composing processes of writers and of their relationships with computers. These studies may also lead to insights which, when examined subsequently through experimental research, enable us to generalize with more confidence to other members of the population of interest. Of the studies reported here, nine employed case-study techniques, nine were experimental, five exploratory, one ethnographic, and three used survey methods. The terminology in this paper is the same as that reported in the studies. When the researcher didn't name the methodology, the term "experimental" was applied if the researcher manipulated a variable and assigned subjects randomly; if the subjects weren't assigned randomly to groups, the term "exploratory" was used.
Some of the research, such as the Bridwell, Sirc, and Brooke (1985) study, the Daiute (1985) study, and the Selfe (1985) study, employed two or more strands of research to corroborate findings. For example, Bridwell, Sirc, and Brooke (1985) used case-study methodology for five upper-division students and also employed a survey of 48 students from the same population to generate additional data. In the Selfe (1985) study, a combination of survey, observation, and case-study techniques was used to study a sample of college students. Daiute (1984, 1985) also culled two case studies from her sample of eight children.
The size of the sample in these twenty-four studies ranged from two to ninety-six subjects and included writers as inexperienced as sixth graders, as competent as graduate students, and others as proficient as published journalists
and novelists. In two studies (Gould, 1981; Haas and Hayes, 1986), the subjects were not only accomplished writers but also adept users of a word-processing package, with several years' experience.
The next element in the research setting concerned the tool the subjects employed. Five studies used Apples; five, IBM-PC's; five, some form of a mainframe; two, TRS-80's; and one, AES C-20 stand-alone computers. The rest of the twenty-four studies didn't specify the type of computer. Interestingly, none of these studies examined the effects of the Macintosh. Although the research reported here includes the most recent studies to date, the Macintosh was probably not available when these studies began.
Six studies did not specify which word-processing package was employed. Of the remaining studies, four used WORDSTAR; three, the 1982 version of the BANK STREET WRITER; five, word-processing packages that accompanied the particular mainframe or stand-alone computer, as with the AES used in Collier's (1982, 1983) study; and three, programs especially designed by the researchers, as with CATCH in Daiute's (1984, 1985, 1986) studies and WRITER'S ASSISTANT in the research of Levin, Riel, Rowe, and Boruta (1985). WORD PERFECT and VOLKSWRITER were two other programs mentioned in the studies.
The demands or difficulty of the word-processing package were usually not described, although Collier (1982, 1983) states that the machinery was complex and often difficult for students, with three and four keystrokes sometimes required to perform one operation. We aren't told, however, whether it's a line or a screen editor. Gould (1981), in contrast, describes the line editor in his study, as well as its constraints. He even goes so far as to suggest that he might have obtained different results with a full-screen editor.
While not a line editor, the early BANK STREET WRITER also presents difficulties for writers. A word processor that forces writers to leave the insert mode for editing and then re-enter it for writing emphasizes a linear rather than recursive process of composing. Not being able to edit while writing is a serious shortcoming for many writers.
A number of studies did not distinguish between writers composing at computers and writers entering prewritten texts. The latter perhaps makes the difficulty or demands of the word processor less important: while entering text already composed, writers don't need to think of what they want to say at the same time they're grappling with the word-processing package. Three studies seem to deal with writers composing exclusively at computers--Catano's (1985) study of two novelists; Haas and Hayes's (1986) study of fifteen faculty members, administrators, and computer scientists; and Sirc's (1986) study of two published graduate students.
Several of the other studies such as Bridwell, Johnson, and Brehe's (in press) concentrated on writers making the transition from a typewriter or pen and paper to a
computer. The researchers, in other words, were concerned with how word processing affected writers' composing habits and how writers moved from composing with traditional tools to composing with a computer. This is also true of Selfe's (1985) research. In one study, students didn't compose with computers: Miller (1984) notes that students wrote all first drafts with pen and paper and revised at computers. For other studies, however, it's difficult to determine whether writers composed at computers, transcribed at computers, or used some combination of strategies.
Also included in the elements of the research setting is the amount of time writers worked with word processing before they produced the texts that were subsequently analyzed. For the most part, it was again difficult to determine this piece of information, but the word-processing experience of the subjects seemed to range from one hour to fifteen years.
In Gould's (1981), Selfe's (1985), and Haas and Hayes's (1986) studies, the researchers were looking at experienced users of word processors. Gould's sample was fifteen research professionals at IBM who had used word processing for "years." In Selfe's study, no student had worked with word processing for fewer than five months, and Haas and Hayes reported that subjects had worked with word processing for an average of ten years. Yet other studies reported as little as one hour of training with computers (Kurth, 1986), and some didn't report this information. It would seem that the longer writers work with word processing, the less likely the computer is to be an obstacle between the writer and his or her writing. The more writers use a particular word-processing package, the more they internalize its operations so that keystrokes, mouse movements, and other functions become as much a part of them as pen or pencil--perhaps more.
In addition to the particular computer and word-processing package, the type of writing required and the number of writing tasks were also examined. These tasks ranged from one piece of writing for a control and experimental group (Pivarnik, 1985) to an unlimited (and unspecified) amount of writing for the two novelists in Catano's (1985) research. The kinds of writing also varied but, for the most part, included "school-sponsored writing" (Emig, 1971) that was usually of a transactional nature (Britton, 1978). For the younger writers, expressive writing was common (Daiute, 1984, 1985, 1986; Levin, Riel, Rowe and Boruta, 1985). Two of the studies used whatever writing the subjects happened to be working on: the novelists in Catano's (1985) study and the published graduate students in Sirc's (1986) study.
These twenty-four studies also differed widely in the variables they examined and the types of analysis used with both the writers and their writing. Researchers most often studied the attitudes of writers but also looked at the effect of word processing on errors, the number of words, the frequency and kinds of revision, and the quality of the writing. The instruments
used to gauge these, however, were many and varied, and, as a result, the findings are difficult to compare.
Much evidence from studies of attitudes toward computers indicates that writers like word processing and often have a strongly felt sense that they write better with computers. In addition, Kurth (1986) and King, Birnbaum, and Wageman (1984) examined students' attitudes toward writing itself rather than just toward word processing. Kurth found that subjects had more positive attitudes toward writing, whereas the King, Birnbaum, and Wageman study indicated that subjects did not: both the experimental and control groups showed a slight increase in positive attitudes after the composition course, as measured by the Emig-King Writing Attitude Scale (1977). Other studies (Bridwell, Sirc, and Brooke, 1985; Gerrard, 1982, 1983; Levin, Riel, Rowe, and Boruta, 1985) concerned themselves with students' reactions to writing with computers and elicited positive responses from students.
The number and kinds of errors were another variable often examined by the researchers. Several of the studies looked at errors in punctuation, and four found fewer errors in final drafts produced with computers (Daiute, 1984, 1985, 1986; Duling, 1985; Levin, Riel, Rowe, and Boruta, 1985; Womble, 1985). But although these studies all focused on errors, their results differed. Womble (1985) found that one of three students had fewer editing errors; Levin et al. (1985) found that students both made and corrected more surface errors (which resulted in final drafts having fewer errors); and Daiute (1985) found that on the post-test students not only corrected more errors but also made fewer errors working with computers. Although final drafts in all these studies exhibited fewer errors, committing and correcting more surface errors is different from making fewer errors. Increased attention to errors, for example, might well detract from students' thinking, thereby lowering the overall quality of a piece of writing. Interestingly, none of these studies examined the different kinds of errors that might be characteristic of writing with computers--having the same phrase, for example, at both the start and end of a sentence because the writer forgot to delete one.
Sixteen of the twenty-four studies looked at revision and the effects word processing seemed to have on students' revising. Of the sixteen,
six reported increased revision among students; two reported that some of the subjects revised more--others less; three reported that the writers didn't revise more frequently; and one reported less revision with word processing alone but more when computer prompts were added to the treatment.
Another study (King, Birnbaum, and Wageman, 1984) wasn't concerned with frequency of revision but instead looked at the maturity level of the revision strategies. Rewriting from the start, for example, was considered less mature than striking a balance between adding and deleting text while holding onto some of the writing from the initial draft. Although this study found that more students in the computer group engaged to a greater extent in a revising pattern which kept old text and added new, the labeling of these strategies as "mature" or "less mature" doesn't take different revision strategies for different writers into account. Revision processes vary, and rewriting can be a viable, productive strategy (Flower, Hayes, Carey, Schriver, and Stratman, 1986).
Other studies also examined the kinds of revisions writers made and used complex taxonomies. Five relied on the Faigley and Witte (1981, 1984) scheme, which distinguishes between changes relating to meaning and changes relating to surface features. Rather than restricting the kinds of changes writers make to syntactical units, Faigley and Witte (1981, 1984) added a semantic component to revision research. According to their classification scheme, revisions that add new information to a text are considered "meaning changes" whereas changes in format and mechanics, or revisions to
segments of meaning that can be inferred from the text, are "surface changes." In this way, the two researchers classified all text alterations as either "meaning changes" (microstructure or macrostructure) or "surface changes" (formal or meaning-preserving).
It would seem that the results of these five studies on revision would be comparable since they used the same taxonomy. Interestingly, however, the findings are often not reported in terms of the Faigley and Witte (1981, 1984) revision scheme. Neither Lutz (1983) nor Daiute (1984, 1985) wrote of revision changes using the four categories. Harris (1985) reported fewer macrostructure changes for the word-processing group, using the first 250 words of the students' essays for analysis. But what a writer does in an introduction is different from other kinds of revisions, and the results might not be applicable to other parts of an essay. In not capturing in-process revisions, Harris's study might also have missed macrostructure changes made with word processing. Hawisher's (1985) study, similarly, did not capture in-process revision. The first-year students in this study, however, made more (but nonsignificant) macrostructure changes with word processing. These students also made formal, meaning-preserving, and microstructure changes more frequently with conventional tools than with computers, but the differences were not statistically significant. Perhaps Lutz (1983) and Daiute (1984, 1985) didn't report the differences using Faigley and Witte's categories because there were no discernible differences between the word-processing and conventional conditions.
Thirteen of the twenty-four studies also assessed the quality of the writing with and without word processing. Of the thirteen, seven reported employing raters to evaluate quality; and of the seven, five (Etchison, 1986; Haas and Hayes, 1986; Hawisher, 1985; Pivarnik, 1985; Daiute, 1986) presented the estimated reliability of the scoring.
Only four noted improved quality with computers. Haas and Hayes's (1986) study is intriguing in that the writing produced at an IBM-RT work station with a large-screen monitor was judged of higher quality than writing produced in two other conditions. Interestingly, the texts written with a PC received the lowest ratings, even lower than the pen-and-paper samples. Etchison (1986), on the other hand, reported improved ratings with standard word processing; he found that the experimental group improved by almost three points in mean scores and that the control group improved by less than one point. But it is important to note that the experimental group's pre-tests averaged two points lower at the start than the control group's; thus, the mean post-test scores of the two groups differed by less than one point. To say that the experimental group improved more than the control group is to ignore the threat of regression. In other words, scores of lower-scoring groups tend to increase more than other groups' scores as they move toward the mean. Therefore, the three-point improvement in mean scores may not be significant.
But other studies also reported increased ratings. Hawisher (1985) found higher (although nonsignificant) ratings on the first drafts of the computer essays but remarkably similar ratings on final drafts produced with word-processing and conventional methods. Whatever advantages word processing seemed to offer initially were apparently minimized with between-draft revising. Daiute (1986), on the other hand, reported that students scored slightly higher on their first drafts with pen than with word processing. But on their revised drafts the reverse occurred, with the word-processing essays receiving higher ratings than those revised with pen.
In her study of basic writers, Pivarnik (1985), using one piece of writing as a post-test for both the experimental and control group, obtained significantly higher scores for eleventh-grade students with word processing. When she attempted to verify the higher scores with another piece of writing, she obtained similar results. King, Birnbaum, and Wageman (1984) also found higher scores in content, organization, and sentence variety/completeness with the experimental group although the standard-usage scores for these students were not higher than the control group's. Both these studies suggest the possible value of word processing for basic writers.
What, then, has been learned in the past few years about the effects of word processing on writers and their products? In attempting to synthesize findings, there is great danger, as
with all research, of trying to compare the incomparable or of generalizing too broadly to other populations. Sixth-graders' (Levin, Riel, Rowe and Boruta, 1985) strategies regarding errors, for example, are probably not comparable to the writing processes of the novelists Catano (1985) studied. It also makes a difference whether the research examines student populations of writers learning to write or expert populations of writers practicing their craft.(1) To look at the effect of computers in an environment in which they are used to teach writing is probably not the same as examining them in an environment in which they are used to produce writing. In addition, even when researchers carefully define the focus and use the same instrument, such as the Faigley and Witte (1981, 1984) revision taxonomy, they often report the findings in such a way that the results are not comparable.
Yet despite the dissimilarities of the subjects, research methodologies, and analysis tools, some tentative conclusions emerge. Most writers--regardless of age or status--enjoy writing with computers and perceive word processing as a boon to their composing strategies. If research in word processing is to move forward, the implications of this positive attitude and its effect on writing classes need to be examined. Other results can also inform future research. The foregoing discussion suggests the following issues and guidelines for study:
Although the effects of computers have been studied intensively for the past five years or so, we have probably only touched upon their influence on writers and the activity of writing. Menu-driven systems, in which users appear to move deeper and deeper into the bowels of a machine before they can retrieve old text and write new text, illustrate how computers differ profoundly from the old technology of typewriters. Yet, twenty years ago, research into the effects of typewriters on students and writers received some attention. Consider, for example, the following statement, which appears in Research in Written Composition (Braddock, Lloyd-Jones, & Schoer, 1963):
These words echo our concerns regarding word processing and writing, but those of us who write and teach with computers realize we are dealing with a tool that is at once more powerful and more formidable than a typewriter, even in its most advanced form. If we are to make progress in our research so that studies of written composition and word processing are more than temporary explorations, we must continue to examine the effects of computers. But our research must be systematic and reflective, evaluating what we have learned in the past as we move toward the future. Without such an assessment, we have a confusing array of results that are often misinterpreted.
1. I am grateful to Christina Haas for this useful distinction.
Braddock, R., Lloyd-Jones, R., & Schoer, L. (1963). Research in written composition. Champaign, IL: National Council of Teachers of English.
Bridwell, L. S., Johnson, P., & Brehe, S. (in press). Composing and computers: Case studies of experienced writers. In A. Matsuhashi (Ed.), Writing in real time: Modelling production processes. New York: Longman.
Bridwell, L.S., Sirc, G., & Brooke, R. (1985). Revising and computing: Case studies of student writers. In S. Freedman (Ed.), The acquisition of written language: Revision and response. Norwood, NJ: Ablex.
Britton, J. (1978). The composing processes and the functions of writing. In C. R. Cooper and L. Odell (Eds.), Research on composing: Points of departure. Urbana, IL: National Council of Teachers of English.
Catano, J. (1985). Computer-based writing: Navigating the fluid text. College Composition and Communication, 36, 309-316.
Collier, R. M. (1982). The influence of computer-based text editors on the revision strategies of inexperienced writers. (ERIC Document Reproduction Service No. ED 266 719.)
Collier, R. M. (1983). The word processor and revision strategies. College Composition and Communication, 34, 149-155.
Daiute, C. (1984). Can the computer stimulate writers' inner dialogues? In W. Wresch (Ed.), The computer in composition instruction. Urbana, IL: National Council of Teachers of English.
Daiute, C. (1985). Do writers talk to themselves? In S. Freedman (Ed.), The acquisition of written language: Revision and response. Norwood, NJ: Ablex.
Daiute, C. (1986). Physical and cognitive factors in revising: Insights from studies with computers. Research in the Teaching of English, 20, 141-159.
Duling, R. (1985). Word processors and student writing: A study of their impact on revision, fluency, and quality of writing. (Doctoral dissertation, Michigan State University, 1985). Dissertation Abstracts International, 46, 3535A.
Etchison, C. (1986). A comparative study of the quality and syntax of compositions by first year college students using handwriting and word processing. Unpublished manuscript.
Emig, J. (1971). The composing processes of twelfth graders. Urbana, IL: National Council of Teachers of English.
Faigley, L. & Witte, S. (1981). Analyzing revision. College Composition and Communication, 32, 400-414.
Faigley, L. & Witte, S. (1984). Measuring the effects of revision on text structure. In R. Beach & L. S. Bridwell (Eds.), New directions in composition research. New York: Guilford.
Flower, L., Hayes, J. R., Carey, L., Schriver, K., & Stratman, J. (1986). Detection, diagnosis, and the strategies of revision. College Composition and Communication, 37, 16-55.
Gerrard, L. (1982). Using a computerized text editor in freshman composition. (Report). Los Angeles, CA: UCLA Writing Programs. (ERIC Document Reproduction Service No. ED 192 355).
Gerrard, L. (1983). Writing with Wylbur: Teaching freshman composition with a mainframe computer. (Update). Los Angeles, CA: UCLA Writing Programs. Unpublished manuscript.
Gould, J. (1981). Composing letters with computer-based text editors. Human Factors, 23, 593-606.
Haas, C., & Hayes, J. R. (1986). Pen and paper vs. the machines: Writers composing in hard copy and computer conditions (Technical Report No. 16). Pittsburgh, PA: Carnegie-Mellon University.
Harris, J. (1985). Student writers and word processing: A preliminary evaluation. College Composition and Communication, 36, 323-330.
Hawisher, G. (1985). The effects of word processing on the revision strategies of college students. (Doctoral dissertation, University of Illinois, 1985).
Herrmann, A. (1985). Using the computer as a writing tool: Ethnography of a high school writing class. (Doctoral dissertation, Teachers College, Columbia University, 1985).
Hillocks, G. (1986). Research on written composition. Urbana, IL: National Council of Teachers of English.
King, B., Birnbaum, J., & Wageman, J. (1984). Word processing and the basic college writer. In T. Martinez (Ed.), The written word and the word processor. Philadelphia, PA: Delaware Valley Writing Council.
Kurth, R. (1986). Using word processing to enhance revision strategies during student composing. Paper presented at the 1986 American Educational Research Association Conference, San Francisco, CA.
Levin, J., Riel, M., Rowe, M., & Boruta, M. (1985). Muktuk meets jacuzzi: Computer networks and elementary school writers. In S. Freedman (Ed.), The acquisition of written language: Response and revision. Norwood, NJ: Ablex.
Lutz, J. A. (1983). A study of professional and experienced writers revising and editing at the computer and with pen and paper. (Doctoral dissertation, Rensselaer Polytechnic Institute, 1983). Dissertation Abstracts International, 44, 2755A.
Miller, S. (1984). Plugging your pencil into the wall: An investigation of word-processing and writing skills at the middle school level. (Doctoral dissertation, University of Oregon, 1984). Dissertation Abstracts International, 45, 3535A.
Pivarnik, B. (1985). The effect of training in word processing on the writing quality of eleventh grade students. (Doctoral dissertation, University of Connecticut, 1985). Dissertation Abstracts International, 46, 1827A.
Rodrigues, D. (1985). Computers and basic writers. College Composition and Communication, 36, 336-339.
Selfe, C. (1985). The electronic pen: Computers and the composing process. In J. Collins & E. Sommers (Eds.), Writing on-line: Using computers in the teaching of writing. Upper Montclair, NJ: Boynton/Cook.
Womble, G. (1985). Revising and computing. In J. Collins & E. Sommers (Eds.), Writing on-line: Using computers in the teaching of writing. Upper Montclair, NJ: Boynton/Cook.