Much has been written in recent years about the use of word processing in the composition classroom. But most of the studies reporting the effects of word processing on writers--including a large empirical study I helped conduct (Etchison, 1985)--focus on writers of average or above-average ability. The effect of word processing on writing was a major research question in all 24 studies analyzed by Hawisher (1986), but not one of the 24 studies looked exclusively at basic writers. In reviewing relevant literature for a study he conducted, Nichols (1986) states that "no studies examined only basic writers' use of word processing alone" (p. 82).
Recently, a few researchers have begun to look at how word processing affects basic writers. Nash and Schwartz (1988) released data showing that basic writers who use word processing dramatically increase the amount of text they write, the coherence or connectedness of their sentences, and the amount of evidence used to support points in a paper. In light of the limited data, I wanted to examine the effects of word processing on writing quality and the amount of text produced by basic writers.
To obtain data, a small comparative design study was arranged for the fall semester of 1986 at Glenville State College, Glenville, WV. Two sections of basic writers were involved in the study: one section of students using word-processing software on computers and one section of students writing by hand. These writers came from two distinct backgrounds. Thirty percent of the subjects were minority students from large cities such as Washington, DC, and Cleveland, OH. The remaining subjects were from central West Virginia, an economically depressed area of Appalachia where English ACT scores were at or below 15.
The study posed two questions: Would word-processing software on computers encourage the production of text by basic writers? And would word processing affect the quality of the writing basic writers produce?
Data were collected from transactional writing tasks. Students wrote one explanatory essay and one persuasive essay at the beginning of the semester and at the end of the semester. At the beginning of the semester, all students wrote with pen and paper. Students were not told they were involved in a study, only that these first two writing samples would be used by the instructors to make some decisions on what would be covered in the class during the remainder of the semester.
Students were given these pre-test writing tasks prior to the class period in which they began drafting because Bridwell (1979) found that doing so seemed to increase involvement on the part of students. Students were given two hours of class time and as much time as they wanted between class meetings to draft and revise each essay. The instructors made no attempt to influence the students' writing process. In fact, both instructors avoided giving any help to students during the pre-test--even when students requested help. The intent was to obtain representative samples of student writing prior to instruction.
Students completed two different post-test writing tasks at the end of the semester: writing within the context of normal class assignments and using the writing processes developed during the semester. Students in the experimental section wrote their papers on computers with word-processing software, the writing tool they had been using all semester, while students in the control section wrote with pen and paper. The four writing tasks used in this study are found in Appendix A.
The pedagogical approach used for instruction during the semester centered on student/teacher conferencing because Clifford (1981) found that such a pedagogy encourages improved writing quality. Classes were conducted primarily as writing workshops in which individual students had conferences with the instructors while other students worked on their drafts. Multiple drafts were encouraged. Each week, one 30-minute block of class time was spent working on right-branching modification, a syntactic device characteristic of mature writers (Christensen, 1967; Faigley, 1979). Such study is recommended by Hartwell and Bentley (1982) and Christensen (1967).
The two instructors who taught the classes met regularly during the semester to ensure that both the experimental section and the control section were doing the same things during class meetings. Identical writing tasks were used for all assignments in both classes. Furthermore, because the data relied on transactional writing samples, all writing assignments during the semester were transactional.
Instructors made every effort to establish a positive classroom atmosphere--a non-threatening classroom where students were constantly encouraged in their writing. Grades in basic writing classes at Glenville State College appear on students' records, and students receive credit for the classes. However, the credit does not count towards graduation. Hence, the instructors made a conscious effort to use grades to encourage students, and no student was ever given a grade below a C on any writing assignment.
The word-processing students used IBM personal computers and the PC-WRITE word-processing program. Each student received a three-page introduction to PC-WRITE and a training session in the word-processing program one evening, so no classroom time was spent on learning the program. The instructor was always present during class sessions to help students who had any problems mastering the word-processing program. Within two weeks, all students were comfortable with PC-WRITE.
The word-processing section had only one accomplished typist. However, the non-typists did not evidence any frustration at having to proceed with a hunt-and-peck method of typing. During the semester, I asked the non-typists how they felt about using the computers with word-processing software as compared to writing with pen, and all agreed that word-processing software on computers was easier and more satisfying.
As most of us who have taught basic writing know, getting basic writers to write--to produce text--can be a major problem. Basic writers at Glenville State College do not usually rush to writing class, much less begin work on their writing without some encouragement from the instructor. However, during this study, the majority of the students using word-processing software were at their terminals working on their essays five to ten minutes before class was officially scheduled to begin. And when the class period was over, I always had to force students to leave the microcomputer lab so the incoming computer science class could get started. I was surprised (and pleased) at this unexpected behavior. The word-processing software on computers seemed to encourage most students to spend more time producing text as well as working with text in ways not usually seen when basic writers use paper and pen. Such productive behavior in basic writers suggests that word-processing software on computers may be a positive tool in the basic writing classroom.
The students incorporated the use of the word-processing software on computers into their writing strategies in a variety of ways. A couple of the students wrote their first drafts by hand, then typed them on the computer. Some students developed rough outlines--more along the lines of a brainstorm than a formal outline--and typed their initial drafts directly on the computer using their brainstorms as a guide. Some students just started typing away. Whatever the process, the majority of the students consistently spent the whole class period actively working on their texts.
The instructor of the word-processing section and the instructor
of the handwriting section served as tutors/facilitators. As the
instructor in the word-processing section, I conferenced with
students as they requested my help. Sometimes, we worked on the
computers with word-processing software, especially in the early
stages of a draft. As drafts got closer to the final product (which
was to be turned in), students would often want to work on a hard
copy. No one seemed worried about doing extensive revising or
editing because students quickly realized how the word-processing
software did all the drudge work--the insertions, the deletions,
the block moves, and most of all, the boring recopying.
The first question this study sought to answer was whether the
word-processing software on computers would encourage the production
of text by basic writers. Therefore, all words were counted on
the two pre-test writing samples and the two post-test writing
samples. The results were dramatic and correlate with the findings
of Nash and Schwartz (1988). Students using word-processing software
on computers wrote a mean of 621 words more on their post-test
writing tasks than did students using pen and paper (Table 1).
The significance test indicates that the change from the totals
of the two pre-tests to the totals of the two post-tests is significantly
different for word processing and handwriting, F(1, 21) = 22.03,
p < 0.0001.
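For readers who want to see the shape of the comparison, the F statistic reported above comes from a one-way ANOVA with two groups. The sketch below computes that statistic from scratch; the word-count gains it uses are made-up illustrative numbers, not the study's data (which appear in Table 1).

```python
# One-way ANOVA F statistic for two independent groups, computed from
# first principles. The input data below are hypothetical, for
# illustration only; they are not the study's measurements.

def anova_f(group_a, group_b):
    """Return the F statistic for a two-group one-way ANOVA."""
    all_vals = group_a + group_b
    grand_mean = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in (group_a, group_b)]
    # Between-group sum of squares (df = 1 for two groups)
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip((group_a, group_b), means))
    # Within-group sum of squares (df = N - 2)
    ss_within = sum((x - m) ** 2
                    for g, m in zip((group_a, group_b), means)
                    for x in g)
    df_within = len(all_vals) - 2
    return (ss_between / 1) / (ss_within / df_within)

# Hypothetical pre-to-post word-count gains for each student:
word_processing = [650, 700, 580, 720, 610]
handwriting = [120, 90, 150, 60, 110]
print(round(anova_f(word_processing, handwriting), 2))  # F on made-up data
```

A large F relative to its degrees of freedom, as in the study's F(1, 21) = 22.03, indicates that the difference between the group means is unlikely to be due to chance.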
Second, pre-test and post-test samples were subjected to holistic evaluation of writing quality. All writing samples from the pre-test and writing samples from the handwriters' post-test were printed on the same dot matrix printer used by the word-processing section for their post-test writing samples. All samples were randomly mixed so that raters would not know whether they were reading a paper written before or after instruction, or by a student in the experimental or control section. The holistic evaluation was done by experienced writing teachers who also had previous experience in holistic scoring. The raters went through a training session before scoring each task, using anchor papers to establish the criteria for rating papers. Two raters read each paper and scored it on a scale of 1 to 4, 1 being the lowest score and 4 being the highest. Where the two scores differed by more than one point, a third rater scored the paper and the two closest scores were used. This process produced four scores for each pre-test writing sample and four scores for each post-test writing sample. For the ANOVA, all scores for the pre-test were combined, and all scores for the post-test were combined, resulting in a possible high score of 16 or a possible low score of 4 for both pre-test and post-test samples. Appendix B details the criteria developed by the raters for evaluating the papers.
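The rater-reconciliation rule described above can be sketched in a few lines. The function name and structure here are mine, not the study's; the rule itself (two raters, a third reading when scores differ by more than one point, keep the two closest scores) follows the procedure as described.

```python
# Sketch of the holistic-scoring reconciliation rule: two raters score
# each paper 1-4; if their scores differ by more than one point, a third
# rater reads the paper and the two closest scores are kept.

def reconcile(score_a, score_b, third_rater=None):
    """Return the pair of scores kept for one paper."""
    if abs(score_a - score_b) <= 1:
        return (score_a, score_b)
    # Scores differ by more than one point: bring in a third rater
    # and keep the two closest of the three scores.
    c = third_rater()
    pairs = [(score_a, score_b), (score_a, c), (score_b, c)]
    return min(pairs, key=lambda p: abs(p[0] - p[1]))

print(reconcile(1, 2))              # -> (1, 2): within one point
print(reconcile(1, 4, lambda: 3))   # -> (4, 3): the two closest of 1, 4, 3
```

Each student's combined score is then the sum of the four kept scores (two tasks, two scores each), which yields the 4-to-16 range used in the ANOVA.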
Results of the holistic scoring are shown in Table 2.
The significance test showed no significant difference in growth
of writing quality across the semester between students using
word-processing software on computers and students using handwriting,
F(1, 21) = 0.08, p > 0.05.
A study such as this one must be considered in light of what it is: a pilot study with a limited population. When the study began, there were 20 students in each section. By the end of the semester, various forms of attrition had reduced the population by almost half. To make sweeping generalizations about all basic-writer populations based on this study alone would be unfair and unwise. But with that caveat in mind, it does seem possible to make a few observations.
First, in any study based on a small sample population, it is important to know if one or two members of that population skewed the statistics by virtue of extraordinary performance levels. Such was not the case in this study. Measured performances were amazingly uniform for all subjects, so statistical results noted above accurately reflect the population studied.
Second, I do think that the significantly increased production of text by the basic writers using word-processing software on computers indicates the possible advantages of using word-processing software in basic-writing classes. The word-processing software seemed to encourage the production of text to an even greater degree among these basic writers than it did among the large population of college writers I studied earlier. If this is the case, and Nash and Schwartz (1988) indicate similar findings, then I think teachers of basic writers would want to have their students using word processors or word-processing software on computers.
At this point, I do not know why the texts were longer. The increased length may involve greater development of ideas, or it may just be that students are writing more words and not really controlling the flow of words in productive ways. Nash and Schwartz (1988) found that increases in production of text could be traced to students using more evidence to support points being made in their essays. This area certainly requires more research with larger numbers of subjects.
Third, I am not terribly concerned that there were no significant differences in the development of overall writing quality between the students using word-processing software on computers and the students writing by hand. After all, a 15-week semester is a short period of time, especially for basic writers who are often struggling with their lack of writing experience. (Many of the subjects in this study had never written an essay.) Both groups made improvements in their writing, as would be expected given the pedagogical approach employed. Perhaps significant differences might turn up later if the students who used word-processing software on computers continue to do so, for the word-processing software has helped encourage productive writing behaviors, including a willingness to produce text and a willingness to spend time working with text. But questions concerning the development of writing quality will require further research, including, I suspect, longitudinal study.
However, even within the limitations of this study, there are
a number of positive implications. Would I go to the effort to
arrange for my basic-writing classes to have access to word processors
or word-processing software on computers if the opportunity presented
itself again? I would answer with an unqualified "yes."
Craig Etchison is an assistant professor of English
at Glenville State College, Glenville, West Virginia.
The two explanatory writing tasks were these:
The two persuasive tasks were these:
Four transactional writing tasks were used in this study, two
explanatory and two persuasive. The holistic raters developed
rubrics for each task based on sets of anchor papers, with a 4-paper
being the highest quality and a 1-paper being the lowest quality.
While the rubrics for each task had individual characteristics,
it is possible--in order to save space--to collapse all the rubrics
into one, giving the reader a good sense of how raters defined
quality during the scoring sessions. It should also be noted that
the criteria were arranged in descending order from characteristics
the raters considered most important to characteristics they considered
least important in affecting the quality of writing. The rubrics
that were developed are as follows:
Bridwell, L. S. (1979). Revising processes in twelfth grade students'
transactional writing (Doctoral dissertation, University of Georgia).
Dissertation Abstracts International, 40, 5765A.
Christensen, F. (1967). Notes toward a new rhetoric: Six essays
for teachers. New York: Harper & Row.
Clifford, J. (1981). Composing in stages: The effects of collaborative
pedagogy. Research in the Teaching of English, 15(1),
Etchison, C. (1985). A comparative study of the quality and
syntax of compositions by first year college students using handwriting
and word processing. (ERIC Document Reproduction Service No.
ED 282 215)
Faigley, L. (1979). Another look at sentences. Freshman English
News, 7(3), 18-21.
Hartwell, P., & Bentley, R. H. (1982). Open to language.
New York: Oxford University Press.
Hawisher, G. E. (1986). Studies in word processing. Computers
and Composition, 4(1), 6-31.
Nash, J., & Schwartz, L. (1988). Computers and the writing
process. On-Cue, January, 3-5.
Nichols, R. G. (1986). Word processing and basic writers. Journal
of Basic Writing, 5(2), 81-97.