Theory Meets Practice in Language Arts Assessment. ERIC Digest.

Assessment in the classroom is following contemporary descriptions of learning, thinking, and language use as "processes"--or even as one inseparable "process." A growing body of theory that stresses process and integration recommends instruction that 1) crosses different subject matter; 2) combines various kinds of thinking; and 3) integrates the different language behaviors (Herman et al., 1992). The theory further emphasizes that "thinking," or problem solving, should be a major focus of instruction; another emphasis is on performance--the application of the information and strategies that students learn to situations that are real and meaningful for them. The curricula evolving in schools that embody these beliefs emphasize "ideas" and the reasons for understanding and expressing them. Reading and listening comprehension and effective speaking and writing are defined by such theory as nearly identical "meaning-constructing" processes (Wiggins, 1993).
THEORY INTO PRACTICE

Presumably, the standardized multiple-choice tests widely used for accountability have held schools and teachers accountable by measuring what many educators and the public believe is being--or should be--emphasized in the schools. However, many of these tests have attempted to isolate and categorize both knowledge and sub-behaviors of processes like reading and writing. The testing goal is to report on "objectives" that are easily targeted for practice and which, on the test, are measured by multiple-choice questions. Application and strategy use have presumably been assessed as students attempt to choose a correct answer from several choices. In the opinion of many educators, such responses to the public's concern for accountability have been compatible neither with education as problem solving nor with language use as the construction of meaning.

The result of the use of short-answer or selected-answer assessments has been a narrowing of the curriculum. That this would happen is understandable. When the accountability assessments were instituted, teachers studied the tests to see what was being assessed, since they, as well as the students, were being held accountable for the results. Is it any wonder that many teachers have emphasized what the tests cover and have modeled instruction after them? Since such tests emphasize the recognition of answers and cannot determine whether a student can develop his or her own response, or whether a student can refocus a problem, the instructional emphasis in many classrooms has grown narrower.

This narrowing of the curriculum was exacerbated by textbook authors and publishers, who were pressured to structure textbooks and instructional materials that reflect the content and skills emphasized on the tests. Such textbooks and other materials provide learning activities that mimic what the tests ask students to do. Students have been led, by both published materials and their teachers, to practice isolated objectives and fractured skills applied to sentence-long ideas (Wiggins, 1993). How much meaning construction does such an instructional emphasis promote? What applications of knowledge and learned behavior does it foster? How well do such opportunities reflect genuine student interests, information needs, and purposes for reading and writing?

One reason that current times are so interesting for educators is that the conflicting phenomena just described have created "tension." Pressed in the vise created by what has been called "the era of accountability," which emphasizes recognition and right answers, and by evolving theory, which emphasizes constructing meaning and problem solving, educators have become more articulate in defending the classroom impact of the new theory. There has been an exceptionally keen interest in both process-oriented instruction and process-oriented evaluation of its effects. The concern with more valid forms of assessment has led to the search for "alternative assessments"--that is, for alternatives to the commonly used and highly publicized multiple-choice, standardized tests (Smith, 1991).
PORTFOLIOS AND ASSESSMENT

One of the most important outcomes of the widespread interest in portfolio assessment is that it endorses reliance on teacher and student judgment. This same reliance, however, raises questions about how well portfolio assessment can serve the public's interest in holding schools (and their teachers) accountable. The public, the media, legislators, and employers have been enthusiastic about assessment that has students "apply" what they know; but many understand and trust the fact that multiple-choice tests are normed. Scores on such tests can be compared to the performance of similar students from across the nation, and that comparability makes such audiences more assured about their students' achievements. Portfolios have evolved as individualized and personalized collections of students' achievements, but in the eyes of many education policy makers and the public, they do not meet the need for comparability and educational accountability. On the other hand, the multiple-choice tests have been criticized for emphasizing recognition over construction and for failing to emphasize problem solving.

This dilemma has led to the tryout of new forms of assessment that fall under the heading of "performance" or "authentic" tests. Both these and portfolios are being used in different subject areas. One general form of performance assessment that has evolved emphasizes process by having a student read several texts in order to construct a response to a general problem. The purpose is defined in terms of a problem to be solved, and an audience for the writing task is assigned; both are designed to seem authentic to the student. The criteria for scoring how students organize and develop their responses can be carefully described, and examples of student responses that match different scores can be selected for scorers to follow. This system can be tested to assure that raters who follow the criteria and refer to the example papers give the same--or nearly the same--ratings to the same papers. Thus an assessment that promotes the actual processes of problem solving and idea construction can be made reliable as well (Farr and Tone, 1994).

Many state and local school districts across the country are experimenting with the kind of performance assessment just described. A few are experimenting with ways to use and evaluate portfolios for large-scale assessment as well. The intention has not been to replace or discontinue standardized multiple-choice tests; rather, the interest in alternative forms of assessment appears to reflect a desire to get at the "application" of student learning (Wiggins, 1993).

In response to this trend, authors and publishers of assessment and other educational materials have begun to produce textbooks and instructional materials which cut across content areas, emphasize the construction of meaning and problem solving, and encourage collaborative learning. The new instructional materials and assessments being developed seem to be in sync with each other and with theory and common sense, which emphasize the value of purpose and integration in learning. That is, they hold a view of students as thinkers and problem solvers rather than as empty vessels to be filled with specific information carefully prescribed by a curriculum guide. So now educators have a wider, richer selection of materials and ideas to match to the theories to which they subscribe.
They can also read about educational theory, different instructional approaches, and educational issues and problems--reading that will, one hopes, reflect the increasingly collective determination of educators to have their students learn by doing--doing something that has genuine value and relevance for them. Such choices underline the excitement of education in the 1990s.
REFERENCES

Farr, Roger (1992). "Putting It All Together: Solving the Reading Assessment Puzzle." Reading Teacher, 46(1), 26-37. EJ 449 772

Farr, Roger, and Bruce Tone (1994). Portfolio and Performance Assessment: Helping Students Evaluate Their Progress as Readers and Writers. Orlando, FL: Harcourt Brace and Co. ED 363 864

Herman, Joan L., et al. (1992). A Practical Guide to Alternative Assessment. Alexandria, VA: ASCD. ED 352 389

Smith, Carl B., Ed. (1991). Alternative Assessment of Performance in the Language Arts. Bloomington, IN: ERIC Clearinghouse on Reading and Communication Skills. ED 339 044

Tierney, Robert J., et al. (1991). Portfolio Assessment in the Reading-Writing Curriculum. Norwood, MA: Christopher-Gordon Publishers, Inc. ED 331 055

Wiggins, Grant (1993). "Assessment: Authenticity, Context, and Validity." Phi Delta Kappan, 75(3), 200-08, 210-14. EJ 472 587

-----

This publication was prepared with partial funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract no. RR93002011. Contractors undertaking such projects under government sponsorship are encouraged to express freely their judgment in professional and technical matters. Points of view or opinions, however, do not necessarily represent the official view of the Office of Educational Research and Improvement.
Title: Theory Meets Practice in Language Arts Assessment. ERIC Digest.
Descriptors: Academic Achievement; Elementary Secondary Education; *Evaluation Methods; *Language Arts; *Portfolios [Background Materials]; *Student Evaluation; *Theory Practice Relationship
Identifiers: *Alternative Assessment; *Authentic Assessment; ERIC Digests; Portfolio Approach

http://ericae.net/edo/ED369075.htm