Testing Students with Disabilities. ERIC Digest.

The assessment of students with disabilities has taken on considerable importance since the passage of the Americans with Disabilities Act (ADA) of 1990, although most of the requirements for assessing students were previously justified legally under Section 504 of the Rehabilitation Act of 1973. Generally, the best methods for assessing students with disabilities coincide with legally defensible methods for this activity. Under ADA, a "disability is defined as (a) a physical or mental impairment that substantially limits one or more life activities, (b) a record of such an impairment, or (c) being regarded as having an impairment whether or not the impairment substantially limits major life activities" (Geisinger, 1994, p. 123). ADA requires that assessment of individuals with disabilities be performed with any reasonable accommodations being made. The word "reasonable," of course, is ambiguous, and its meaning differs depending upon the circumstances of the assessment. The considerations involved in assessing students with disabilities are presented below under three related activities: test selection, test administration, and test interpretation. Additional considerations are noted at the conclusion of this digest.
TEST SELECTION

In some cases, where no measures of a given attribute are available with adapted administrations, a counselor might consider how easily he or she can adjust an instrument for use with test takers who have a disability. When published instruments are adapted, however, interpretations of their results should be tentative. Where specialized administrations or administrative procedures are planned for those with common disabilities, we also need to determine whether the testing instrument has norms available for those with various common disabilities--or, in specialized cases, for the specific disability with which the counselor is concerned. Are there parallel interpretive guides for evaluating the assessment results of those with specific disabilities and of those who have taken specialized administrations of the assessment? Finally, are these specialized interpretive guides, if available, based upon empirical reliability and validation research? If the answers to the above questions are not positive, a counselor should consider whether the use of an unvalidated instrument is justified. When counselors adapt a measure themselves (e.g., reading an assessment aloud to a test taker when the standard administration calls for the test taker to read the questions), they are essentially using an unvalidated instrument. As such, we must ask whether the instrument is likely to yield useful information over and above that which is already available from non-test sources. The answer to this question is likely to differ based upon the nature of the decision for which the assessment is being used.
TEST ADMINISTRATION

Some assessments offer specialized administrations for individuals with common types of disabilities. These assessments tend to be either those oriented specifically toward students with disabilities or those in widespread use, such as frequently used college admissions measures. Some accommodations permit continued administration in group settings; others require individual administration. For example, assessments may be available in large-type, Braille, and audiocassette versions for those with visual disabilities. "Time limits can be enforced, extended, or waived altogether. Test takers may be given extra rest pauses, a reader, an amanuensis (a recorder), a sign language interpreter, a tape recorder to register answers, convenient test taking locations and assessment times, and other accommodations as needed to meet their particular requirements" (Geisinger, 1994, p. 124). Accessibility of the assessment site also needs to be considered. Under rare circumstances, it may be necessary for a counselor to adapt a professionally developed assessment device for administration to a specific student. Such an adaptation should be made only when no validated measure exists for the assessment need in question. If a counselor makes an adaptation, he or she must be aware that the scoring, norms, and interpretation are compromised and cannot be used validly. If the adaptation is extremely minor, of course, it may fall within the normal variation of test administration. However, any serious adaptation does jeopardize the value of using a published measure.
TEST INTERPRETATION

When a counselor has adapted an assessment or uses a locally derived adaptation, extreme caution should govern test interpretation. The modified assessment simply is not the same measure as the original version for which norms and validation results exist. In general, results from such a measure are best interpreted by developing hypotheses rather than making decisions (Phillips, 1994). The goal of any interpretation of a modified assessment should be an estimate of the expected result on the comparable standardized assessment. "We wish to know how the person taking an adapted form of a test would have performed if he or she could have taken the test under standardized conditions, assuming that the disabilities did not exist" (Tenopyr, Angoff, Butcher, Geisinger, & Reilly, 1993, p. 2).
ADDITIONAL ISSUES

It also may be appropriate to choose and administer measures that assess the compensatory skills used by persons with disabilities. It makes little sense, for example, to administer an assessment of graph-reading ability to a student with a severe visual disability. It would be more useful to determine how such students work with graphical information (e.g., via textual analysis of material written in Braille) and to provide a direct assessment of that skill. Those purchasing assessment instruments should carefully evaluate all measures to determine the degree to which they have been used with and adapted for students with disabilities. If one is disappointed with the robustness of a measure (Geisinger, 1994) when it is used with students with disabilities, let the publisher know. With enough input, publishers may become more interested in making needed changes. Relatedly, when one discovers measures, administrative modifications, or interpretive strategies that are well suited for use with students with disabilities, share the results. Such findings are too important to keep secret.
REFERENCES

Geisinger, K. F. (1994). Psychometric issues in testing students with disabilities. Applied Measurement in Education, 7, 121-140.

Phillips, S. E. (1994). High-stakes testing accommodations: Validity versus disabled rights. Applied Measurement in Education, 7, 93-120.

Section 504 of the Rehabilitation Act of 1973, 29 U.S.C. § 701 et seq. (1973).

Tenopyr, M. L., Angoff, W. H., Butcher, J. N., Geisinger, K. F., & Reilly, R. R. (1993). Psychometric and assessment issues raised by the Americans with Disabilities Act (ADA). The Score, 15(4), 1-2, 7-15.

Kurt F. Geisinger is Dean of the College of Arts and Sciences and Professor of Psychology at the State University of New York at Oswego. Janet F. Carlson is Assistant Professor of Counseling and Psychological Services at the State University of New York at Oswego.

ERIC Digests are in the public domain and may be freely reproduced and disseminated. This publication was funded by the U.S. Department of Education, Office of Educational Research and Improvement, Contract No. RR93002004. Opinions expressed in this report do not necessarily reflect the positions of the U.S. Department of Education, OERI, or ERIC/CASS.
Title: Testing Students with Disabilities. ERIC Digest.
Descriptors: *Accessibility [for Disabled]; Assistive Devices [for Disabled]; Communication Aids [for Disabled]; *Disabilities; Elementary Secondary Education; Evaluation Methods; *Evaluation Problems; Evaluation Research; Special Education; Test Construction; *Test Interpretation; *Test Selection
Identifiers: Americans with Disabilities Act 1990; ERIC Digests; *Testing Accommodations [Disabilities]
http://ericae.net/edo/ED391984.htm