The Application of Case Study Evaluations. ERIC/TM Digest.

Rather than using large samples and following a rigid protocol to examine a limited number of variables, case study methods involve an in-depth, longitudinal examination of a single instance or event. A case study is a systematic way of looking at what is happening, collecting data, analyzing information, and reporting the results. The product is a sharpened understanding of why the instance happened as it did, and of what might be important to examine more extensively in future research. Case studies are therefore especially well suited to generating, rather than testing, hypotheses. Intended for the consumer of case studies, this digest briefly discusses six types of case studies, based on the framework provided by Datta (1990). For each, we present the types of evaluation question that can be answered, the functions served, some design features, and some pitfalls.
TYPES OF CASE STUDIES

Illustrative Case Studies are primarily descriptive; they use one or two instances to show what a situation is like, making the unfamiliar familiar and giving readers a common language about the topic. There are pitfalls in presenting illustrative case studies. They require presentation of in-depth information on each illustration, and there may not be time on-site for in-depth examination. The most serious problem is the selection of instances: the case(s) must adequately represent the situation or program, and where significant diversity exists, it may not be possible to select a typical site.

Exploratory Case Studies are condensed case studies undertaken before implementing a large-scale investigation. Where considerable uncertainty exists about program operations, goals, and results, exploratory case studies help identify questions, select measurement constructs, and develop measures; they also serve to safeguard investment in larger studies. The greatest pitfall of the exploratory study is prematurity: the findings may seem convincing enough to be released inappropriately as conclusions. Other pitfalls include the tendency to extend the exploratory phase and inadequate representation of diversity.

Critical Instance Case Studies examine one or a few sites for one of two purposes. A very frequent application is the examination of a situation of unique interest, with little or no interest in generalizability. A second, rarer, application tests a highly generalized or universal assertion that has been called into question by examining a single instance. This method is particularly suited to answering cause-and-effect questions about the instance of concern. The most serious pitfall in this application is inadequate specification of the evaluation question; probing the underlying concerns in a request is crucial to the appropriate application of the critical instance case study.

Program Implementation Case Studies help discern whether a program is being implemented in compliance with its intent. These case studies are also useful when concern exists about implementation problems. Extensive, longitudinal reports of what has happened over time can set a context for interpreting a finding of implementation variability. In either case, generalization is wanted, and the evaluation questions must be carefully negotiated with the customer. A requirement for good program implementation case studies is investment of sufficient time to obtain longitudinal data and breadth of information. Multiple sites are typically required to answer program implementation questions, which imposes demands on the training and supervision needed for quality control. The combined demands of data management, quality control, validation procedures, and the analytic model (within-site, cross-site, etc.) may lead to cutting too many corners, making quality difficult to maintain.

Program Effects Case Studies can determine the impact of programs and provide inferences about the reasons for success or failure. As with the program implementation case study, the evaluation questions usually require generalizability, and for a highly diverse program it may be difficult to answer the questions adequately while retaining a manageable number of sites. There are methodological solutions to this problem. One is to first conduct the case studies in sites chosen for their representativeness, then verify the findings through examination of administrative data, prior reports, or a survey. Another is to use other methods first; after identifying findings of specific interest, case studies can then be implemented in selected sites to maximize the usefulness of the information.
Cumulative Case Studies aggregate information from several sites collected at different times. The cumulative case study can be retrospective, collecting information across studies done in the past, or prospective, structuring a series of investigations for different times in the future. Retrospective cumulation allows generalization without the cost and time of conducting numerous new case studies; prospective cumulation likewise allows generalization without an unmanageably large number of cases in process at any one time. The techniques for ensuring sufficient comparability and quality, and for aggregating the information, are what constitute the "cumulative" part of the methodology. Two features of the cumulative case study are the case survey method, used as a means of aggregating findings, and backfill techniques. The latter are helpful in retrospective cumulation as a means of obtaining information from authors that permits use of otherwise insufficiently detailed case studies. Opinions vary as to the credibility of cumulative case studies for answering program implementation and effects questions. One authority notes that publication biases may favor programs that seem to work, which could lead to a misleadingly positive view (Berger, 1983). Others are concerned about problems in verifying the quality of the original data and analyses (Yin, 1989).
CONCLUSIONS

We have presented six types of case study application, each with different strengths and limitations. Evaluators considering the case study as an evaluation design must first decide what type of evaluation question they have and then examine the ability of each type of case study to answer it. The crucial next step is determining whether the methodological requirements of the chosen case study method can be met in the situation at hand. Case studies can generate a great deal of data that may not be easy to analyze. Details on conducting a case study, especially with regard to data collection and analysis, can be found in the references listed below.
REFERENCES

Datta, Lois-ellin (1990). Case Study Evaluations. Washington, DC: U.S. General Accounting Office, Transfer Paper 10.1.9.

Miles, Matthew B., and Huberman, A.M. (1984). Qualitative Data Analysis: A Sourcebook of New Methods. Beverly Hills, CA: Sage.

Yin, Robert K. (1989). Case Study Research: Design and Methods. Beverly Hills, CA: Sage.

-----

This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract number RI88062003. The opinions expressed in this report do not necessarily reflect the position or policies of OERI or the Department of Education.
Title: The Application of Case Study Evaluations. ERIC/TM Digest.
Descriptors: * Case Studies; Educational Assessment; * Program Evaluation; Program Implementation; Qualitative Research; * Research Methodology
Identifiers: ERIC Digests
http://ericae.net/edo/ED338706.htm