
Program evaluation models and related theories: AMEE Guide No. 67

ANN W. FRYE¹ & PAUL A. HEMMER²

¹Office of Educational Development, University of Texas Medical Branch, 301 University Boulevard, Galveston, Texas 77555-0408, USA; ²Department of Medicine, Uniformed Services University of the Health Sciences, F. Edward Hebert School of Medicine, Bethesda, MD, USA

Med Teach 2012; 34: e288–e299. DOI: 10.3109/0142159X.2012.668637

Correspondence: Ann W. Frye, Office of Educational Development, University of Texas Medical Branch, 301 University Boulevard, Galveston, Texas 77555-0408, USA. Tel: 409-772-2791; fax: 409-772-6339; email: awfrye@utmb.edu

Abstract

This Guide reviews theories of science that have influenced the development of common educational evaluation models. Educators can be more confident when choosing an appropriate evaluation model if they first consider the model's theoretical basis against their program's complexity and their own evaluation needs. Reductionism, system theory, and (most recently) complexity theory have inspired the development of models commonly applied in evaluation studies today. This Guide describes experimental and quasi-experimental models, Kirkpatrick's four-level model, the Logic Model, and the CIPP (Context/Input/Process/Product) model in the context of the theories that influenced their development and that limit or support their ability to do what educators need. The goal of this Guide is for educators to become more competent and confident in being able to design educational program evaluations that support intentional program improvement while adequately documenting or describing the changes and outcomes—intended and unintended—associated with their programs.

Practice points

• Educational programs are fundamentally about change; program evaluation should be designed to determine whether change has occurred.
• Change can be intended or unintended; program evaluation should examine for both.
• Program evaluation studies have been strongly influenced by reductionist theory, which attempts to isolate individual program components to determine associations with outcomes.
• Educational programs are complex, with multiple interactions among participants and the environment, such that system theory or complexity theory may be better suited to informing program evaluation.
• The association between program elements and outcomes may be non-linear—small changes in program elements may lead to large changes in outcomes and vice versa.
• Always keep an open mind—if you believe you can predict the outcome of an educational program, you may be limiting yourself to an incomplete view of your program.
• Choose a program evaluation model that allows you to examine for change in your program and one that embraces the complexity of the educational process.

Introduction

Program evaluation is an essential responsibility for anyone overseeing a medical education program. A "program" may be as small as an individual class session, a course, or a clerkship rotation in medical school, or it may be as large as the whole of an educational program. The "program" might be situated in a medical school, during postgraduate training, or throughout continuing professional development. All such programs deserve a strong evaluation plan. Several detailed and well-written articles, guides, and textbooks about educational program evaluation provide overviews and focus on the "how to" of program evaluation (Woodward 2002; Goldie 2006; Musick 2006; Durning et al. 2007; Frechtling 2007; Stufflebeam & Shinkfield 2007; Hawkins & Holmboe 2008; Cook 2010; Durning & Hemmer 2010; Patton 2011). Medical educators should be familiar with these and have some of them available as resources.

This Guide will be most helpful for medical educators who wish to familiarize themselves with the theoretical bases for common program evaluation approaches so that they can make informed evaluation choices. Educators engaged in program development or examining an existing educational program will find that understanding theoretical principles related to common evaluation models will help them be more creative and effective evaluators. Similar gains will apply when an education manager engages an external evaluator or is helping to evaluate someone else's program. Our hope is that this Guide's focus on several key educational evaluation models in the context of their related theories will enrich all educators' work.

A focus on change

We believe that educational programs are fundamentally about change. Most persons participating in educational programs—including learners, teachers, administrators, other health professionals, and a variety of internal and external stakeholders—do so because they are interested in change. While a program's focus on change is perhaps most evident for learners, everyone else involved with that program also participates in change. Therefore, effective program evaluation should focus, at least in part, on change: Is change occurring? What is the nature of the change? Is the change deemed "successful"? This focus directs that program evaluation should look for both intended and unintended changes associated with the program. An educational program itself is rarely static, so an evaluation plan must be designed to feed information back to guide the program's continuing development. In that way, the program evaluation becomes an integral part of the educational change process.

In the past, educational program evaluation practices often assumed a simple linear (cause-effect) perspective when assessing program elements and outcomes. More recent evaluation scholarship describes educational programs as complex systems with nonlinear relationships between their elements and program-related changes. Program evaluation practices now being advocated account for that complexity. We hope that this Guide will help readers: (1) become aware of how best to study the complex change processes inherent in any educational program, and (2) understand how appreciating program complexity and focusing on change-related outcomes in their evaluation processes will strengthen their work.

In this Guide, we first briefly define program evaluation, discuss reasons for conducting educational program evaluation, and outline some theoretical bases for evaluation models. We then focus on several commonly used program evaluation models in the context of those theoretical bases. In doing so, we describe each selected model, provide sample evaluation questions typically associated with the model, and then discuss what that model can and cannot do for those who use it.
We recommend that educators first identify the theories they find most relevant to their situation and, with that in mind, then choose the evaluation model that best fits their needs. They can then establish the evaluation questions appropriate for evaluating the educational program and choose the data-collection processes that fit their questions.

Program evaluation defined

At the most fundamental level, evaluation involves making a value judgment about information that one has available (Cook 2010; Durning & Hemmer 2010). Thus, educational program evaluation uses information to make a decision about the value or worth of an educational program (Cook 2010). More formally defined, the process of educational program evaluation is the "systematic collection and analysis of information related to the design, implementation, and outcomes of a program, for the purpose of monitoring and improving the quality and effectiveness of the program" (ACGME 2010a). As is clear in this definition, program evaluation is about understanding the program through a routine, systematic, deliberate gathering of information to uncover and/or identify what contributes to the "success" of the program and what actions need to be taken in order to address the findings of the evaluation process (Durning & Hemmer 2010). In other words, program evaluation tries to identify the sources of variation in program outcomes both from within and outside the program, while determining whether these sources of variation or even the outcome itself are desirable or undesirable. The model used to define the evaluation process shapes that work.

Information necessary for program evaluation is typically gathered through measurement processes. Choices of specific measurement tools, strategies, or assessments for program evaluation processes are guided by many factors, including the specific evaluation questions that define the desired understanding of the program's success or shortcomings. In this Guide, we define "assessments" as measurements (assessment = assay) or the strategies chosen to gather information needed to make a judgment. In many medical education programs, data from trainee assessments are important to the program evaluation process. There are, however, many more assessments (measurements) that may be necessary for the evaluation process, and they may come from a variety of sources in addition to trainee performance data. Evaluation, as noted earlier, is about reviewing, analyzing, and judging the importance or value of the information gathered by all these assessments.

Reasons for program evaluation

Educators often have both internal and external reasons for evaluating their programs. Primary external reasons are often found in requirements of medical education accreditation organizations (ACGME 2010b; LCME 2010), funding sources that support educational innovation, and other groups or persons to whom educators are accountable. A strong program evaluation process supports accountability while allowing educators to gain useful knowledge about their program and sustain ongoing program development (Goldie 2006). Evaluation models have not always supported such a range of needs. For many years, evaluation experts focused on simply measuring program outcomes (Patton 2011). Many time-honored evaluation models remain available for that limited but important purpose.
Newer evaluation models support learning about the dynamic processes within the programs, allowing an additional focus on program improvement (Stufflebeam & Shinkfield 2007; Patton 2011).
