Realizing the Promise and Importance of Performance-Based Assessment


This article was downloaded by: [UAA/APU Consortium Library] On: 15 October 2014, At: 14:41
Publisher: Routledge

Teaching and Learning in Medicine: An International Journal
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/htlm20

Realizing the Promise and Importance of Performance-Based Assessment
Jennifer R. Kogan (a) & Eric Holmboe (b)
(a) Division of General Internal Medicine, Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
(b) American Board of Internal Medicine, Philadelphia, Pennsylvania, USA
Published online: 18 Nov 2013.

To cite this article: Jennifer R. Kogan & Eric Holmboe (2013) Realizing the Promise and Importance of Performance-Based Assessment, Teaching and Learning in Medicine: An International Journal, 25:sup1, S68–S74, DOI: 10.1080/10401334.2013.842912
To link to this article: http://dx.doi.org/10.1080/10401334.2013.842912
Teaching and Learning in Medicine, 25(S1), S68–S74
Copyright © 2013, Taylor & Francis Group, LLC
ISSN: 1040-1334 print / 1532-8015 online
DOI: 10.1080/10401334.2013.842912

Realizing the Promise and Importance of Performance-Based Assessment

Jennifer R. Kogan
Division of General Internal Medicine, Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA

Eric Holmboe
American Board of Internal Medicine, Philadelphia, Pennsylvania, USA

Work-based assessment (WBA) is the assessment, across the educational continuum, of trainees' and physicians' day-to-day competencies and practices in authentic clinical environments. What distinguishes WBA from other assessment modalities is that it enables the evaluation of performance in context. In this perspective, we describe the growing importance, relevance, and evolution of WBA as it relates to competency-based medical education, supervision, and entrustment.
Although a systematic review is beyond the purview of this perspective, we highlight specific methods and needed shifts in WBA that (a) consider patient outcomes, (b) use nonphysician assessors, and (c) assess the care provided to populations of patients. We briefly describe strategies for the effective implementation of WBA and identify outstanding research questions related to its use.

Keywords: work-based assessment, competence

INTRODUCTION

Undergraduate and graduate medical education training primarily occurs in the clinical workspace, where trainees participate in and learn by caring for patients under graded levels of supervision. Throughout training, students, residents, and fellows are assessed to ensure they are achieving or have acquired the necessary competencies to enter unsupervised practice. Multiple modalities exist to assess learners, including assessments that test knowledge (e.g., multiple-choice exams), application of knowledge (e.g., problem-based learning sets), or ability to demonstrate skills (e.g., standardized patients and simulation).1,2 Of increasing importance is workplace-based assessment (WBA). Most broadly, WBA is the assessment of day-to-day (habitual) practices in the authentic, clinical environment, or "the assessment of what doctors actually do in practice."3,4 What distinguishes WBA from other assessment modalities is that it enables the evaluation of performance in context.2

In this perspective, we describe the growing importance, relevance, and evolution of WBA as it relates to competency-based medical education (CBME), supervision, and entrustment.

Correspondence may be sent to Eric Holmboe, American Board of Internal Medicine, 510 Walnut Street, 17th Floor, Philadelphia, PA 19106. E-mail: [email protected]
Although a systematic review is beyond the purview of this perspective, we provide examples of WBA instruments, in particular highlighting needed shifts in WBA that (a) consider patient outcomes, (b) use nonphysician assessors, and (c) assess the care provided to populations of patients. We briefly describe strategies for the effective implementation of WBA. Finally, we identify outstanding research questions about WBA whose answers might help ensure that its full potential is realized.

THE GROWING IMPORTANCE OF WORKPLACE-BASED ASSESSMENT

In the United States, Canada, and Europe, the competencies trainees must acquire and guidelines for their assessment have been articulated.5–7 CBME has refocused assessment on measuring the outcomes of training (e.g., specific behaviors and skills) rather than using time or other removed measures as a proxy for competence.8,9 To determine how a trainee is actually performing, he or she has to be assessed when engaged in the complexity of day-to-day clinical work.
Therefore, WBA has taken on particular importance as an assessment approach situated at the top of Miller's pyramid, assessing "does": what a trainee actually does in practice.2 WBAs can evaluate multiple, essential competencies simultaneously, in an integrated fashion, in the authenticity of day-to-day practice.10 This is crucial because mastery of medical knowledge and ability as assessed in a controlled environment are insufficient to ensure clinical competence and to predict actual day-to-day performance.11,12

The adoption of CBME requires a holistic view of trainee performance using meaningful and integrated measures.9 The guided assessment of trainees' competencies in WBA not only allows for observing and judging trainee competence but also can provide meaningful information about how trainees' abilities affect the most important stakeholder in medical education: the patient.
The CBME movement was in part catalyzed by the public's criticism of the current healthcare and medical education systems' inability to meet the needs of patients, populations of patients, and the public health systems that serve those patients.13 Therefore, WBA is an important mechanism to ensure that the medical education community is accountable for the competence of the trainees it graduates and for enhanced patient safety and quality during training.14 Going forward, it will become increasingly important that the information derived from WBA be able to assess all elements of high-quality patient care as defined by the Institute of Medicine.15

Mandates for enhanced supervision of trainees also elevate the importance of WBA, because these assessments, when done well, can inform clinical supervisors about how they need to supervise the trainee immediately and going forward.14,16,17 Ultimately, it is WBAs that provide the information necessary to make entrustment decisions granting trainees graduated levels of independence to perform clinical responsibilities without direct supervision.18

We have seen yet another shift in the rationale for undertaking WBA.
Early descriptions of WBA predominantly focused on assessment of learning as the endpoint.19 The main purpose of assessment was to determine whether the trainee met summative ("pass/fail") performance standards or successfully completed a course of study.20 Now there is a shift from "assessment of learning" to "assessment for learning," in which assessment should drive or catalyze learning.21,22 The utility elements of WBA extend beyond reliability, validity, practicability, acceptability, and cost-effectiveness and now include its catalytic effect in driving future learning and professional development.22,23 WBA should be able to assist assessors in providing trainees with meaningful feedback that enables them to reach their full potential.4,20 Feedback should direct learning toward the desired outcome and help trainees improve the care they provide to patients in the training setting.24,25 This is a fundamental and critical shift in thinking: WBA must evolve from assessments of just the trainee to assessments that incorporate the impact of trainees' competence on the quality of care provided to the patient, in turn underpinning decisions regarding supervision and entrustment.

If WBAs are to catalyze learning, assessments must be more than just numerical ratings. Rich narrative, qualitative assessments are increasingly being viewed as important in achieving this goal.9 Medical practice is complex, as is assessment of its competencies in an integrated fashion. Given the complexity inherent in the competencies, mechanisms are needed to assess the totality of care, not just the sum of its parts.9 Shifting from numerical ratings to qualitatively rich observations that drive a narrative can better inform the specific feedback that is essential for trainees' ongoing learning and clinical skills growth.26

EVOLVING WORKPLACE-BASED ASSESSMENT

Assessment of the single patient–physician encounter has been a mainstay of WBA for many years.
Multiple assessment instruments have been created for direct observation of the learner–patient clinical encounter.27 Most studied is the mini-clinical evaluation exercise (mini-CEX).27 More than 20 studies have demonstrated that the mini-CEX possesses good reliability and validity properties, and learners self-report that they find the mini-CEX experience to be helpful and useful.28,29 Other observation tools targeting more specific competencies include the Direct Observation of Practical Skills, the Objective Structured Assessment of Technical Skills, and the Professionalism mini-CEX, which possess reasonable utility for formative purposes.30–32 Unfortunately, no study to date has demonstrated that any of these direct observation tools and assessment processes leads to actual practice changes or improved patient care.27,33

Another WBA method involving a single trainee–patient encounter is chart-stimulated recall (CSR), also known as case-based discussion.33–36 These methods assess the trainee through questioning about and discussion of clinical decisions, using the patient's medical record as the stimulus. Early research with CSR as part of emergency medicine certification in the United States, and more recently as part of the Physician Achievement Review (PAR) program in Canada, has found CSR to be a valid and useful method for assessing clinical reasoning.37,38 Case-based discussion is a useful and established part of postgraduate assessment in the United Kingdom.34–36 The value of both methods is that assessment of clinical reasoning is performed using care delivered to actual patients.

Finally, the ability to recognize one's own clinical gaps and address them to improve care using evidence-based resources is growing in importance as the amount of medical information far exceeds the capacity of the human mind to retain it.
Although much work remains to be done, tools such as the American Board of Internal Medicine's point-of-care clinical question module provide a systematic approach to documenting evidence-based practice skills tied to actual patient care encounters.39

CRITERION WORK-BASED ASSESSMENT FOCUSED ON PATIENT OUTCOMES

All single-encounter assessment tools suffer from poor intra- and interrater reliability, rater idiosyncrasy, and variable frames of reference for judgment and standards.27,40–43 The primary strategy for dealing with these limitations has been to involve multiple raters over time, improving reliability and, to some extent, validity. Although important, this strategy fundamentally fails to recognize the aforementioned fact that a patient is also involved in the observational process and is entitled to high-quality care. Therefore, the future of direct observation will need to center the assessment process on the patient; this approach is well aligned with CBME's focus on developmental, criterion-referenced assessment.25

Already, we are seeing a shift in the rating scales of WBA forms and in the anchors used to ground the ratings. Traditionally, WBA scale anchors used ordinal (degree of "merit") or comparative levels of performance.27 However, normative assessment fails to ensure that trainees meet competence standards.
The shift to criterion-referenced assessment, in which trainees are compared to standard criteria, holds promise by grounding WBA in a trainee's readiness for independent practice or entrustment in the tasks of the profession.17,44 Early work suggests that shifting from normative scales to those grounded in trainee competence results in response scales that are better aligned to the reality map of the evaluators.44 There is optimism that criterion-referenced WBA scales will, in turn, increase assessor discrimination and decrease interrater variability, thereby decreasing the number of assessments required to achieve good reliability.45

EXPANDING THE ASSESSOR POOL TO INCLUDE NONPHYSICIAN HEALTH PROFESSIONALS AND PATIENTS

To date, WBA evaluators, as exemplified by the assessment methods just described, have been primarily physicians. However, the value of WBA will be enhanced by expanding the assessor pool to include individuals who can provide additional perspectives about how the learner performs within clinical systems.
Multisource feedback (MSF) is an established WBA method that can provide rich information from patients and other healthcare providers to learners and practicing physicians across an entire career.46 Typically, MSF instruments are psychometrically based surveys; some instruments also include a physician self-assessment, allowing physicians to compare and contrast self-ratings with those from other raters.47 Assessments by nonphysician health professionals such as nurses, social workers, physical therapists, pharmacists, and support staff are indispensable given the increasing focus on interprofessional, team-based care and systems-based practice.5,6,48,49 Effective teamwork improves care and reduces medical errors; ensuring that future physicians acquire these necessary competencies during training is essential.50,51

MSF is particularly valuable for assessing interpersonal skills, communication, professionalism, and interprofessional teamwork, the last an important component of systems-based practice.46 Although the majority of MSF instruments are psychometrically based, qualitative or mixed approaches are beginning to receive renewed attention.
The MSF Team Effectiveness Assessment Module developed by the American Board of Internal Medicine combines a survey with written comments to create a rich feedback report for hospital-based physicians on their teamwork behaviors and interprofessional communication skills.52 This instrument encourages the physician to review the results with a trusted peer, a best practice in MSF, to help guide the physician's personal improvement plan.52 Research to date demonstrates that MSF is best used for formative assessment, and although most of the developmental work has occurred among practicing physicians, MSF use is rapidly growing in graduate medical education.47,53 Use of peer and patient surveys in the maintenance of certification has led to self-reported changes in practice and improvements in care.54,55 In Canada, the MSF used in the PAR program across multiple disciplines has also led to self-reported changes in practice.46,47 Peer surveys are a core component of the Foundation Programme in the United Kingdom.36,37

Perhaps the most important shift is the growing inclusion of the patient perspective, both on the experience of care and on patient-identified important healthcare outcomes such as functional status. The Institute of Medicine identified patient-centered care as a core competency for all healthcare providers, and patient-centered care is an essential part of the triple aim for improving U.S. healthcare.15 To determine whether care is patient centered, obtaining the patient perspective is essential.15,56 Increasingly, patient surveys are being used as stand-alone assessment tools to provide feedback and judge quality.57–60 Some of the best-known instruments are the PAR (Canada)61 and the family of CAHPS surveys (United States).46,57,60 Patient surveys can focus on single encounters or on the experience with a practice and/or provider over time.
Evidence shows that such survey assessments help physicians improve their communication skills and quality of care.54,62 Clinimetric approaches that specifically use open-ended questions can provide rapid, point-of-care feedback to physicians and practices but to date have not been widely adopted.63 Moving forward, WBA must include the patient's "voice," using a suite of validated surveys and tools at both the individual and population levels.

EXPANDING ASSESSMENTS TO MEASURE CARE PROVIDED TO POPULATIONS OF PATIENTS

Although there is little doubt that assessment of single patient encounters using tools such as direct observation will remain vitally important, most physicians (both inpatient and outpatient) care for populations of patients with acute and chronic conditions. Therefore, we must expand our thinking about WBA from a single observation of a provider with a patient to an assessment of the care that is provided to populations (or groups) of patients. Assessment methods for examining performance across groups of patients include the surveys just highlighted but also clinical care audits (also known as medical record audits) using performance measures that target processes and outcomes of care.64,65 Process measures typically target whether and how a clinical process was accomplished, such as ordering key tests (e.g., hemoglobin A1c in patients with diabetes), procedures (e.g., screening mammography), and therapies (e.g., aspirin in patients with coronary artery disease); appropriately tracking coordination and follow-up of care; or not performing unnecessary tests (e.g., imaging for low back pain). Outcome measures can be intermediate (e.g., blood pressure and glycemic control) or direct (e.g., surgical complications and mortality).
Outcome measures are harder to create and often require larger patient samples to detect meaningful differences, or signals, in care. As a result, they are more challenging to measure at the individual practitioner level and are often best analyzed at the practice level.66 This certainly limits their use at the individual physician-in-training level, but that does not mean trainees should not participate in and use group-level outcomes performance data to learn and improve.

Several studies, including a systematic review by the Cochrane Collaboration, have demonstrated that audit with feedback can lead to modest but meaningful improvements in the care provided by practicing physicians and physicians-in-training.67–69 Increasingly, registries, such as the Society of Thoracic Surgeons and National Cardiovascular Disease registries, are being used to track performance over time and help drive improvement for practicing physicians.70,71 Use of registries in graduate medical education has been limited to date but will soon be a required competency of all physicians. In the United States, audit with feedback is a component of some residency accreditation criteria, and recently the American Board of Family Medicine added a requirement that residents complete an evaluation of performance in practice as part of certification.72 Procedure logs have been used for some time in the surgical disciplines and represent a "quasi-registry" whereby residents and fellows track their "cases," including complications. Over time, such procedural logs should morph into more robust registries or portfolios that the trainee can continue to use when entering unsupervised practice.
The bottom line is that assessment through performance measurement is now a vital method for evaluating competence and improving quality and safety, and learners should be introduced to this WBA method as early as possible. Performance audits using validated quality and safety measures are well aligned with patient-centeredness and, importantly, target the key competencies of practice-based learning and improvement and systems-based practice, two competencies essential to providing high-quality, safe patient care.

Another evolving outcome-focused WBA measure is the patient-reported outcome measure (PROM), which targets meaningful functional outcomes for patients.73 For example, the United Kingdom is using PROMs in its quality programs to assess functional status after herniorrhaphy, total knee and hip arthroplasty, and varicose vein surgery.74 Although preventing venous thrombosis and infection after arthroplasty is important, ultimately the long-term outcome of interest is whether the patient has better mobility and function. Many tools are already available for training programs to use to assess how their patients are doing functionally.73,74 The major challenge with PROMs is that most instruments were developed for research purposes, so further work is needed to develop instruments for routine clinical use. Despite this, PROMs will increasingly need to find their way into graduate medical education.

As WBA evolves to focus on populations of patients, using quality and patient safety as the primary frame of reference, approaches to aggregate and judge overall performance will be needed.
Portfolios are one approach that continues to show promise as a mechanism for trainees to use multiple WBAs for reflection and improvement.75 WBAs are essential to inform competency committee deliberations in the milestone- and entrustable professional activity (EPA)-based assessment systems currently being implemented in both the United States and Canada, and portfolios are a useful mechanism to organize and analyze WBAs.

EFFECTIVE IMPLEMENTATION OF WBA

The WBA literature has, to a large degree, focused on developing, assessing, and refining WBA tools in the hope of creating the "ideal tool." This has led to an overemphasis on the tools rather than on the users of the tools. What has become apparent is that we have plenty of WBA tools from which to choose.4,27,46,67 To realize the full promise of WBA, it is the rater, not the instrument, that requires the most refining.76,77 This is in contrast to assessment tools lower on Miller's pyramid, for which revising the wording of test questions, responses, and checklists can improve the reliability and validity of the assessment. Improving the quality of WBA is not so much about "fixing the form" as about training the rater. Human judgment is central to WBA, so the quality and expertise of the person making the judgment are decisive for the quality of the assessment.26 Effective implementation of any WBA method will require development of the assessor in making observations, synthesizing those observations into a judgment, and providing effective feedback to the learner.

In addition to rater training, there must be thoughtful approaches to implementing WBA in the training program. Clinical skills must be sampled across multiple contexts because clinical skills are context specific. It is also necessary to sample across assessors to overcome the subjectivity of assessments.
Together, sampling across contexts and assessors permits more generalizable inferences that can predict future performance.10 Given the availability of multiple WBA methods, different WBAs can be used to assess the same competency (triangulation), and assessment of different competencies in this way can constitute programs of assessment.78,79

RECOMMENDATIONS FOR A FUTURE RESEARCH AGENDA

Despite tremendous strides in advancing the quality and effectiveness of WBA, there is still a need to realize its full potential. Future research needs to clarify best rater-training practices that improve both the quality of assessments and the feedback given to trainees, to catalyze future learning and improve patient care.76,77 Research studies also need to identify strategies for increasing trainee receptiveness to WBA feedback.26 We need research that will identify best practices for eliciting rich narratives, along with strategies for collating and interpreting them rigorously, to ensure that good judgments are made about trainees.80 To date, outcomes-based research on WBA has largely focused on feasibility, learner or assessor satisfaction, or self-reported changes in knowledge, skills, or attitudes.33 Research study designs that can demonstrate conclusive links between WBA, the resultant feedback, learner improvement, and improvement in care delivery and patient outcomes are needed.33,81 We need more research on newer WBA approaches that use criterion-based assessments focused on patient outcomes, use nonphysician health professionals and patients as assessors, and focus on populations of patients (e.g., audits, PROMs). Finally, it is necessary to intensify research focused on programs of assessment, demonstrating how WBA instruments can be used together.
CONCLUSIONS

Medical education is at a promising crossroads at which there are opportunities to better utilize WBA to align the assessment of trainee competency with the outcomes that will meet the needs of both individual patients and populations of patients. WBA can provide information about what trainees are actually able to do, further advancing assessment from a time-based or normative paradigm to one that is competency based and criterion referenced. In addition, there is promise in using WBA to enhance the quality of patient care by better informing decisions about supervision and entrustment. It is now time to expand our conceptualization of WBA beyond the assessment by a physician of a single patient–provider interaction to assessments that more fully represent the additional competencies now required of physicians, such as patient-centered care, population care, teamwork, practice-based learning and improvement, and systems-based practice. To reach this full potential, time and resources are needed to identify best practices for tool implementation, rather than tool development, and best approaches for training assessors to use these tools. Only then will we reach the full promise of what these assessments can provide, both for our trainees and for the patients they care for.

REFERENCES

1. Epstein RM. Assessment in medical education. New England Journal of Medicine 2007;356:387–96.
2. Miller GE. The assessment of clinical skills/competence/performance. Academic Medicine 1990;65:S63–7.
3. Swanwick T, Chana N. Workplace-based assessment. British Journal of Hospital Medicine (London) 2009;70:290–3.
4. van der Vleuten C, Verhoeven B. In-training assessment developments in postgraduate education in Europe. ANZ Journal of Surgery 2013. doi:10.1111/ans.12190
5. Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Affairs 2002;21:103–11.
6. CanMEDS.
http://www.royalcollege.ca/portal/page/portal/rc/canmeds
7. General Medical Council. Good Medical Practice. http://www.gmc-uk.org/guidance/good_medical_practice.asp
8. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: From Flexner to competencies. Academic Medicine 2002;77:361–7.
9. Carraccio CL, Englander R. From Flexner to competencies: Reflections on a decade and the journey ahead. Academic Medicine 2013;88.
10. Govaerts MJB, van der Vleuten CPM, Schuwirth LWT, Muijtjens AMM. Broadening perspectives on clinical performance assessment: Rethinking the nature of in-training assessment. Advances in Health Sciences Education: Theory and Practice 2007;12:239–60.
11. Rethans J, Sturmans F, Drop R, van der Vleuten C, Hobus P. Does competence of general practitioners predict their performance? Comparison between examination setting and actual practice. British Medical Journal 1991;303:1377–80.
12. Kopelow M, Schnabl G, Hassard T, Klass D, Beazley G, Hechter F, et al. Assessing practicing physicians in two settings using standardized patients. Academic Medicine 1992;67(Suppl 10):S19–21.
13. Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, et al. Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet 2010;376:1923–58.
14. Institute of Medicine. Resident duty hours: Enhancing sleep, supervision, and safety. December 2008. http://www.iom.edu/Reports/2008/Resident-Duty-Hours-Enhancing-Sleep-Supervision-and-Safety.aspx. Accessed July 8, 2013.
15. Institute of Medicine. Crossing the quality chasm: A new health system for the 21st century. March 2001. http://www.iom.edu/~/media/Files/Report%20Files/2001/Crossing-the-Quality-Chasm/Quality%20Chasm%202001%20%20report%20brief.pdf. Accessed July 8, 2013.
16. Kennedy TJ, Lingard L, Baker GR, Kitchen L, Regehr G. Clinical oversight: Conceptualizing the relationship between supervision and safety.
Journal of General Internal Medicine 2007;22:1080–5.
17. Ten Cate O, Snell L, Carraccio C. Medical competence: The interplay between individual ability and the healthcare environment. Medical Teacher 2010;32:669–75.
18. Ten Cate O. Trust, competence, and the supervisor's role in postgraduate training. British Medical Journal 2006;333:748–51.
19. Kroboth FJ, Kapoor W, Brown FH, Karpf M, Levey GS. A comparative trial of the clinical evaluation exercise. Archives of Internal Medicine 1985;145:1121–3.
20. Schuwirth LW, van der Vleuten CPM. General overview of the theories used in assessment: AMEE Guide No. 57. Medical Teacher 2011;33:783–97.
21. Martinez ME, Lipson JL. Assessment for learning. Education Leader 1989;47:73–5.
22. Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Medical Teacher 2011;33:206–14.
23. van der Vleuten CPM. The assessment of professional competence: Developments, research, and practical implications. Advances in Health Sciences Education 1996;1:41–67.
24. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher 2007;29:855–71.
25. Kogan JR, Conforti LN, Iobst W, Holmboe E. Rater cognition as a patient care problem for medical education. Academic Medicine 2013; In press.
26. Schuwirth LW, van der Vleuten CPM. Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher 2011;33:478–85.
27. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: A systematic review. Journal of the American Medical Association 2009;302:1316–26.
28. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): A preliminary investigation. Annals of Internal Medicine 1995;123:795–99.
29. Norcini JJ, Blank LL, Duffy FD, Fortna GS.
The mini-CEX: A method for assessing clinical skills. Annals of Internal Medicine 2003;138: 476–81. 30. Wilkinson JR, Crossley JG, Wragg A, Mills P, Cowan G, Wade W. Imple- menting workplace-based assessment across the medical specialties in the United Kingdom. Medical Education 2008;42:364–73. 31. Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative “bench station” examination. American Journal of Surgery 1997;173:226–30. D ow nl oa de d by [ U A A /A PU C on so rt iu m L ib ra ry ] at 1 4: 41 1 5 O ct ob er 2 01 4 ASSESSMENT IN MEDICAL EDUCATION S73 32. Cruess R, McIlroy JH, Cruess S, Ginsburg S, Steinert Y. The profes- sionalism mini-evaluation exercise: A preliminary investigation. Academic Medicine 2006;81(10 Suppl):S74–8. 33. Miller A, Archer J. Impact of workplace based assessment on doctors’ education and performance: a systematic review. British Medical Journal 2010;341:c5064. 34. Jennett PA, Scott SM, Atkinson MA. Patient charts and office management decisions: Chart audit and chart stimulated recall. Journal of Continuing Education in the Health Professions 1995;15:31–9. 35. Carr S. The Foundation Programme assessment tools: An opportunity to en- hance feedback to trainees? Postgraduate Medical Journal 2006;82:576–9. 36. Davies H, Archer J, Southgate L, Norcini J. Initial evaluation of the of the Foundation assessment programme. Medical Education 2009;43:73–81. 37. Benson S, Munger BS, Krome RL, Maatsch JC, Podgorny G. The certifica- tion examination in emergency medicine: An update. Annals of Emergency Medicine 1982;11:91–6. 38. Jennett P, Affleck L. Chart audit and chart stimulated recall as methods of needs assessment in continuing professional health education. Journal of Continuing Education in the Health Professions 1998;18:163–71. 39. Green ML, Reddy SG, Holmboe ES. Teaching and evaluating point of care learning with an internet-based clinical question portfolio. 
Journal of Continuing Education in the Health Professions 2009;29:209–19. 40. Pelgrim EAM, Kramer AWM, Mokkink HGA, van den Elsen L, Grol RPTM, van der Vleuten CPM. In-training assessment using direct observa- tion of single-patient encounters: A literature review. Advances in Health Sciences Education: Theory and Practice 2011;16:131–42. 41. Norcini JJ. Current perspectives in assessment: The assessment of perfor- mance at work. Medical Education. 2005;39:880–9. 42. Gingerich A, Regehr G, Eva KW. Rater-based assessments as social judgments: Rethinking the etiology of rater errors. Academic Medicine 2011;86(10 Suppl):S1–7. 43. Yeates P, O’Neill P, Mann K, Eva K. Seeing the same thing differently: Mechanisms that contribute to assessor differences in directly-observed performance assessments. Advances in Health Sciences Education: Theory and Practice 2013;18:325–41. 44. Crossley J, Jolly B. Making sense of work-based assessment: Ask the right questions, in the right way, about the right things, of the right people. Medical Education 2012;46:28–37. 45. Redfern S, Norman I, Calman L, Watson R, Murrells T. Assessing compe- tence to practise in nursing: a review of the literature. Research Papers in Education 2002;17:51–77. 46. Lockyer J. Multi-source feedback: Can it meet criteria for good assessment? Journal of Continuing Education in the Health Professions 2013;33:89–98. 47. Lockyer J, Clyman S. Multisource Feedback (360 degree evaluation). In ES Holmboe, RE Hawkins (Eds.), Practical guide to the evaluation of clinical competence. Philadelphia, PA: Mosby-Elsevier, 2008. 48. Baker DP, Salas E, King H, Battles J, Barach P. The role of teamwork in the professional education of physicians: Current status and assessment recommendations. Joint Commission Journal on Quality and Patient Safety 2005;31:185–202. 49. Interprofessional Education Collaborative. Core competencies for inter- professional collaborative practice. 
Available at: https://www.aamc.org/ download/186750/data/. Accessed July 30, 2013. 50. Baker, DP, Day R, Salas E. Teamwork as an essential component of high- reliability organizations. Health Services Research 2006;41:1576–98. 51. Agency for Healthcare Research and Quality. TeamSTEPPS. Avail- able at: http://www.ahrq.gov/professionals/education/curriculum-tools/ teamstepps/. Accessed July 30, 2013. 52. Chesluk B, Bernabeo E, Hess B, Lynn L, Reddy S, Holmboe E. A new assessment tool is designed to give hospitalists feedback to improve in- terprofessional teamwork and produce better patient care. Health Affairs 2012;31:2485–92. 53. Wood, L, Hassell, Whitehouse A, Bullock A, Wall D. A literature re- view of multi-source feedback systems within and without health services, leading to 10 tips for their successful design. Medical Teacher 2006;28: e185–91. 54. Hess BJ, Lynn LA, Holmboe ES, Lipner RS. Toward better care coordina- tion through improved communication with referring physicians. Academic Medicine 2009;84(Suppl 10):S109–12. 55. Lipner RS, Blank LL, Leas BF, Fortna GS. The value of patient and peer ratings in recertification. Academic Medicine 2002;77(10 Suppl):S64–6. 56. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient centered care. Health Affairs 2010;29:1310–18. 57. Agency for Healthcare Research and Quality. Consumer Assess- ment of Healthcare Providers and Systems (CAHPS). Available at: http://cahps.ahrq.gov/.Accessed July 30, 2013. 58. Solomon LS, Hays RD, Zaslavsky AM, Alan M, Ding L, Cleary PD. Psy- chometric properties of a group-level Consumer Assessment of Health Plans Study (CAHPS) instrument. Medical Care 2005;43:53–60. 59. Hays RD, Chong K, Brown J, Spitzer KL, Horne K. Patient reports and rat- ings of individual physicians: An evaluation of the DoctorGuide and Con- sumer Assessment of Health Plans Study provider-level surveys. American Journal of Medical Quality 2003;18:190–6. 60. 
Wright C, Richards SH, Hill JJ, Roberts MJ, Norman GR, Greco M, et al. Multisource feedback in medical regulation: The example of the UK GMC Patient and Colleague Questionnaires. Academic Medicine 2012;87:1668–78. 61. Physician Achievement Review. Available at: http://www.par-program. org/information/. Accessed July 30, 2013. 62. Gray B, Weng W, Holmboe ES. An assessment of patient based and practice infrastructure based measures of the patient centered medical home. Do we need to ask the patient? Health Services Research 2012;47(1 Pt 1):4–21. 63. Concato J, Feinstein AR. Asking patients what they like: Overlooked at- tributes of patient satisfaction with primary care. American Journal of Medicine 1997;102:399–406. 64. Agency for Healthcare Research and Quality. National measures clearing- house. Available at: http://www.qualitymeasures.ahrq.gov/. Accessed July 30, 2013. 65. American Medical Association. Physician Consortium for Performance Improvement. Available at http://www.ama-assn.org/ama/pub/physician- resources/physician-consortium-performance-improvement.page. Ac- cessed July 30, 2013. 66. Berenson RA, Pronovost PJ, Krumholz HM. Achieving the potential of health care performance measures. Timely analysis of immediate health policy issues. Available at: http://www.rwjf.org/en/research-publications/ find-rwjf-research/2013/05/achieving-the-potential-of-health-care-perfor mance-measures.html. Accessed July 30, 2013. 67. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: Effects on professional practice and healthcare outcomes. Cochrane Database of Systematic Reviews 2012;Issue 6. Art. No.: CD000259. doi:10.1002/14651858.CD000259.pub3 68. Willett LL, Heudebert GR, Palonen KP, Massie FS, Kiefe CI, Allison JJ, et al. The importance of measuring competency-based outcomes: Stan- dard evaluation measures are not surrogates for clinical performance of internal medicine residents. 
Teaching and Learning in Medicine 2009;21: 87–93. 69. olmboe ES, Hess BJ, Conforti LN, Lynn LA. Comparative trial of a web- based tool to improve the quality of care provided to older adults in resi- dency clinics: Modest success and a tough road ahead. Academic Medicine 201;87:627–34. 70. The Society for Thoracic Surgery. STS national database. Available at: http://www.sts.org/national-database. Accessed July 30, 2013. 71. American College of Cardiology. National cardiovascular data registry. Available at: https://www.ncdr.com/webncdr/. Accessed July 30, 2013. 72. American Board of Family Medicine. Initial certification/residency. Avail- able at: https://www.theabfm.org/cert/index.aspx. Accessed July 30, 2013. 73. Fitzpatrick R, Bowling A, Gibbons E, Haywood K, Jenkinson C, Mackin- tosh A, et al. A structured review of patient-reported measures in relation D ow nl oa de d by [ U A A /A PU C on so rt iu m L ib ra ry ] at 1 4: 41 1 5 O ct ob er 2 01 4 S74 J. R. KOGAN AND E. HOLMBOE to selected chronic conditions, perceptions of quality of care and career impact. Available at: http://phi.uhce.ox.ac.uk/. Accessed July 30, 2013. 74. National Health Service (UK). Health and Social Care Information Cen- ter. Patient reported outcome measures. Available at: http://www.hscic.gov. uk/proms. Accessed July 30, 2013. 75. Buckley S, Coleman J, Davison I, Khan KS, Zamora J, Malick S, et al. The educational effects of portfolios on undergraduate student learning; a Best Evidence Medical Education (BEME) systematic review. BEME Guide 11. Medical Teacher 2009;31:282–98. 76. Holmboe ES, Ward DS, Reznick RK Katsufrakis PJ, Leslie KM, Patel VL, et al. Faculty development in assessment: The missing link in competency- based medical education. Academic Medicine 2011;86:460–7. 77. Kogan JR, Conforti L, Berbaneo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: A conceptual model. Medical Education 2011;45:1048–60. 78. 
Dijkstra J, Galbraith R, Hodges BD, McAvoy PA, McCrorie P, South- gate LJ, et al. Expert validation of fit-for-purpose guidelines for design- ing programmes of assessment. BMC Medical Education 2012;12:20. doi:10.1186/1472-6920-12-20 79. Dijkstra J, van der vleuten CMP, Schuwirth LWT. A new framework for designing programmes of assessment. Advances in Health Sciences Edu- cation: Theory and Practice 2009;15:379–93. 80. Van der Vleuten CPM, Schuwirth LWT, Scheele F, Driessen EW, Hodges B. The assessment of professional competence: Building blocks for theory development. Best Practice & Research Clinical Obstetrics and Gynaecol- ogy 2010;24:703–19. 81. Saedon H, Salleh S, Balakrishnan A, Imray CHE, Saedon M. The role of feedback in improving the effectiveness of workplace based assessments: A systematic review. BMC Medical Education 2012;12:25. doi:10.1186/1472- 6920-12–25 D ow nl oa de d by [ U A A /A PU C on so rt iu m L ib ra ry ] at 1 4: 41 1 5 O ct ob er 2 01 4

