Session Summary: JGME Shows How to Turn Your Evaluation Work into Scholarly Activity

April 7, 2022

Program evaluation occurs regularly, not only in Annual Program Evaluations, but also as an everyday activity in graduate medical education (GME). Educators continually ask questions such as: Should we retain this component of the curriculum? Is this innovation addressing the challenges of our old approach? What evaluation information should we report to our funding bodies or accreditors? Evaluation happens whenever an assessment is made to determine something’s worth or value to educators. So how can you take the evaluation work you’re already doing and translate it into scholarly work? The Journal of Graduate Medical Education (JGME) session at the 2022 ACGME Annual Educational Conference, “Turning Your Educational Evaluation Work into Scholarly Activity,” focused on doing just that.

The session had two objectives: 1) to determine the steps for program evaluation, ensuring they are consistent with accepted standards; and 2) to discuss strategies for improving that process. By following these steps and suggestions from the beginning, you can ensure your data are not only credible and useful for decisions at your institution but also capable of enhancing what others in the GME community do, turning your evaluation into scholarly activity.

Led by JGME Executive Editor Nicole M. Deiorio, MD, Deputy Editor Deborah Simpson, PhD, and Associate Editor Dorene F. Balmer, PhD, the session began with a discussion of evaluation in general, noting that we all evaluate in our daily lives to determine the value of what to buy, what to watch on television, and how to spend our time. We each have our own ways of evaluating, with our own systems and standards. But when it comes to conducting program evaluations, it is important that they be systematic, informed by a model, and aligned with program evaluation standards.

The session then shifted to focus on those standards and how they could be applied to a sample program evaluation abstract. Taken from the American Evaluation Association, the program evaluation standards are: 

  • Accuracy: Does the evaluation convey trustworthy, reliable data?
  • Feasibility: Is the evaluation realistic and cost-effective? 
  • Propriety: Is the evaluation fair and ethical?
  • Utility: Does the evaluation serve the needs of our stakeholders?  

Here, the JGME session presenters had some fun, applying these four standards to a fictional abstract they created, “Coming in From the Cold: Evaluating the New James Bond 007 MI6 Training Program,” which they attributed to the entirely made-up Journal of Secret and Double Agents. The abstract detailed a program evaluation focusing on technology and simulation changes for first-year spy trainees.

While we were unable again this year to have attendees work together in person in small groups to decide whether the abstract met the standards, the virtual chat was quite lively as presenters and participants asked tough questions, not just about the results presented but also about whether the MI6 spy program abstract was clear enough about why the evaluation was being done. Though the abstract was fanciful and invented for the purposes of the session, its methods and statistical results mirrored those of many past JGME submissions, reflecting both the strengths and the weaknesses often seen in that work.

Finally, session leaders guided participants through the fictional abstract using a checklist they developed, which breaks each of the four standards down into specific questions. The checklist was shared in the chat and will appear in an upcoming Rip Out article in the journal detailing this process (see this recent Journal Notes blog post about JGME’s Rip Out series).

The recorded session will remain available in the virtual conference platform for conference participants with Premium registration until May 1, 2022. You have until then to find out whether the MI6 spy program made the cut and to learn more about how to apply this process to your own program evaluations from the beginning, so that the hard work you are doing today can later benefit others as scholarly activity.