Dr. Melissa Langhan and colleagues at Yale University School of Medicine looked at how unconscious bias may influence reviewers’ evaluation of applicants to their program. Their poster, Improving Trainee Applicant Evaluations to Reduce Unconscious Bias, was presented at the 2021 ACGME Annual Educational Conference, held virtually in late February. We asked Dr. Langhan to tell us a little more about the project, its findings, and future plans.
Primary Author: Melissa Langhan, MD, MHS
Co-Authors: Michael Goldman, MD; Gunjan Tiyyagura, MD, MHS
ACGME: Tell us about your academic and professional role.
Langhan: I am an Associate Professor of Pediatrics and Emergency Medicine at Yale University School of Medicine. I also serve as the program director for the Pediatric Emergency Medicine Fellowship.
ACGME: Can you briefly describe your project for us?
Langhan: Our goal was to create a more objective evaluation tool for our faculty members to use when assessing applicants to our fellowship program. After revising the categories used for assessment, we created a scale with descriptive anchors. These anchors describe the achievements or behaviors an applicant should demonstrate to merit each score.
ACGME: What inspired you to do this project?
Langhan: When using a simple Likert or numeric rating scale, unconscious biases may lead to ratings that are not reflective of an applicant’s actual abilities. I noticed that our former tool, which was based on a simple 0-9 scale, was prone to bias. This included both leniency bias and restriction of range, with most faculty scoring applicants in the 7-9 range. Given these effects, if a rater scored an applicant in the lower, unused range of the scale, that single score would significantly alter the applicant’s average score and subsequent ranking.
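To see why a single low score carries so much weight under restriction of range, consider a quick illustration with hypothetical ratings (these numbers are invented for demonstration and are not data from the study):

```python
# Hypothetical illustration of restriction of range on a 0-9 scale:
# most raters cluster in the 7-9 band, so one rater using the lower,
# unused range shifts an applicant's mean score substantially.
from statistics import mean

clustered = [8, 9, 7, 8, 9, 8]      # typical ratings, all in the 7-9 band
with_outlier = clustered + [2]      # one rater scores in the lower range

print(round(mean(clustered), 2))      # 8.17
print(round(mean(with_outlier), 2))   # 7.29
```

A drop of nearly a full point from one rating is easily enough to reorder applicants whose averages differ by only a few tenths.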
ACGME: What did you discover?
Langhan: By creating a scale with descriptive anchors, we were able to normalize the distribution of faculty ratings and reduce bias.
ACGME: What was the main takeaway?
Langhan: The use of descriptive anchors in an evaluation tool is one way to reduce bias when assessing applicants.
ACGME: Who could benefit from this?
Langhan: This information could benefit any program or institution that evaluates applicants for entry into their school or training program, particularly if they are currently using a simple numeric or Likert-based assessment tool.
ACGME: Any additional follow-up plans?
Langhan: Yes, we continue to adapt the language in our assessment tool to improve our inter-rater reliability. We are also changing the way we conduct interviews to reduce unconscious bias, an approach that has shown similar success.