Use of standardized test scores as initial screens results in bias against underrepresented in medicine (UIM) students.
UIM students, as a group, have been shown to perform more poorly on standardized tests such as the USMLE.
In Canada, MCCQE results are typically unavailable for PGY1 match candidates, though they may be available for some PGY1 applicants as well as PGY3 and PGY4 entry and clinical fellowship applicants. Though less data exist on potential bias in the MCCQE and MCCEE, selection committees should be aware of the documented bias in MCAT and USMLE scores.
Problems with using the USMLE for postgraduate program selection are reviewed in articles here, here, and here.
Experiences:
What journey has the applicant traveled to get here?
Consider their family background, where they grew up (e.g., urban versus rural), and what opportunities were available to them. Consider, for example, the value of summers spent working to help fund their education versus unpaid research opportunities that might lead to publications.
Attributes:
Personal and professional characteristics such as communication skills, intellectual curiosity, leadership skills, commitment to marginalized populations and community service, language skills, etc.
Consider resilience and whether the applicant has overcome unique hardships or obstacles.
Academic achievements:
More traditional metrics of academic success, such as standardized test scores, grades, and research publications.
Their file review rubric included:
Medical school grades (0-6 points)
Extracurricular activities and leadership (0-2 points)
Research experience (0-2 points)
Letters of recommendation (0-2 points)
USMLE Steps 1 and 2 (−2 to 6 points)
Life experiences (0-5 points), assigned at the reviewer's discretion (e.g., for military service or being the first person in one's family to attend post-secondary education)
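The rubric above amounts to a simple bounded tally. A minimal sketch of how such scoring could be tabulated (the function and key names are illustrative, not from the source program):

```python
# Illustrative tally of the file-review rubric described above.
# Score ranges mirror the bullet list; names are hypothetical.
RUBRIC_RANGES = {
    "medical_school_grades": (0, 6),
    "extracurriculars_leadership": (0, 2),
    "research_experience": (0, 2),
    "letters_of_recommendation": (0, 2),
    "usmle_steps_1_and_2": (-2, 6),
    "life_experiences": (0, 5),  # assigned at the reviewer's discretion
}

def total_score(scores: dict) -> int:
    """Validate each component against its allowed range and sum them."""
    total = 0
    for component, value in scores.items():
        lo, hi = RUBRIC_RANGES[component]
        if not lo <= value <= hi:
            raise ValueError(f"{component}={value} outside [{lo}, {hi}]")
        total += value
    return total

example = {
    "medical_school_grades": 5,
    "extracurriculars_leadership": 2,
    "research_experience": 1,
    "letters_of_recommendation": 2,
    "usmle_steps_1_and_2": 3,
    "life_experiences": 4,
}
print(total_score(example))  # 17
```

Note that the USMLE component can subtract points, so a reviewer's spreadsheet or script should allow negative values for that field only.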
Recognize implicit bias in clerkship grades and reference letters.
Medical education research, such as the studies here and here, shows that clinical clerkship grades consistently favour non-UIM students. Other studies here and here show that narrative evaluations often reflect students' personal attributes rather than performance, and vary significantly by gender and UIM status.
Similar biases exist for reference letters, which are subjective and non-standardized. Gender and race biases have been found in medical education studies reviewing reference letters for residency training in, for example, ophthalmology, radiology, and urology.
Tips for avoiding bias when writing reference letters are available from the University of California San Francisco, University of Arizona, and Northwestern University. Consider putting your reference letter through this gender bias calculator.
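Calculators of this kind generally work by comparing counts of stereotypically communal (female-associated) versus agentic (male-associated) words in a letter; the exact word lists vary by tool. A minimal illustrative sketch, using abbreviated example lists that are assumptions here, not the linked calculator's actual lists:

```python
import re

# Abbreviated, illustrative word lists; real tools use much longer lists
# drawn from the research literature on gendered language in letters.
COMMUNAL = {"caring", "compassionate", "helpful", "warm", "nurturing", "supportive"}
AGENTIC = {"confident", "independent", "outstanding", "decisive", "ambitious", "assertive"}

def gendered_word_counts(letter: str) -> dict:
    """Count occurrences of each word class in a reference letter."""
    words = re.findall(r"[a-z]+", letter.lower())
    return {
        "communal": sum(w in COMMUNAL for w in words),
        "agentic": sum(w in AGENTIC for w in words),
    }

sample = "She is a caring and supportive team member, confident in clinic."
print(gendered_word_counts(sample))  # {'communal': 2, 'agentic': 1}
```

A skewed ratio is not proof of bias, but it can prompt a writer to ask whether the letter emphasizes personality over accomplishment.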
A great analogy from Dr. Mireille Norris, geriatrician at Sunnybrook Health Sciences Centre and University of Toronto Faculty Lead of the Black and Indigenous Resident Application and Mentorship Program:
Apply the same critical appraisal skills we apply to medical publications to file review and, if you come across a less positive comment about an applicant, think critically about its validity and supporting evidence.
Consider having one selection committee member assigned to all files from a certain medical school, which may help with recognition of implicit bias from reference writers or grades.
Consider anonymizing files for review by having a program administrator redact applicant names and pronouns from files.
Acknowledge that adopting a holistic review process that weighs experiences and attributes more heavily than traditional selection processes do may increase the need for academic support, such as academic coaching, to help residents succeed, and that this is OK.