
Discussion Space

posted Mar 4, 2011, 7:49 AM by Anna Oswald   [ updated Jun 29, 2011, 2:12 PM ]


Dale Storie's Notes from the Education Research Archive (ERA) Presentation Feb 23, 2011

posted Feb 28, 2011, 9:24 AM by Anna Oswald

ERA: Education and Research Archive

ERA: Education & Research Archive -
http://era.library.ualberta.ca
ERA is the University of Alberta's institutional repository. ERA is an ideal place to archive journal articles (where self-archiving is allowed), research posters, learning objects, and research data. Many journal policies may permit archiving in ERA but do not allow archiving in a disciplinary repository like PubMed Central. ERA also offers a mediated deposit service to help you archive your publications.
 
ERA Information Guide
- http://guides.library.ualberta.ca/era
This guide gives an overview of ERA, including its mediated deposit service that will help you with archiving and copyright concerns.
 
PubMed Central (PMC)
- http://www.ncbi.nlm.nih.gov/pmc/
The open access repository in which authors must archive their NIH-funded research articles.
 
CIHR Policy on Access to Research Outputs -
http://www.cihr-irsc.gc.ca/e/34846.html
The full CIHR policy. Section 5.1.1 indicates the requirements for peer-reviewed journal publications, and Section 5.1.2 indicates the requirements for publication-related research data.
 
PubMed Central Canada -
http://pubmedcentralcanada.ca/
PMC Canada is a new project from CIHR and National Research Council's Canada Institute for Scientific and Technical Information (NRC-CISTI). A manuscript submission process was introduced in late April 2010.
 
Sherpa/RoMEO & Sherpa/JULIET
http://www.sherpa.ac.uk/romeo/
Sherpa/RoMEO is a searchable database of publishers' copyright & self-archiving policies. Sherpa/JULIET is a searchable database of funding agencies' open access mandates. Because open access is changing quickly, neither database is always 100% up-to-date, so make sure you investigate the policies of both publishers and funding agencies before publishing.
 
Author Addendum from the Canadian Association of Research Libraries
http://www.carl-abrc.ca/projects/author/author-e.html
Complete this form and include it in your publication agreement to ensure that you have permission to self-archive your work.
 
For more information, contact the ERA Help Desk: era-helpdesk@library.ualberta.ca

Dr. Nishan Sharma's Discussion Notes from Oct 27, 2010

posted Nov 15, 2010, 2:31 PM by Anna Oswald

Helping Students Become the Medical Teachers of the Future

  • interesting because of the formal recognition of the teaching role, and how it could be a natural extension of the Teacher Training workshops we do in residency
  • two schools in the UK
  • learning about teaching is specifically set out by GMC in Tomorrow’s Doctors as a mandate for medical schools
  • researchers recognized challenges as:
    • finding time in curriculum
    • managing logistics of large classes
    • content – balance of theory vs. practice
    • whether students would be interested
  • designed 2-day programme
    • lectures
    • group work
    • micro-teaching sessions
    • nearly 100% of students said they had gained confidence in teaching
  • specifics
    • 40 students at a time (roughly one of our rotation sizes)
    • included theory “that we considered informed the current shape and content of the medical curriculum at [their] medical school” – e.g. reflective practice, giving feedback, teaching clinical skills, evaluation of teaching
      • also basics of behaviourism, cognitivism, constructivism, social learning
    • overall trying to focus on the teaching requirements of residency
    • micro-teaching, 5-8 students, 10 minutes each, any topic (to increase engagement)
    • presentation skills – workshop “to help students feel confident, engaged, inspiring in the workplace” – practical exam situations, professional interactions and public speaking
  • survey
    • most useful: introduction to theory and link between theory and practice, gaining skills in planning and delivering teaching (lesson planning, micro-teaching), benefits of doing micro-teaching, practice at giving and taking feedback, doing it all in safe environment
    • changes: more teaching practice and less theory
    • timing should be after finals (when they could focus) and just before residency
  • limitations
    • don’t know if it has actually made a change
  • challenge
    • content (theory v practice)

Does simulator-based clinical performance correlate with actual hospital behaviour?

  • picked this because it incorporated two hot topics – simulation and patient safety in the context of the work-hours debate
  • the authors suggest that there is little empirical evidence that correlates simulator-based performance with real-world behaviour – I don't know if that is true, and would be surprised if it is, but whatever
  • Harvard Work Hours, Health and Safety Group
  • study to explore:
    • whether physician performance, prospectively assessed in a simulator-based environment, was affected by work schedule
    • whether any measured impact would correlate with real-world performance under “identical conditions” – meaning work schedules
    • so they were looking at the validity of simulation as an evaluation tool
  • working off findings from the Harvard Intern Sleep and Patient Safety Study, which documented more sleep, fewer attention failures, and fewer serious medical errors when 24- to 30-hour extended on-call shifts were abolished for interns in an ICU setting
  • tested interns in the simulator while rested and after overnight duty
  • Hypothesis: performance in the simulated environment would mirror performance we observed in the original hospital studies
  • specifics
    • July 2003 to 2004
    • used emergency dept patient bay with full-body hi-fidelity adult mannequin simulator
    • presented PGY1s with two dynamic test cases to manage
    • evaluation by expert raters
    • Cohort 1
      • presented once in a rested state and once after a traditional 24- to 30-hour on-call night
      • n = 17
    • Cohort 2
      • subset of Cohort 1, n=8
      • did the first two trials, plus two more later in the year
      • presented at a new rested baseline state, then again after a modified night call (16-hour shift)
    • Cohort 1 was to see if they could measure a difference, Cohort 2 to see the sensitivity of the instrument for detecting smaller amounts of fatigue
    • Protocols
      • warm up case
      • then two 15-minute standardized tests (dynamic cardiac or pulmonary disease, followed by a code – ventricular fibrillation or ventricular tachycardia)
      • both pulmonary and cardiac cases were available; if a student had prior experience in one field, they got the other test
      • used validated tool
  • Results
    • show graph – significant drop in session score for Cohort 1
    • Cohort 2 showed the fidelity of the setup, as performance after the modified 16-hour call, though lower than the second baseline rest measure, was significantly higher than after the traditional 24- to 30-hour call
  • conclusions
    • simulator-based performance correlates with real-world performance
  • limitations
    • single site test
    • the simulation did not really replicate the scenario of the original hospital study
    • could not blind participants to hypothesis

Dr. Bruce Fisher's summary of the group's reflection on our own Med Ed Journal Club

posted May 31, 2010, 2:56 PM by Anna Oswald

Twelve tips for conducting a medical education journal club: our score card

 

Based on a group reflective discussion exercise using checklists at MEdJC on May 26, 2010

 

1. How do we feel our MEdJC will benefit our educator community?


There was complete agreement that our MEdJC provides:

       • learning opportunities for both junior and senior educators

       • an atmosphere of collegiality and community between educators. The group reflected on how a critical mass of medical educators has developed over the last 5 years, allowing such a community of educators to exist and grow

       • an opportunity (already evidenced) to introduce colleagues to the world of medical education

 

2. What are the main goals of our MEdJC?

 

There was complete agreement that the main goals our MEdJC serves are:

       • Keeping up with the literature

       • Fostering medical education research (especially through the discussions and updates that occur after articles are presented and discussed)

       • Stimulating debate

       • Learning about new and interesting resources (again, often through discussions stimulated amongst members at the meetings)

 

       • Teaching critical appraisal skills and research skills were thought to be addressed implicitly but were nonetheless considered primary goals

 

3. How well is the frequency and regularity of our MEdJC serving our needs?

 

There was consensus that a monthly meeting was ideal, and that the alternate-day scheduling provided important flexibility. The scheduling was seen as predictable and well advertised. Our group is uncertain as to the balance of merits and drawbacks of running the JC through the summer months.

 

4.  Are we developing or recruiting a balanced group of attendees?

 

There was complete agreement that there is good representation from basic science, clinical, and educational scholars. The group felt it was important to invite interested undergraduate students, residents, post-graduate trainees, and Masters of Health Science Education program students.

 

5. Do we successfully vary leadership?

 

The group consensus was that by alternating who presents, we help to distribute leadership roles.

 

6. Do we clearly articulate the criteria used for selection of articles?

 

The group agreed that, through agreed-upon and gently reinforced conventions, we use and declare the following criteria:

       • topical / appeals to interests or needs of the group

       • apparent applicability

       • degree to which the article informs our understanding

 

There was a mixed response as to whether we explicitly or regularly use or declare the criteria of potential impact or importance (akin to clinical importance in the Users' Guides series).

 

7/11. Have we developed sufficient guidelines for the conduct of discussion?

 

       • Brief summary, then insights/links to references; include a "wrap up"

       • Strengths, weaknesses, key points, how it might be done differently ...

 

8. Do we have the ideal number of participants (5-15)?

 

There was general agreement that the group as it has presently evolved (10-15 attendees) works well. There was discussion about strategies to invoke should our number of attendees grow, but no specific plans were made.

 

 

9.  Are we addressing the appropriate number of articles per session?

 

There was consensus that, given that the primary purpose of our MEdJC is to help us in "keeping up" AND to allow sufficient group discourse during the hour, 4-5 articles discussed per session is most appropriate.

 

10. Do we (effectively) disseminate the results or "products" of our MEdJC?

 

The group felt that the wiki was a useful and effective way of capturing and distributing our selections of articles and posted abstracts. There was discussion about greater use of the associated comments board and research ideas board. The group was uncertain as to the merits and utility of producing MEDCAT or MEDCAP products. A short discussion occurred regarding a possible separately formatted MEdJC session for this, perhaps occurring instead of our existing format in a 1:4 ratio or so.

 

 

12. Are we evaluating the MEdJC regularly?

 

This is the first (informal) reflective and evaluative exercise we have conducted, about 9 months into the life of this MEdJC. Perhaps we could do this every 6-12 months.

 

 

 

Marcia Clark's Discussion Notes April 28, 2010

posted Apr 29, 2010, 2:30 PM by Anna Oswald

Med Ed Journal Club – April 28, 2010

Marcia’s review papers

1. Motivating Medical Students

  • What – group of educators in Finland developed a curriculum centered around teamwork skills for first year medical students. N=342

  • Why – it was felt that focusing on teamwork skills in a deliberate manner would better prepare students for later clinical rotations and PBL sessions. Team skills, specifically communication skills, are not addressed early in medical education, and motivating students to learn them is a challenge. The "so what" effect was spun to the students in a way that this was going to be good for getting the most out of PBL. An ancillary benefit is that students would be exposed to active listening skills and skills for questioning on clinical reasoning. The long-term effect would be that the skills learned now would benefit the learners in their clinical practice and workplace in the future.

The curriculum was modular in design and used a variety of educational delivery formats (small groups, lectures, pairs exercises).

  • Who – first year medical students, 3 iterations, with curricular review at the end of each subsequent year.
  • Where – University of Helsinki, Finland
  • When – 2006, 2007, 2008. 1st year, first courses along with basic science.
  • So what? – The curriculum was altered after the first iteration because feedback from the students was poor (the topic was seen as not interesting or relevant). The curriculum was then left unchanged for the next two years, as the evaluations improved and remained stable. The only curricular change made was to point out the immediate clinical relevance to the students in a deliberate manner, i.e. the tangible benefits for the students were made clear and more explicit.

    Application to us? – we are constantly revising curricula for many learners, and sometimes it is not clear to the target learner group what the relevance is to them, especially when we are following recommendations from other bodies (LMCC, RCPSC). This is exemplified in our effort to establish the CanMEDS criteria in our curricula. The importance of contextualizing the learning objectives and goals for the learners cannot be overstated.

    Questions raised: How do we motivate our students to learn? Are we deliberate about pointing out relevance?

2. Refining the Evaluation of Operating Room Performance

 

  • Who – a group of surgeons and educators addressing psychomotor skills assessment for in-situ training environments.
  • What – an assessment tool (OPRS) was developed and validated for assessment of skills for a group of residents.
  • Why – assessment of skill (consistent and accurate) is difficult to do. Defined aspects of resident performance in the OR are not routinely measured or documented.

     A procedure-specific assessment (laparoscopic cholecystectomy, excisional biopsy, open inguinal repair, and open colectomy) was developed to address this. It involved task-specific and GAS items. (2001; face, construct, and content validity confirmed.)

    The researchers wanted to understand

1. What is the elapsed time between posting and completion of evaluations?

2. Do ratings based on procedure-specific and generic items yield different results?

3. Are patterns of ratings across years of training different for different procedures?

4. Are the profiles of ratings across resident levels different for different raters?

  • Where – Southern Illinois University, R1-5 residents, 566 evaluations
  • When – 2004-2008
  • So what?

    Raters do not rate immediately after procedures (median 11 days) – this can affect the validity of the scale, in that it is difficult to assess skills quite some time after a procedure.

There was no difference in ratings between the total score, the procedure-specific score, and the generic scale score.

As residents progress in their training, the tool is responsive to this – i.e. scores improve.

Raters did differ, especially in time to rate; a hawk vs. dove effect (rater bias) was also demonstrated.

Message to us? – assessment of procedural skills can be done in a rigorous manner but it requires timely feedback and multiple sources.

  

3. Visual Learning

Using visual educational resources to teach: define an important teaching point, find images that illustrate it, put the images in an accessible format, and provide pre-reading. For the educational session, students review the images (5 min) and devise a plausible explanation (story). Students then present their stories. Discussion ensues.

Example: images of a 4th birthday card, a family tree with AD inheritance, a blood smear showing spherocytes, and an OR image of a splenectomy = hereditary spherocytosis.

Example: picture of an antibody, bruises, a bottle of methylprednisolone, a dose of IVIG, and an accessory spleen = ITP.

So what – neat way of engaging learners and thinking like a detective!

Reference:

1. Aarnio M, Nieminen J, Pyörälä E, Lindblom-Ylänne S. Motivating medical students to learn teamwork skills. Medical Teacher 2010;32(4):e199-e204.

2. Kim MJ, Williams RG, Boehler ML, Ketchum JK, Dunnington GL. Refining the evaluation of operating room performance. Journal of Surgical Education 2009;66(6):352-356.

3. Gow KW. Visual learning: harnessing images to educate residents optimally. Journal of Surgical Education 2009;66(6):392-394.

 

 

Shelley Ross Discussion Notes from Feb 2010 Journal Club

posted Feb 18, 2010, 9:50 AM by Anna Oswald

Supervisor and self-ratings of graduates from a medical school with a problem-based learning and standard curriculum track

 Distlehorst, Dawson, & Klamen (2009) Teaching and Learning in Medicine, 21, 291-298

 

Reason for choosing article:

This article was intriguing in that it looked at long-term evaluation of graduates of a PBL program. Given our relatively recent foray into a PBL curriculum, I felt this article would be of relevance to this Faculty.

 

Summary of article:

Southern Illinois University School of Medicine has two curriculum tracks: a standard curriculum (STND), and a problem-based learning curriculum (PBL). They also have a long-term follow-up project where they collect data from and about their graduates as they progress through residency. In this study, ratings of residents were obtained across 3 main categories. Ratings were collected at the end of Years 1 and 3 of residency programs, and included graduates from 9 classes (1994-2002). Ratings were self-reports from graduates, and residency supervisor ratings of residents. The research questions all looked at differences between self- and supervisor ratings of the residents, looking specifically for changes over time and between programs. The researchers found that supervisor ratings did not differentiate between the STND and PBL groups at the end of Year 1, but did differentiate between the two at the end of Year 3. Supervisors rated STND graduates higher than PBL graduates in 5 of 6 noncognitive items and 2 of 3 general ratings. Supervisor ratings increased between Year 1 and Year 3 in 9 competencies for STND graduates, but showed no change across the two data collection periods for PBL graduates.  The largest increase in self-ratings between Year 1 and Year 3 for both STND and PBL graduates was in the area of overall competence in specialty area. The researchers do not draw any conclusions about differences between the STND and the PBL curricula.

 

Comments on the article:

The researchers list several shortcomings to this study, and state that these shortcomings are the reason why they do not present any conclusions about PBL compared to a traditional curriculum. Interestingly, they have one major finding that supports PBL curricula: there is good concordance of ratings between PBL residents and supervisors in all but three areas. STND self-ratings differ from supervisor ratings in 11 areas. Accurate self-assessment was not one of the research questions, however, and so this difference was not highlighted in the article, nor was it elaborated upon. This is unfortunate, as self-assessment is an area where physicians have difficulty. If PBL results in physicians who are better at self-assessment, it is worth talking about.

The reporting in this article was interesting. Total numbers of participants were not given for residents or supervisors, only percentage response rates. The response rates were aggregated across the full study, so there was no way to determine if there was a change over time for the PBL group – which would be expected, as the PBL curriculum was continuously refined over those years.

The long-term follow-up project that provided the data for this study is something worth considering here.


The effects of performance-based assessment criteria on student performance and self-assessment skills.

 

Fastre, van der Klink, & van Merrienboer (2010) Advances in Health Sciences Education

 

Reason for choosing article:

This article reports findings of a study comparing performance-based and competency-based assessment criteria. Competency-based assessment is a hot topic in medical education right now; I found it intriguing that this article finds that performance-based assessment criteria resulted in better outcomes.

 

Summary of article:

The authors present background theory on the differences between competency-based and performance-based assessment criteria. They argue that for novice learners, competency-based assessment criteria are too vague and undifferentiated. They posit that novice learners need clear performance-based assessment criteria, broken down into lower-level skills hierarchies. Their hypotheses are that novice learners given performance-based assessment criteria will learn better, and self-assess better, than their counterparts given competency-based assessment criteria. They also hypothesize that the performance-based assessment group will experience less mental effort in their learning. The participants were thirty-nine second-year students (2 males, 37 females; mean age = 18) in a nursing program at a European school. Students were taught stoma care through lecture, asked to judge several video examples of the procedure, and then did a practical example. A short multiple-choice quiz was administered after the lecture. Students were given either performance-based or competency-based criteria to assess the video example. Students used these same criteria to assess each other and to self-assess in the stoma care procedure (a teacher also assessed students in the practical portion). Students completed a questionnaire on their perceptions of the relevance of self-assessment and their ability to self-assess before the study began, and another questionnaire at the end of the study measuring motivation, self-regulation, interest, task orientation, and reflection. Between each assessment task, students completed a rating scale of mental effort. The researchers found that while both groups were at equivalent knowledge levels after the lecture, across the video assessments, peer assessments, and teacher assessments of student performance, the group given the performance-based assessment criteria scored significantly higher than did the group given the competency-based assessment criteria. The group with the performance-based assessment criteria also reported significantly less mental effort during the assessment. There was no significant difference between the groups on self-assessment. The authors conclude that performance-based assessment criteria allow novice learners to learn more efficiently, and to have a better understanding of what is expected of them. The authors state that these findings "yield the clear guideline that novice students should be provided with performance-based assessment criteria in order to improve their learning process, and reach higher test task performance".

 

Comments on article:

This article falls squarely into the debate over what the difference is between a competency and a performance outcome. The authors make a very clear demarcation between performance criteria and competency criteria, following the definitions of Gregoire (1997), defining competencies as constellations of skills, knowledge, and attitudes. Performance-based assessment criteria, by contrast, break higher-level skills down into a number of lower-level criteria. The conclusions of the authors depend on these fairly rigid definitions.

The article is extremely well-written and well constructed. The arguments are presented in a clear fashion. However, the conclusions of the article rely on the assumption that there is no finer granularity to competencies than general statements of constellations of skills, knowledge and attitudes. The authors do not allow for competencies being stated as measurable outcomes.

The conclusions reached by the authors overreach. The n was small, and the group was not representative (37 females to 2 males). Further, this was a discrete procedural task, made up of a distinct set of steps. Higher-order thinking and learning was not needed. Finally, the competency-based assessment criteria were very vague, while the performance-based criteria were highly detailed. I would be cautious about interpreting these results given the bias shown in the assessment criteria.

 
