Competency N
Evaluate programs and services using measurable criteria.
Data is becoming increasingly important for teacher librarians to show that our efforts in programming and services are effective and worthwhile. In order to collect that data, our programs need to be evaluated using measurable criteria. Measurable criteria are objective statements about which you can collect data to determine whether, and to what degree, an objective has been met.
Criteria can be established for individual lessons or for entire programs, depending on the goals and mission of the library and the school as a whole. For school libraries, AASL (2018) has published a school library evaluation checklist that corresponds to the domains of the National School Library Standards. Under each domain there are criteria for evaluating the program at two levels: the individual school (building) level and the district level. These criteria are mainly all or nothing: either something is happening, or it isn't. This type of criterion can be helpful for determining next steps and deciding where to spend limited resources. It can also be used to advocate for additional funding by highlighting gaps in programming, as sketched below.
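To illustrate the mechanics of an all-or-nothing checklist, here is a minimal Python sketch; the checklist items are invented stand-ins rather than actual AASL checklist language.

```python
# Hypothetical all-or-nothing checklist: each criterion is either met or not.
checklist = {
    "Library mission statement is aligned with school goals": True,
    "Flexible scheduling allows open access throughout the day": False,
    "Collection budget is reviewed annually at the district level": False,
    "Learners help shape displays and programming": True,
}

# Unmet criteria become the gap list used to plan next steps or advocate for funding.
gaps = [criterion for criterion, met in checklist.items() if not met]

print(f"{len(gaps)} of {len(checklist)} criteria unmet:")
for criterion in gaps:
    print(f"- {criterion}")
```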
Other types of criteria can be measured qualitatively. For example, reference interactions can be evaluated using guidelines from the Reference and User Services Association (RUSA, 2020). These criteria are more subjective, but they can still provide useful information about the quality of interactions. While these customer service criteria are not necessarily intended for interactions in the school library, asking students whether they felt their questions were answered satisfactorily and in a friendly way could mean the difference between a student (or staff member) coming back to the library for help or not. That feedback can also help train library assistants and students working in the library.
Quantitative data can also be gathered to evaluate the effectiveness of programming. Gathering information before changes are implemented gives you a baseline for determining what impact your changes made. For example, pretesting students before an information literacy lesson and assessing them again after the lesson can give you evidence about what worked and what didn't. This evidence is especially robust when paired with qualitative data, such as student reflections about what they learned from the lesson.
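As a rough sketch of how baseline and post-lesson scores could be compared, assuming paired pretest and posttest scores for each student (the numbers here are invented):

```python
# Hypothetical paired scores (pretest, posttest) for each student on a 10-point quiz.
scores = [(4, 7), (6, 8), (5, 5), (3, 9), (7, 10)]

gains = [post - pre for pre, post in scores]
mean_gain = sum(gains) / len(gains)
improved = sum(1 for gain in gains if gain > 0)

print(f"Mean gain: {mean_gain:.1f} points")
print(f"{improved} of {len(scores)} students improved")
```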
Conclusion
I can use my knowledge of creating and using measurable criteria to evaluate the programming in my own library. For example, during COVID closures last year, I implemented new virtual interactive displays because students could not come to the library. This year I would like to evaluate whether I should continue this practice even though we will be in person. I can create a survey for English teachers asking for feedback in measurable areas, with sample questions such as:
1) Did you integrate the virtual display into a lesson? (I can count how many teachers did this.)
2) Which type of interactive display is most useful to your instruction? A) links to first chapters, B) links to audiobook samples, C) supplemental videos, D) extra publisher content such as discussion guides, or E) all of the above. (This tells me which types of content to keep curating.)
A mix of multiple-choice and open-ended questions can give me both quantitative and qualitative data about library use, which I can also use to advocate for and promote library services to my principal and other departments.
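A tally of the multiple-choice responses might look like the following minimal sketch; the response data and field names are assumptions for illustration, not results from an actual survey.

```python
from collections import Counter

# Hypothetical survey responses: did the teacher use a display, and which content type helped most?
responses = [
    {"integrated": True, "most_useful": "first chapters"},
    {"integrated": True, "most_useful": "audiobook samples"},
    {"integrated": False, "most_useful": None},
    {"integrated": True, "most_useful": "first chapters"},
]

integrated_count = sum(1 for r in responses if r["integrated"])
preferences = Counter(r["most_useful"] for r in responses if r["most_useful"])

print(f"{integrated_count} of {len(responses)} teachers integrated a display")
for content_type, count in preferences.most_common():
    print(f"{content_type}: {count}")
```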
Evidence
Evidence 1: Discussion: Evaluating a reference interview using RUSA guidelines
I wrote this discussion post for INFO 210 Reference and Information Services. I called a public library and evaluated the librarian's phone interaction using the Reference and User Services Association (RUSA) guidelines. The interaction was evaluated in the following areas: visibility/approachability, interest, listening/inquiring, searching, and follow-up.
Evidence 2: Discussion: Evaluating information literacy instruction using rubrics
I wrote this discussion post in INFO 254 Information Literacy and Learning. In the post I discuss applying rubrics to evaluate information literacy instruction as a way of collecting and comparing data on a lesson's effectiveness and determining how well the lesson adheres to standards.
Evidence 3: Evaluating an online tutorial using instructional design principles
I completed this assignment in INFO 250, Design and Implementation of Instructional Strategies for Information Professionals. I used instructional design principles as evaluation criteria, such as clear, measurable objectives, an articulated purpose, and the presence of formative and summative assessment opportunities, to determine the quality of online instructional tutorials.
References
American Association of School Librarians. (2018). School library evaluation checklist. AASL. https://standards.aasl.org/wp-content/uploads/2018/10/180921-aasl-standards-evaluation-checklist-color.pdf
Reference and User Services Association. (2020, February 4). Guidelines for behavioral performance of reference and information service providers. American Library Association. https://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral