"Bad libraries build collections, good libraries build services, great libraries build communities."
R. David Lankes
Introduction
Librarians must assess the programs and services they provide in order to evaluate whether they are meeting their expected goals. The criteria by which they judge effectiveness should be realistic yet rigorous. It is not enough to do the bare minimum and call it a day. Libraries have experienced harsh budget cuts since the 2008 Great Recession, and some have even faced closure. The COVID-19 pandemic has been another major setback for libraries as well. Through evaluation, librarians can identify what works and what does not, then make the necessary adjustments and revisions. The purpose is to improve outcomes and better meet goals and objectives. Libraries can share data on improved performance with stakeholders and advocate for more funding and support. However, the real test is whether the public’s opinion of the library improves. If it does not, a more focused inspection is needed to determine why opinions did not improve after adjustments to services were made.
Evaluate Programs
Reference and User Services Association (2007) states that “to determine levels of service effectiveness, costs, benefits, and quality, data must be judged in light of specific library goals, objectives, missions, and standards. A variety of measures such as quality or success analysis, unobtrusive, obtrusive or mixed observation methods, and cost and benefit analysis provide invaluable information about staff performance, skill, knowledge, and accuracy, as well as overall program effectiveness” (para. 18). Program evaluation could mean conducting surveys (print, online, or both), observing patron reactions throughout the program (directly or indirectly), organizing focus groups, or holding one-on-one interviews with participants. Once the data are collected, they need to be analyzed and the findings formally documented. After these steps are taken, librarians will have a better understanding of the success of their programs.
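As one illustration of the analysis step, the minimal sketch below tallies five-point satisfaction ratings from a program survey. It is a hypothetical example rather than part of the coursework described here: the CSV file name, the "satisfaction" column, and the target score of 4.0 are all assumptions.

```python
# Hypothetical sketch: summarize five-point satisfaction ratings exported
# from a program survey (file name and column name are illustrative).
import csv
from statistics import mean

def summarize_ratings(path, target=4.0):
    """Read 1-5 satisfaction ratings from a CSV and compare the average to a goal."""
    with open(path, newline="") as f:
        ratings = [int(row["satisfaction"]) for row in csv.DictReader(f)]
    average = mean(ratings)
    return {
        "responses": len(ratings),
        "average": round(average, 2),
        "met_goal": average >= target,  # did the program meet its stated objective?
    }

if __name__ == "__main__":
    print(summarize_ratings("storytime_survey.csv"))
```

A summary like this is only a starting point; the written findings still need to interpret the numbers against the program's stated goals and objectives.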
Evaluate Services
The steps for evaluating services are similar to those for evaluating programs, with a few caveats. For instance, it can be more difficult to assess one-time interactions with patrons at the reference desk. The patron may not have the time to devote to filling out a survey or answering interview questions about the service. One advantageous method, however, is the online survey, which can be completed at a later time and in a different place if needed. The onus is on librarians to study how well their services are meeting expected goals, outcomes, and objectives. There are a few different ways to go about this task. Statistical data can be collected on the number of patrons who used reference services; used computers and other technology; visited the library; visited the library’s website; and utilized subscription services such as databases, e-journals, or streaming services. However, these data do not hold much value on their own. Context needs to be provided to understand what the data mean for the library. Did foot traffic in and out of the library increase or decrease? When did it happen? Does the library need to adjust its hours to suit this pattern of visitation?
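To make the hours question concrete, the short sketch below tallies gate-count entries by hour of day to show when visits cluster. It is a hypothetical illustration: the sample timestamps and the assumption of an ISO-formatted entry log are mine, not a description of any particular library's system.

```python
# Hypothetical sketch: tally gate-count entries by hour of day to see
# when library visits cluster (sample timestamps are illustrative).
from collections import Counter
from datetime import datetime

def visits_by_hour(timestamps):
    """Count visits per hour from ISO-format entry timestamps."""
    return Counter(datetime.fromisoformat(t).hour for t in timestamps)

if __name__ == "__main__":
    sample = ["2021-04-05T10:15:00", "2021-04-05T10:40:00", "2021-04-05T18:05:00"]
    for hour, count in sorted(visits_by_hour(sample).items()):
        print(f"{hour:02d}:00  {count} visit(s)")
```

Even a tally like this still needs the context described above, for example whether the pattern reflects school schedules, program times, or a recent change in hours.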
Another method of evaluating services is reviewing records of patron transactions. Did the patron seem satisfied with the service they received? What emotions did they display during their visit? Does the librarian think it is likely that they will return for services in the future? Obviously, the answers to these questions depend on the librarian’s opinions, biases, and ability to judge such interactions. Additionally, human error should be considered when analyzing the results of such evaluations. For instance, did the librarian actually record any information? Did they do so completely? Did they forget or get too busy to make notations? Also, how uniform is the evaluation tool? Is the library relying on librarians to make up their own criteria for measurement, or are they accountable to a standard set of questions? These factors are important to iron out before implementing the evaluation requirements, and the requirements should then be adjusted based on librarian and patron feedback. Evaluation is an ongoing endeavor that needs to be cultivated and customized to suit the needs of the library.
INFO 210 - Digital Reference Comparison & Evaluation
This discussion post, which compared and contrasted virtual reference services from two different academic libraries for my Reference and Information Services course, demonstrates my understanding of Competency N. I was required to evaluate how well the interviews adhered to Reference & User Services Association (RUSA) guidelines, assess how easy or difficult it was to use this method, and explain why it mattered. We were not allowed to use San Jose State University’s King Library for this assignment. I chose to evaluate the chat reference services of the Loyola University Chicago Library and the University of Georgia Libraries. This evidence gave me insight into how academic libraries conduct reference services via chat. It is important because I think internet-based reference services will become more popular as COVID-19 pushes more people online.
INFO 210 - Phone Reference Interview Evaluation
For my Reference and Information Services course, I wrote a discussion post evaluating a synchronous phone interview with a reference librarian. This assignment demonstrates my ability to evaluate a library service against measurable criteria; in this case, we were instructed to use RUSA’s guidelines for reference service. I contacted a librarian at the Daly City Public Library and asked about visiting the library to shadow a librarian while they conducted reference interviews by phone. Unfortunately, due to COVID-19, the librarian informed me that the library was not letting the public into the building. Even though I am an employee of the library, I still was not given permission to shadow the librarian. This prevented me from doing the live interview observation that this class normally requires students to complete. This evidence gave me experience working with a professional standard as evaluative criteria in practice.
INFO 261A - YA Event Planning Paper
For my Program and Services for Young Adults (YAs) course, I designed a program that aligns with Competency N. I used a design chart to help me plan out the steps of the program. After identifying a local YA information community of anime and manga enthusiasts, I created an Anime Convention at the Library program for the group. Specific elements of the program were meant to be collaborative with the YAs to give them a sense of ownership and agency in the program. The design process ends with an event outcomes assessment and institution-building/promotional steps (to be taken after the data are gathered). Survey data, patron responses on social media, and photos shared online were the evaluative methods I chose to assess this program. The outcome data are meant to go to library administration, the Daly City Public Library Association, and parent organizations to advocate for future library programs and to inform them of what the library is doing. This evidence gave me experience planning a program with built-in evaluation criteria.
Conclusion
Evaluation of library programs and services is an essential part of librarianship. Its purpose is to build strong relationships with people in the community. To do that, we have to make sure we are providing the programs and services our patrons want. It is not enough simply to get bodies in seats at these events. We want enthusiastic engagement that meets a learning outcome or provides cultural fulfillment for people.
Reference
Reference and User Services Association. (2007, December 11). Measuring and assessing reference services and resources: A guide. American Library Association. Retrieved April 18, 2021, from http://www.ala.org/rusa/sections/rss/rsssection/rsscomm/evaluationofref/measrefguide