Evaluation is conducted in all types of businesses, institutions, and organizations, and libraries and information centers are no exception. But what does evaluation mean in a library setting? Evaluation can be defined as the “process of determining the success, impact, results, costs, outcomes, or other factors” of activities, programs, or services (Haycock & Sheldon, 2008). Evaluation is vital in libraries because its results are sometimes the only way of demonstrating the value of the library to stakeholders, community members, library management, and others. Libraries also use evaluation to assess whether their functions and operations are still working as intended. Most importantly, the results of an evaluation can point libraries to their areas of weakness, the areas where improvement is needed.
Many different features of a library can be evaluated, such as its services, programs, or resources. Evaluation can be done at the individual level (e.g., an evaluation of individual librarians) or on a larger scale (e.g., a system-wide evaluation of reference services). Furthermore, evaluation can be summative (conducted at the very end of a program or service) or formative (conducted continuously over the course of a program or service) (Haycock & Sheldon, 2008). There are also many methods by which evaluation can be conducted, and the method(s) used will be determined by what is being evaluated. An evaluation of children’s programming might require in-person observations or user satisfaction surveys aimed at parents, while an evaluation of a library’s material-holding service might require an analysis of library circulation statistics. All of these methods produce evaluation data that can then be studied.
Before any evaluation activities can occur, however, it is essential that a set of criteria be established. A “criterion” is defined as the “test, principle, rule, canon, or standard, by which anything is judged or estimated” (OED, 2018). Any evaluation that is conducted will need to be based on criteria, because criteria provide a clear and consistent means of measurement against which evaluation data can be interpreted. Conducting an evaluation without the guidance of measurable criteria leads to situations where one is, for example, unable to accurately gauge the success or failure of a newly implemented service, because no measure of success was established to begin with. In a library setting, criteria can be used to measure the extensiveness, efficiency, effectiveness, quality, impact, and usefulness of a service, program, or resource (Haycock & Sheldon, 2008).
When evaluation is undertaken in a library, officially established criteria can be used. One example of such criteria is the RUSA (2013) “Guidelines for Behavioral Performance of Reference and Information Service Providers.” This document can be used to evaluate reference librarians and the library’s provision of reference services, and it can also serve as a reference point for improvement. For example, if the reference librarians in one library branch were found through an evaluation to be lacking in visibility (as defined by the RUSA guidelines), the library can then take steps toward making those librarians more visible, perhaps by implementing better signage (as suggested by the RUSA guidelines) to point more library users to the reference desk. This action would likely lead to an improved reference service, one that is used more often by library users.
A second example of officially established criteria is the YALSA (2015) “Teen Programming Guidelines.” This document should be consulted if a library is seeking to evaluate and improve its teen programs. For example, if the library’s current teen programming is under-attended and an evaluation reveals that these programs were developed without any teen input, the library can then follow the criteria outlined in guideline 3.0: “Facilitate teen-led programs” (YALSA, 2015). This would likely lead to improved teen programs, ones that are more relevant to teen interests and therefore better attended. It is important to note that evaluation can also be conducted using criteria that have been developed unofficially by library staff. An example would be a set of criteria developed specifically to evaluate the popular book displays within one particular library. What matters most in this case is that the criteria are agreed upon by all library staff and applied consistently.
As demonstrated in the paragraphs above, evaluation using measurable criteria is an essential component of improving library services and programs. Evaluation activities help to identify problem areas in the library: areas where quality is lacking, where the library budget is being used disproportionately, which programs or services are being underused, or whether the library’s current technology is becoming outdated, to name just a few examples. By studying the data produced by an evaluation, libraries can react accordingly and appropriately to remedy the problem areas identified. Engaging in constant evaluation and improvement allows a library to deliver the best, most relevant, and most needed services, programs, and resources it can offer to the greatest number of library users. In the end, all libraries need to engage in evaluation using measurable criteria, because it is only through constant evaluation and improvement that libraries can remain relevant to the public and their clientele.
I have had experience with evaluation throughout my life. My knowledge of certain subjects has constantly been evaluated in academia, and I have also been evaluated in workplace environments. Admittedly, though, I did not have much experience professionally evaluating things until I entered the MLIS program. The concept and process of evaluation were discussed in nearly every class that I took during my time in the MLIS program, because an evaluation is the first step taken when one is trying to improve upon services and programs. However, the necessity of measurable criteria was not truly stressed to me until I took Info 210 (Reference and Information Services). It was in that class that I realized how important it is to have a set of measurable criteria against which evaluation data can be judged.
Haycock, K., & Sheldon, B. E. (Eds.). (2008). The portable MLIS: Insights from the experts. Westport, CT: Libraries Unlimited.
Oxford English Dictionary. (2018). Criterion, n. Retrieved from http://www.oed.com.libaccess.sjlibrary.org/
Reference and User Services Association. (2013). Guidelines for behavioral performance of reference and information service providers. Retrieved from http://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral
Young Adult Library Services Association. (2015). Teen programming guidelines. Retrieved from http://www.ala.org/yalsa/teen-programming-guidelines
1. Info 210 Evaluation of Face-to-Face and Phone Reference Services
The first piece of evidence that I am submitting toward competency N is a pair of discussion posts from Info 210 (Reference and Information Services). For these graded discussion posts, I was tasked with writing down my experiences of two reference transactions in which I was the library user posing a question to the reference librarian. The purpose of these assignments was to allow me to see how reference interviews are conducted in real-life scenarios, and also to give me practice in evaluating the effectiveness and quality of reference services. The librarians I interacted with did not know beforehand that I was initiating a reference interview in order to observe and evaluate them.
To complete these assignments, I asked two different reference questions at two different public libraries: the first question was asked in person, and the second was asked over the phone. After each reference transaction was completed, I wrote down my observations, and I supplemented them by referencing the RUSA guidelines for the delivery of reference services. These guidelines provide measurable criteria for several aspects of a reference interview (mainly focused on the reference librarian’s behavior), including visibility/approachability, interest, listening/inquiry, searching, and follow-up. I made sure to refer back to these criteria consistently when describing my experiences, which establishes my understanding of how measurable criteria are used to evaluate a service.
While I was quite satisfied with my face-to-face reference interview, my phone reference interview left me feeling disappointed. In my discussion post for that interview, I listed several ways the librarian could have provided a better reference service (based on the RUSA guidelines), for example by showing more interest, asking more questions (inquiry), and providing a better follow-up to end the interview. This demonstrates my ability to identify ways in which reference services could be better delivered, based on officially established criteria. Even though I was mostly satisfied with my face-to-face reference interview, I still listed a few ways (again based on the RUSA guidelines) in which that librarian could have provided an even better interview, such as by asking more clarifying questions. This demonstrates my understanding of how evaluation can be used to further improve the delivery of an information service.
I have combined the two discussion posts into one MS Word document. The document can be found below.
2. Info 210 Assignment -- "Shadowing a Reference Librarian"
My second piece of evidence is another assignment from Info 210 (Reference and Information Services). For this assignment, I shadowed a reference librarian (whom I will call “K”) at a public library and observed how she provided reference services to users. This assignment gave me a glimpse of the job responsibilities of reference librarians working in a public library setting, and it also allowed me to evaluate reference services from the perspective of the librarian.
The bulk of my paper details my observations while shadowing K. Throughout the paper, I referenced the RUSA guidelines heavily to evaluate how K delivered reference services to library users. I took note of K’s actions before library users approached with a question, the actions she took during each reference interview, and the methods she used to answer users’ questions. I also highlighted the specific RUSA guidelines that K successfully met; for example, when she ended every reference interview by asking if the library user needed anything else, she fulfilled the “follow-up” criterion of the RUSA guidelines. This establishes my knowledge of the measurable criteria guiding reference services, and of how they can be used to conduct a thorough evaluation of those services.
While I ultimately did not find anything “lacking” in how K delivered reference services (she fulfilled all of the RUSA guidelines that were applicable to any given situation), observing her did make me realize that any long-running or permanent service in a library requires constant evaluation, and in K’s case, constant self-evaluation. K provided great reference services over the two days that I shadowed her. However, she will also be expected to provide the same quality of service for the rest of her time as a reference librarian, as this is part of her job responsibilities. Providing this level of reference service requires a deep understanding of the RUSA guidelines and constant self-evaluation to ensure that one is still meeting the criteria outlined by RUSA. For these reasons, my completion of this assignment demonstrates my understanding of how important it is to have measurable criteria guiding information services. It also demonstrates my understanding of how constant self-evaluation can sustain, and help to improve, the quality of an information service.
The completed assignment can be found below. (This assignment was also used as a piece of evidence for competency I.)
3. Info 202 Discussion Post on Website Usability
My third piece of evidence is a discussion post from Info 202 (Information Retrieval System Design). For this discussion post, we were asked to evaluate a library’s homepage using Nielsen’s guidelines for homepage usability. I decided to evaluate the homepage/website of a large public library system. We now live in an age where people expect everything to be available on the internet, and library websites often provide features that let users place holds on books, renew books, access databases for pertinent information, and even access e-materials. It is therefore important to evaluate the usability of a library’s website and homepage, as they are the portal through which a library’s online services are delivered.
In my discussion post, I carefully studied all the features of the library's website using the guidelines outlined by Jakob Nielsen. This discussion post establishes my ability to apply a set of measurable criteria to a homepage in order to assess its usability. While the library's homepage was overall quite good (based on Nielsen’s usability guidelines), I still provided some suggestions on how the homepage could be further improved. This establishes my understanding of how services can always be further improved based on evaluation criteria and evaluation data. For all of these reasons, I submit this piece of evidence to demonstrate my ability to determine the usability of an online service through evaluation using measurable criteria.
I have decided to remove this document for privacy reasons.
4. Independent Evaluation of a Children's Storytime Program
My final piece of evidence is an independent evaluation I conducted of the preschool storytime program at a public library. I relied on observation as my method of evaluation, and I developed the evaluation criteria myself, drawing on my years of experience as a library volunteer. I evaluated the quality of the storytime program based on factors such as the level of interactivity it included and how well the program doubled as a teaching moment.
By conducting this independent evaluation, I learned many things about the evaluation process and about measurable criteria. First, I realized that the set of criteria I created had a very limited scope. The criteria I developed were intended to measure the quality of a children’s storytime program; however, because they were based solely on my own experience, I could only state that the program was good in my personal judgment. This leads to the second lesson I learned about creating a set of established criteria: it should be created only after consulting others (preferably experts on the topic the criteria are intended to measure), and it should be grounded in past research and known best practices. This helps to create a standard set of criteria that would be useful to a wider audience of librarians and organizations, and one that would reliably lead to the development and provision of better programs and services. Third, after my attempt at creating a set of criteria, I realized that criteria can, and should, be updated over the years as new studies and new best practices emerge. It is only by doing so that criteria can remain relevant and contribute to the constant improvement of library services and programs. For all of these reasons, I submit this piece of evidence to demonstrate my understanding of the importance of measurable criteria and evaluation in the LIS field.
The document containing my observations and evaluation of this library's storytime program can be accessed below.
Although I plan to work in a public library after I graduate, I am sure that evaluation will be an important part of my job responsibilities no matter what type of library I work in. It is only through evaluation that one can discover areas of weakness. I plan to participate not only in the evaluation of services, programs, and resources, but also in continual self-evaluation during my time as a librarian. Of course, evaluation means nothing without measurable criteria to back it up. I will regularly reference officially established criteria such as the RUSA guidelines, and I will also strive to stay aware of the established guidelines for other areas of librarianship. Through constant evaluation, I hope to achieve continual improvement as a librarian.