Introduction:
Evaluation of activities and services needs to be an ongoing process at every information organization, and LIS graduates should feel comfortable with and practiced in conducting evaluations. Librarians and information professionals are usually the ones reviewing the services, resources, and activities provided, since they utilize these services alongside their users. It is essential to gather data before, during, and immediately after each event so that library personnel can evaluate how efficiently each program operates. Professionals also keep a close eye on programs and services because understanding what works, and having data to back such claims, can aid grant applications, explain outcomes to board members and supervisors, and shape projects that will benefit the community even more in the future. The same expectations apply to virtual programming. In a technology-driven world, it is beneficial to collect the data offered by the analytics of social media apps and online platforms, whether that platform is YouTube, Instagram stories, Zoom, Facebook Live, or another service. For this competency, I will demonstrate why it is worth devoting considerable effort to understanding a logical method of evaluating and how that method benefits programs and services.
User research is critical for libraries. Simply put, if the library does not know what its patrons need, it cannot provide for them. By using evaluation as a form of reflection, LIS professionals can better respond to the needs of people in the community. Results give visible proof of the value of libraries to the user community, as discussed by Susan Alman in her chapter on communication, marketing, and outreach techniques (Hirsh, 2015, pp. 374–386). Budgets are another important factor to consider: the budget must be updated every fiscal year to cover the costs of programs and services, as well as collections, facility upkeep, technology, and staff. Advocacy efforts are strengthened by evidence from research methodologies that explain user needs (Hirsh, 2015, pp. 374–386). The goal of understanding the fundamentals of evaluation may feel overwhelming; however, a well-done evaluation serves internal library needs by helping the library achieve new goals, establishes a positive external presence in the public eye, allocates scarce resources to where they are most effective, provides a better understanding of patrons, and, most importantly, serves community needs.
Evaluation Steps
Evaluation is an organized procedure with distinct phases. A "typical" assessment strategy is shown below using a logic model. In competency L, where I describe outcome-based planning and evaluation (OBPE), I briefly cover program assessment. In this competency, however, I want to focus on establishing a logic model that is relevant to diverse groups and an evaluation process that adapts to different experiences. In other words, because no two groups will approach evaluation measures the same way, there should be a straightforward procedure. I believe OBPE applies best to committees and other groups whose members are usually on the same page or in the same division. For example, in my IMPACT projects, I am not always paired with people from my own division, and each project is approached differently; yet I noticed that the basic logical steps are the same. Thus, I will approach this competency as I would with various people and their diverse experiences (i.e., those not in the LIS program). This logic model is based on the W.K. Kellogg Foundation's "Logic Model Development Guide" (2004), which, in my opinion, presents proactive, outcome-oriented results grounded in realistic practice.
Creating a Logic Model
Step 1: Plan the evaluation with questions and indicators such as the focus area, the audience, questions, information use, and technical assistance.
Step 2: Design instruments that best fit the data that needs to be collected.
Step 3: Collect data and consider sample size, area, and how the instrument will be distributed.
Step 4: Enter and clean the gathered data after the response or collection period ends.
Step 5: Analyze the data with software tools.
Step 6: Interpret the results with the team.
Step 7: Create the report with any notable questions or address any problems.
Step 8: Reflect and communicate with the team to present to the audience.
A logic model is essentially a comprehensive, graphical representation of a program. It defines the program's desired results as well as the route the program will follow to accomplish those outcomes. It is also a dynamic document; it should evolve in tandem with what the team finds out. Developing a logic model is an element of good program design, not of evaluation itself; I include this stage because developing a good design and strategy for the team can save time and avoid stressful situations. Although logic models are meant to support program planning, creating one will significantly increase the quality of the evaluation (Rossi et al., 2019) by:
Creating a language that is understood by all members of the team.
Specifying the program's underlying assumptions.
Increasing communication while also fostering clarity and transparency.
Contributing to the ongoing enhancement of quality.
Ensuring that objectives, actions, and results are all aligned with one another.
In the following sections, I will discuss in further detail the mindset of evaluation, the many types of evaluation, the instruments commonly used for evaluation, and finally, the process of producing evaluation data and of analyzing and interpreting that data as a team.
Getting into the Evaluation Mindset
Evaluating anything requires a certain frame of mind in addition to being a logical procedure. It is far more important to understand how to think critically about evaluations than it is to master the specific methods involved in carrying out a formal evaluation. An evaluation mindset is similar to navigation in that it uses the environment, guides, maps, and data to contextualize the whole trip and determine whether one is on the correct track. A mindset for evaluation is to be curious about how things work, to think critically, to reflect often, and to question basic ideas and assumptions. It takes humility as well as confidence in the task and openness to try again. Students in LIS will benefit from developing their critical thinking skills, as it will make them better leaders, builders, and achievers of certain goals.
Types of Evaluation
There is no one correct or clear method for constructing an evaluation. At certain phases of a program, specific sorts of assessments are advantageous. Each of these categories covers unique issues and may use distinct techniques. The three types of evaluation I will discuss are front-end, formative, and summative (or impact) evaluation. The evaluation will be significantly better when one aligns the outcome with the method:
Front-End Evaluation: When contemplating the development of a program or attempting to get a better understanding of your community or its members.
Questions it answers:
What does the audience need?
What concerns do they have?
What are they excited about?
What program is best for this audience and these outcomes?
What is the audience’s incoming knowledge and understanding?
Formative Evaluation: At the beginning of your program or when you are in the process of putting it into action.
Questions it answers:
Will this work?
Is it working?
How might we improve it?
Is it meeting our audiences’ needs?
Summative (or Impact) Evaluation: After your program has been put into action and is functioning effectively enough to evaluate its effects.
Questions it answers:
What impact did our program have on stakeholders?
Did our efforts achieve what we hoped to achieve?
Did we have our desired outcomes?
Instruments used for Evaluation
Data collection instruments might take the form of a survey, an interview form, a feedback board, or any other tool that facilitates data collection. Research methodologies such as action research, bibliometrics, case studies, content analysis, correlational research, ethnography, historical research, and surveys are a few additional examples (ALA). The survey is one of the most common forms of data collection. Two terms I often encounter in the literature are "validity" and "reliability," and both are important in research and assessment. A "valid" instrument has been shown to measure what it is meant to measure, and a "reliable" instrument keeps its level of accuracy over time and across settings. For example, a test of whether a person likes math is not a valid way to tell whether that person will pursue a STEM field. In general, closed-ended questions make information easier to gather and evaluate; these may be multiple-choice questions with pre-determined responses or rating scales. Incorporating meaningful response categories for gender, race, ethnicity, income, and other personal inquiries further communicates how much the researcher values respondents. Open-ended questions require more work to answer and interpret, so they should be used sparingly; they are preferable when one does not know what replies will be given or when extensive answers are required. Evaluation tools that feel like a natural extension of the program will elicit a more positive response (Kolderup Flaten, 2008).
Once all of the data has been entered and cleaned, the team is ready to analyze it. Here, "analyze" refers to condensing and representing the data, and possibly conducting statistical tests. Most of the time, the data will call for descriptive statistics, such as percentages, to illustrate groupings; if demographic data are available, it is typically easiest to begin there. Qualitative data will likely need to be "coded" before further analysis: the team creates a set of categories to summarize the replies, labels each response with its categories, and then tabulates the number of responses in each category. As part of the coding process, it is important to develop a "code book" containing a description of each category and sample replies that fit it (Osborne & Nakamura, 2000).
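The coding and tabulation steps described above can be sketched in a few lines of Python. The code book categories, keywords, and sample responses below are entirely hypothetical; a real code book would pair written category definitions with manually coded sample replies, rather than relying on simple keyword matching.

```python
from collections import Counter

# Hypothetical code book: each category is signaled by keywords that may
# appear in an open-ended survey response. A real code book also carries a
# written definition and sample replies for each category.
CODE_BOOK = {
    "staff": ["librarian", "staff", "helpful"],
    "collections": ["book", "collection", "calculator"],
    "facilities": ["room", "quiet", "seating"],
}

def code_response(text):
    """Label one open-ended response with every matching category."""
    text = text.lower()
    matches = [cat for cat, keywords in CODE_BOOK.items()
               if any(word in text for word in keywords)]
    return matches or ["other"]

def tabulate(responses):
    """Count coded categories and report each as a percentage."""
    counts = Counter(cat for r in responses for cat in code_response(r))
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

# Made-up responses standing in for real survey data.
responses = [
    "The librarian was very helpful.",
    "Please buy more graphing calculators.",
    "I wish there were more quiet study rooms.",
    "Staff answered my question quickly.",
]
print(tabulate(responses))
# Prints {'staff': 50.0, 'collections': 25.0, 'facilities': 25.0}
```

The descriptive percentages this produces are exactly the kind of grouping summary the team would then interpret together in the next phase.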
Interpretation and Report of the Data
During the analysis phase, I typically collect and summarize the data. While these summaries are essential, the context is still required. The next stage is to determine the significance of the data. Every effective assessment must contextualize the data and explain to the stakeholders what questions the data answers and how to utilize the data. Frequently, the why is the most crucial aspect of interpretation. The data may not immediately explain why a given event occurred the way it did. There are a number of other concerns that should be addressed during interpretation, such as whether or not the findings corroborate or contradict those of prior research, how the findings compare to staff expectations, and whether or not the findings suggest any unprecedented challenges. It is important that the findings address the future of the information institution.
An evaluation report has at least three basic sections: one describing the project, one describing the evaluation, and one describing the outcomes. Beyond the main body, it may also include a table of contents, a list of attachments, a logic model, and suggestions for future action. Funders, partners, objectives, target populations, projected timelines, and planned activities are all fair game when outlining the initiatives (Hirsh, 2022). The description of the evaluation design may cover the sample, the methods, how and when the data were collected, the number of respondents, the response rate (if appropriate), and any potential biases (Kolderup Flaten, 2008). Finally, the explanation of the findings may include a summary of the outcomes, conclusions, and suggestions. Even if they are only a line or two long, these additions guarantee that the results are given in context and help the reader better understand them.
Concluding Thoughts
Everyone goes through nearly the same logical steps when evaluating, but no two evaluations are done the same way, and strategies vary by team. It is important that data be clearly stated and easy to measure so that the desired focus can be analyzed and improved. Evaluation also enables the library to establish whether its goals are being met and to discover how it may become more effective. I discovered that data founded on quantifiable criteria can be used to tell a narrative that highlights crucial results. This competency will be completed by presenting and discussing various programs and services in terms of how they indicate mastery.
Libraries must justify their activities and services by describing what they are doing and why, and by showing how they have effectively enriched the lives of their patrons (Hernon & Schwartz, 2015). The goal of collecting data frequently (Rossi et al., 2019, pp. 291–306) is to help libraries enhance a program, strengthen library advocacy, comprehend a program's impacts, and guide decisions about the program's financing or structure. Different procedures and methods may be necessary for each assessment, making each evaluation unique. The evaluation process is influenced by the objective of the evaluation, the conceptual and organizational structure of the program being assessed, and the available resources (pp. 31–91). By basing their planning and decision-making processes on factual information, libraries are better able to demonstrate their worth to the community and increase the quantity and quality of the services they provide (Hernon et al., 2014).
Personally, I have found great value in working with groups outside my own field; I was offered perspectives I would not have noticed working alone. My first evidence piece was uploaded to competency E, but I want to further explain our process here because of how much the team process taught me. Our meetings often ran over an hour, with most of the time dedicated to how we wanted to evaluate our project. I had been used to instantly collecting data and never understood why it left me feeling overwhelmed afterward; when we took a logical approach to our process and then explored the reasoning, it sincerely saved a lot of time. For my second piece of evidence, I used a well-known guideline to evaluate important factors in library service. Lastly, I was a bit hesitant to share my third piece because it is still a work in progress, but the process was challenging and a great lesson once I realized my team members were not in my department and had no idea what it meant to evaluate a service. Although it is still in development, I want to highlight how far my team has come in balancing out our different levels of experience. Their experience is very important, and I think that as long as there is a logical model to follow, anyone can pick up how to approach the evaluation process.
By fostering an atmosphere where decisions are based on facts, evaluation can save libraries time and money. Evaluation helps libraries demonstrate how they may successfully fulfill their objectives and engage with their users, and it illustrates the importance of libraries by demonstrating their relevance to their communities. Furthermore, the outcomes of an adaptive technique may enhance how work is done inside the library. This competency demonstrates that MLIS graduates can become acquainted with the subject of evaluation and succeed in conducting assessments of library programs. Again, I have only scratched the surface of the subject of assessment; these notes are far from exhaustive. Below I have noted two questions I have asked myself whenever I felt stuck during an evaluation process (Hirsh, 2022; Kolderup Flaten, 2008; Matthews, 2018; Rossi et al., 2019).
What intended outcomes does this program have?
Ideally, this question is asked before starting any program. Utilize "backward design" to create better and more successful programs by beginning with the program's intended conclusion. In backward design, libraries describe their desired goals before contemplating how to assess them or what activities may help accomplish them.
What is the level of outcome desired?
Project outcomes define the changes we want to see as a result of the project's actions. The level at which a project's results are measured is not limited to the individual; results may also be seen at the communal, ecological, or institutional level. The results should support the organization's strategy and perhaps even a local or regional plan. Possible internal library results include revisions to the whole library system or to a single department or branch. The outcomes of a program should indicate the results or effects that occur as a direct consequence of the activities and services provided by the program.
When I first started creating evaluation processes, for both my MLIS courses and my professional work, I always felt overwhelmed. Much of this probably came from my inexperience with logic models, my unfamiliarity with the evaluation mindset, and the overwhelming amount of collected data. I had a better idea of where I was in the evaluation process once I learned about research methods. In competency L, I mention the assortment of qualitative and quantitative methods for measuring user satisfaction, economic impact, and social impact in library programs and services. On the qualitative side, interviews, open-ended survey and questionnaire comments, and focus groups capture users' feedback in their own words. These research methods provided a professional and tangible performance objective that I did not know where to apply until I had a fundamental understanding of an evaluation strategy plan and its content. I had to improve my planning process in order to better gauge the perceived value of particular stages of planning. As a result of SJSU, I am now able to assess the quality of information presented to me. I now understand how to establish new programs and how to build assessment criteria for existing ones based on an institution's purpose and vision statements. When conducted correctly, evaluations provide valuable insights for a wide range of stakeholders, from internal groups like workers and administrators to external groups and grant funders.
This evidence piece was used in competency E to evaluate information retrieval systems. I present it here again because we were also evaluating the website layout of the Easton Area Public Library, which we felt did not meet user expectations of contemporary websites. From the user's point of view, websites are essential user services. They are especially important for public catalogs, because they are often the only way a user can search a library's collections and find relevant research materials. We examined how easy the site was to navigate, how well the collections were developed, how clearly they were categorized, and how satisfactory the logistical information was. Giving patrons access to information and programs through their websites is one of the most valuable services libraries now provide.
For this group assignment, my classmates and I evaluated the organization of the website based on usability and ease of use. Each of us was responsible for a section of the paper; mine was the redesign of the previous website's site map, which I created in Canva, along with a flowchart of the site's present structure. In our last meeting for this project, we all went through the revised site map together. A library website should be professionally designed and constructed (Hursh, 2020), implying that there are objective standards for evaluating a site. This work demonstrates that I am capable of conducting an evaluation of a service, such as a library website, using rigorous standards, and of proposing methods that provide more quantifiable criteria for insight into how to improve the service. If I were to add another segment to this assignment, I would recommend surveys and usage data for additional quantifiable insights. It would also have been helpful to reference other successful library websites to push for change.
For my second piece of evidence, I present my review of a public library's phone reference service. I used the ALA's RUSA guidelines, since they are generally recognized and cited in professional literature. I describe my experience with a librarian and how I applied the RUSA principles as a kind of quantitative criteria to assess a reference transaction. This exercise taught me how the RUSA guidelines can help librarians conduct a successful conversation with their patrons. I also believe they can help librarians evaluate which areas worked well, which areas to improve, and which areas to avoid in their next conversation. Though librarians are taught what reference research may entail and how to obtain material for users, we must also learn how to effectively transfer those abilities to the online context.
If I could add to this assignment, I would include an instrument table with statistical values and would call the same branch at various times, on other days, or call different branches. I also realized that I had never tried to chat virtually with a librarian; it would have been beneficial to research the subtle differences between these kinds of interactions.
My participation in the IMPACT professional development program held at the El Camino College (ECC) library is discussed in the competency C section of my ePortfolio. I am working with another group for this project, and these members have not previously used evaluation methods or presentation processes. The project I am presenting for this competency is dedicated to reevaluating the calculator program at ECC. (The competency C IMPACT project focuses on rewriting the study room policy to be more welcoming of solo study.) This is an ongoing project with a completion date set for December 2022. My team members and I are still collecting data, but we have the tools necessary to build assessment criteria to examine the improvement requirements. Below you can see our IMPACT grid, brainstorming slide template outline, and a work-in-progress PowerPoint presentation. This project also emphasizes the redesign of the calculator policy or a call for funding based on quantifiable measures.
Our mission is to ensure that students who face barriers to learning due to a lack of resources have access to the technology they need. Graphing calculators are an integral component of technology in the classroom, yet a graphing calculator is a costly piece of equipment that a student may not continue to use after completing their math requirements, making it prohibitive for low-income learners to acquire one. I offer this as evidence because, although the ECC library has been successful in circulating calculators for semester-long checkouts, there is a considerable scarcity: many students who came to use our program ended up empty-handed. We utilized the stories and experiences of the students who were unable to check out a semester-long calculator (which is how the problem was raised), and we are still collecting statistics through survey data. After gathering the data, cleaning it into a presentable form, and interpreting it as a team, we will present possible solutions to faculty and staff. The next phase will be to decide what we want to share with others, such as a clear policy on where to go and what to have ready, a real-time count of the calculators left, and even short-term checkout solutions for when the semester-long calculators are all in use. So far, I have created the calculator survey, the slideshow template for all of us to edit, and the brainstorming timeline of who will present what. The presentation is not done, but I still wanted to present our current progress.
The importance of assessment in the field of information science was a subject that ran throughout the curriculum. Evaluating how well a program or service meets the demands of its target population is an important step in improving it and maintaining its level of service. Through careful planning of review, information professionals can measure their communities' knowledge needs and then use that information to guide their choices. As I continue to work in the information world, I want to keep in mind that feedback is a chance for me to improve as a professional librarian and to give patrons, students, and faculty the finest service I can offer. Measurable criteria will always, in my opinion, be a set of specific guidelines that result in an honest evaluation of a particular service and an explanation of it to others; measurable standards are essentially justifications backed by real-world evidence. In addition, I find that conducting such evaluative projects and activities as an information professional is essential for making decisions about programs and services and for establishing their value, which in turn helps prove and communicate the value that the library provides as a whole. My courses have equipped me with the knowledge necessary to carry out meaningful assessments of programs and services in any information organization where I may find employment in the future.
Cole, N., Walter, V., & Mitnick, E. (2013). Outcomes outreach: The California summer reading outcomes initiative. Public Libraries Online. http://publiclibrariesonline.org/2013/05/outcomes/
Hernon, P., & Schwartz, C. (2015). Exploring the library's contribution to the academic institution: Research agenda. Library & Information Science Research, 37(2), 89–91. https://doi.org/10.1016/j.lisr.2015.05.001
Hirsh, S. (Ed.). (2022). Information services today: An introduction. Rowman & Littlefield.
Kolderup Flaten, T. (2008). Management, Marketing and Promotion of Library Services. Based on Statistics, Analyses and Evaluation. Berlin, New York: K. G. Saur. https://doi.org/10.1515/9783598440229
Matthews, J.R. (2018). Evaluation: An introduction to a crucial skill. In K. Haycock & B.E. Sheldon (Eds.), The portable MLIS: Insights from the experts. (2nd ed., pp. 255-264). Libraries Unlimited.
W.K. Kellogg Foundation. (2004). Logic model development guide. The Trustees of the W.K. Kellogg Foundation.
Osborne, L. N., & Nakamura, M. (2000). Systems analysis for librarians and information professionals (2nd ed.). Libraries Unlimited.
Rossi, P. H., Lipsey, M. W., & Henry, G. T. (2019). Evaluation: A systematic approach (8th ed.). SAGE.