Evaluation is essential for information professionals to assess the effectiveness, efficiency, and impact of services and programs. As Hernon and Altman (2010) emphasize, "Evaluation transforms abstract values into concrete measures of institutional performance," enabling data-driven decisions about resources, services, and strategic planning in an accountability-focused environment. Meaningful evaluation requires carefully selected measurable criteria—both quantitative and qualitative standards—that provide objective frameworks for examining programs and services. While quantitative metrics offer comparable data points like usage statistics and attendance figures, qualitative approaches provide essential context through methods like interviews and open-ended surveys. Together, these approaches create a comprehensive picture of program effectiveness (Applegate, 2013).
Evaluation in information settings includes input-focused measures (e.g., collection size and budget allocation) as well as outcome-based approaches that emphasize tangible impacts on users and communities. Strategic planning and evaluation are deeply interconnected processes in information organizations. As Rosenbaum (2019) notes, "the process of creating and implementing a strategic plan focuses the attention of the organization on its mission and identifies places where resources might be more effectively directed while at the same time reinforcing a sense of teamwork." This alignment between strategic goals and evaluation metrics ensures that assessment activities support institutional priorities and provide meaningful feedback on progress toward organizational objectives.
Digital environments present both unique challenges and opportunities for evaluation. They generate unprecedented data on user behavior and interactions, but translating this information into meaningful insights requires careful consideration of what constitutes success in virtual spaces. For digital collections and virtual reality experiences, traditional metrics like circulation statistics must be adapted or supplemented with engagement measures, user satisfaction ratings, and learning outcome assessments that reflect the unique affordances of these technologies.
In today's rapidly evolving information landscape, evaluation serves not merely as a retrospective activity but as a forward-looking strategy that enables professionals to adapt to changing user needs and technologies. Effective evaluation practices acknowledge the dynamic nature of user expectations and technological capabilities, particularly in digital environments where user behaviors and service delivery models continue to shift. Information professionals must continually refine their evaluative approaches to remain responsive to these changes while maintaining focus on core institutional missions and user needs.
By applying well-defined, measurable criteria to assess programs and services, information professionals demonstrate accountability, facilitate continuous improvement, and ultimately enhance the value they provide to their communities. This systematic approach to evaluation ensures that information organizations remain relevant, responsive, and effective in fulfilling their essential roles in an increasingly complex information ecosystem.
My ability to evaluate programs and services using measurable criteria is demonstrated through three significant projects that showcase my comprehensive approach to assessment across diverse information contexts.
1. My development and implementation of user experience surveys for virtual reality environments, specifically the "Freedom to Read" and "Children Draw War, Not Flowers" projects, provide compelling evidence of my commitment to quality user experience (UX) in digital environments. These surveys exemplify my ability to create multifaceted evaluation instruments that capture both quantitative and qualitative data. For the "Freedom to Read" VR environment, I developed a survey that established clear success criteria along five measurable dimensions: visual appeal, ease of navigation, informational value, intuitive design, and narrative effectiveness. Each criterion was assessed on a 1-5 scale that generated quantifiable data points, allowing for statistical analysis of user experiences. This approach enabled me to establish benchmarks and track improvement over time, with results indicating that 60% of users rated the navigation as "easy" or "very easy" (ratings of 4-5), while 30% found it moderately easy (rating of 3).
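Percentages like those reported above come from straightforward tallies of the 1-5 ratings. The following sketch shows how such a breakdown could be computed; the response values and category bands are hypothetical stand-ins for illustration, not the actual survey data.

```python
from collections import Counter

# Hypothetical 1-5 navigation ratings from ten survey respondents
navigation_ratings = [5, 4, 3, 4, 5, 3, 2, 4, 5, 3]

def rating_breakdown(ratings):
    """Return the share of responses in the easy (4-5), moderate (3),
    and difficult (1-2) bands."""
    counts = Counter(ratings)
    total = len(ratings)
    return {
        "easy": (counts[4] + counts[5]) / total,
        "moderate": counts[3] / total,
        "difficult": (counts[1] + counts[2]) / total,
    }

print(rating_breakdown(navigation_ratings))
# {'easy': 0.6, 'moderate': 0.3, 'difficult': 0.1}
```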
The evaluation design intentionally balanced quantitative metrics with qualitative feedback through open-ended questions that prompted users to describe specific experiences, such as technical difficulties or navigation challenges. As one survey respondent noted, "While I think the space does a great job of showcasing different books that are being targeted and focusing on the history of intellectual freedom and freedom of speech, I do think more could be said about the challenges. Who are the groups bringing the challenges? What are the challenges? This is a crucial part of this conversation." This feedback directly informed targeted improvements to the information architecture of the timeline room and historical context sections, demonstrating how my evaluation approach facilitated continuous improvement.
The "Children Draw War, Not Flowers" survey further showcased my ability to develop evaluation metrics appropriate to specific project goals and contexts. Beyond standard usability measures, I incorporated criteria specifically designed to assess the exhibit's effectiveness in honoring "the continuing work and resilience of Ukrainian children and librarians while illuminating the ravages of war on Ukrainian cultural heritage and its people." The comprehensive evaluation approach I developed for these projects demonstrates my understanding that effective evaluation must align with program goals, incorporate multiple data types, and establish clear pathways for applying results to program improvements.
2. My research project "Examining the Applications and Benefits of Virtual Reality for San Jose State University's Special Collections and Archives" demonstrates my sophisticated understanding of evaluation methodology design. In this study, I developed a comprehensive multi-method approach to program assessment that combined expert purposive sampling with convenience sampling, creating a research design that maximized both depth and breadth of evaluation data. As detailed in my methodology section, I implemented a strategic two-phase process that began with in-depth interviews of ten cultural heritage professionals with VR experience, followed by web-based surveys shared with implementation teams at these institutions.
This approach exemplifies my understanding that effective evaluation requires gathering data from multiple stakeholder perspectives using complementary methodologies. The interview protocol I developed included targeted questions addressing specific evaluative criteria, such as: "In what capacity has virtual reality been the most impactful at your institution and why? How has it been the least impactful?" and "Overall, do you believe the investment in virtual reality is worth it? Can you describe the costs and the returns?" These questions were specifically designed to elicit both quantitative assessments (returns on investment, usage statistics) and qualitative insights about implementation challenges and best practices.
My methodology also recognized the importance of collecting concrete outcome data, as I noted: "Data such as ticket sales, attendance records, educational outcomes, online visits, etc. would be beneficial to the study if it demonstrates the impact or lack of impact VR has on the cultural heritage institution." This focus on measurable outcomes demonstrates my understanding that effective evaluation must extend beyond subjective impressions to include concrete, quantifiable metrics of program success. The structured analysis approach I outlined for processing this data—identifying themes, systematic coding, and cross-institutional comparison—further demonstrates my competency in designing rigorous evaluation frameworks that produce actionable results.
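To make the cross-institutional comparison step concrete, the sketch below tallies how many institutions raised each coded theme after a systematic coding pass; the institution names and theme labels are hypothetical examples rather than findings from the study.

```python
from collections import defaultdict

# Hypothetical (institution, theme) pairs produced during a systematic
# coding pass over interview transcripts; names invented for illustration
coded_segments = [
    ("Institution A", "staff training burden"),
    ("Institution A", "increased visitor engagement"),
    ("Institution B", "hardware maintenance costs"),
    ("Institution B", "increased visitor engagement"),
    ("Institution C", "increased visitor engagement"),
    ("Institution C", "staff training burden"),
]

# Cross-institutional comparison: how many institutions raised each theme
theme_coverage = defaultdict(set)
for institution, theme in coded_segments:
    theme_coverage[theme].add(institution)

for theme, institutions in sorted(theme_coverage.items(),
                                  key=lambda item: len(item[1]),
                                  reverse=True):
    print(f"{theme}: mentioned by {len(institutions)} institution(s)")
```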
3. My work on developing the Slover Library strategic plan illustrates my ability to implement comprehensive program evaluation across an entire organizational context. This project required designing evaluation frameworks for multiple strategic goals simultaneously, demonstrating my capacity to develop tailored assessment approaches for diverse program types. For each strategic objective, I created a matched set of implementation actions and specific assessment criteria, ensuring that evaluation was embedded throughout the strategic planning process rather than added as an afterthought.
For example, for the objective of improving the library's online presence, I established specific measurable criteria including "an increase of 15% in social media user engagement and website views in the initial implementation of the marketing plan and 20% annually in subsequent years." For adult and teen programming objectives, I designed a QR code-based survey system that would collect real-time feedback on programming quality and relevance. This approach demonstrates my understanding that evaluation instruments must be user-friendly and integrated into service delivery to achieve high response rates.
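Benchmarks such as the 15% and 20% engagement targets translate directly into simple growth checks against analytics data. The sketch below illustrates one way such a check could be run; the baseline and current figures are hypothetical placeholders.

```python
def meets_engagement_target(baseline, current, year):
    """Compare growth in an engagement metric (e.g., social media
    interactions or website views) against the plan's benchmarks:
    15% growth in the first year, 20% annually thereafter."""
    target = 0.15 if year == 1 else 0.20
    growth = (current - baseline) / baseline
    return growth, growth >= target

# Hypothetical annual website-view counts before and after the marketing plan
growth, met = meets_engagement_target(baseline=12_000, current=14_100, year=1)
print(f"Growth: {growth:.1%}, target met: {met}")
# Growth: 17.5%, target met: True
```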
The evaluation frameworks I designed for the strategic plan also demonstrated my ability to align assessment strategies with organizational capabilities and constraints. For the ESL program goal, I noted that "ESL teachers will meet with library directors as needed and report on student attendance, on the effectiveness of lesson plans, and thus, on meeting or not meeting learning milestones." This approach recognized the need to leverage existing expertise (ESL teachers) in designing appropriate assessment criteria for specialized programming. Similarly, for the GED online program, I noted that "graduation rates and student attendance will be recorded and monitored closely" as key metrics for program effectiveness, demonstrating my ability to identify appropriate evaluation metrics for specific program types.
My work on this strategic plan showcases my ability to design comprehensive evaluation systems that balance immediate performance assessment with long-term strategic improvement. The five-year evaluation timeline I established, with specific benchmarks and assessment points throughout the implementation period, exemplifies my understanding that effective evaluation must be ongoing and iterative rather than episodic.
The evidence presented demonstrates my ability to design and implement comprehensive evaluation frameworks using measurable criteria across diverse information contexts. From creating specialized assessment instruments for innovative virtual reality projects to developing organization-wide evaluation strategies for traditional library services, I have consistently demonstrated a sophisticated understanding of both the principles and practical applications of program evaluation.
In my future career, I will continue to apply these evaluation competencies to ensure that information services are continuously assessed and improved based on measurable criteria. I plan to explore more sophisticated evaluation methodologies, including randomized controlled trials for program assessment and longitudinal studies that track the long-term impact of information services on user communities. I am particularly interested in developing more effective methods for measuring the transformative impact of information services on user learning, personal development, and community engagement—areas that have traditionally been challenging to assess with concrete metrics.
To remain current in evaluation practices, I will utilize professional resources such as the Association of Research Libraries' Statistics and Assessment program, the Public Library Association's Project Outcome toolkit, and the American Evaluation Association's professional development workshops. I will also explore emerging methodologies for assessing digital services through publications like the Digital Library Federation's Assessment Interest Group white papers and the Journal of Web Librarianship's evaluation-focused special issues. These resources will help me continue to develop robust evaluation frameworks that combine traditional assessment metrics with innovative approaches to capturing the evolving impact of information services in increasingly complex digital environments.
Applegate, R. (2013). Practical evaluation techniques for librarians. Libraries Unlimited.
Hernon, P., & Altman, E. (2010). Assessing service quality: Satisfying the expectations of library customers. American Library Association.
Rosenbaum, L. G. (2019). Strategic planning for information organizations. In S. Hirsh (Ed.), Information services today: An introduction (2nd ed., pp. 295-308). Rowman & Littlefield.