The Schedule

 

Day One

Thursday, 17 August 2023

Morning Coffee Service Provided by Springshare

9:00 - 9:15

Opening Ceremony

Mary-Jo Romaniuk

10:00 - 11:00

Abstract:

Are you looking to enhance your statistical and evaluative methodologies for specific library services and resources? Do you want to learn how to effectively use data and other evaluative measures to support and make operational decisions? Look no further than this informative workshop, led by an assessment coordinator in a renowned research university library in the U.S.

During the workshop, the presenter will draw upon her extensive experience and expertise to guide participants through various examples of her own assessment projects and provide tips on how to select appropriate methods and analyses. She will also share her views on the importance of independent assessments for evaluating library programs and services and supporting evidence-based decision-making for the success of students and faculty.

 

The presenter, whose assessment projects have been published in national and international journals, will use these examples to highlight each project's goals, statistical methods and analyses, and other relevant references, enabling participants to gain a comprehensive understanding of the topic. Throughout the workshop, the presenter will also engage participants by discussing their own experiences with statistical analysis and methods in their projects. By the end of the workshop, participants will have gained practical skills and insights, drawn from real-life examples shared by the presenter, that they can apply in their own work. They will have a unique opportunity to expand their knowledge and improve their library practices.

 

Goals:

The goal of this workshop is to help participants improve their statistical and evaluative methodologies for specific library services and resources. Additionally, the workshop aims to teach participants how to effectively use data and other evaluative measures to support and make operational decisions, and to advocate the importance of independent assessments for evaluating library programs and services. By the end of the workshop, participants should have gained practical skills and insights that they can apply in their own work.

 

Outcomes:

Abstract:

A library website serves as the gateway to a library's resources and services. To provide a satisfactory experience for users, it is crucial that the website is user-friendly, accessible, and efficient. AI tools like ChatGPT can be effective in evaluating and improving library websites.

 

According to Jakob Nielsen, users tend to ignore web content that is complex or difficult to understand. Therefore, readability is an essential indicator to measure the complexity of the website's text. ChatGPT can analyze the website's text to determine its readability score and provide suggestions for improving the clarity and conciseness of the text.
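For reference, a readability score need not come from an AI tool at all; the classic Flesch Reading Ease formula can be computed directly. A minimal sketch (the syllable counter is a rough heuristic, not a linguistic standard):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores (roughly 0-100) mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The library is open. Ask staff for help."), 2))  # → 86.45
```

Scores above roughly 60 are conventionally read as plain language; either this formula or a ChatGPT prompt can flag pages whose text is too dense.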

 

Another crucial factor to consider is mobile-friendliness. With an increasing number of people accessing the internet through their mobile devices, library websites must be optimized for mobile users. ChatGPT can evaluate a website's mobile-friendliness and suggest improvements such as adjusting font sizes, optimizing images, and improving the site's overall responsiveness.

 

Accessibility is also a critical aspect of a library website. Libraries strive to provide equal access to their resources for all users, regardless of their physical or cognitive abilities. ChatGPT can review a website's code to identify accessibility issues and suggest improvements, including proper use of alt text for images, labeling of form fields, appropriate use of headings, and keyboard accessibility of interactive elements.
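Some of these checks can also be automated with a short script rather than an AI tool. An illustrative sketch using Python's standard-library HTML parser (the two rules below are deliberate simplifications of the corresponding WCAG requirements):

```python
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Flags two common issues: images with no alt attribute, and visible
    form inputs lacking an id (needed to associate a <label for="...">)."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag == "input" and attrs.get("type") != "hidden" and "id" not in attrs:
            self.issues.append("input without id for label association")

checker = AccessibilityChecker()
checker.feed('<img src="logo.png"><input type="text" name="q">')
print(checker.issues)
```

A full audit would of course cover far more (heading order, contrast, ARIA roles), but even simple rule-based scans like this catch frequent problems before a manual review.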

 

Finally, ChatGPT can evaluate a website's performance and suggest ways to improve it. Slow loading times and inefficient code can negatively impact the user experience. ChatGPT can identify areas of the website that are causing performance issues and provide recommendations to address them, such as optimizing images, minifying code, and reducing the number of HTTP requests.

 

In conclusion, ChatGPT can be a valuable tool for libraries looking to evaluate and improve their website. By analyzing content readability, mobile-friendliness, accessibility, and performance, libraries can make adjustments to provide a better user experience. A user-friendly, accessible, and efficient library website can be an essential tool for many people in their quest for knowledge.

11:10 - 12:20

Workshop: Nominal Group Technique (NGT) – a Focus Group Method for Planning and Brainstorming

Colleen Cook

Abstract:

Focus groups are often used to gather data in qualitative research. One key to successful focus group facilitation is creating group cohesion. This can be difficult for several reasons: for example, if subjects hold strongly divergent opinions, or if power dynamics prevent group members from freely offering dissenting views. Nominal Group Technique (NGT) is a focus group technique that can be used when group cohesion might be difficult to create. The method gives each participant equal standing and input. The workshop will explain the theory and process behind NGT, and participants will engage in a simulated NGT focus group.

Abstract:

In this presentation, the essential components and procedures involved in implementing the Conspectus and Checklist methodologies for assessing the library’s collection will be explored.

 

The presenter will delve into crucial topics such as the responsible parties for conducting the assessment, the frequency of assessments, and the data collected for quantitative and qualitative analysis. Attendees will also learn about the available tools for conducting the assessment and how the collected data is evaluated for suitability. Furthermore, the presentation will examine the positive impacts of the assessment, emphasizing how it supports students' academic success, promotes prudent library budgeting, and assists academic departments in achieving accreditation.

12:30 - 14:00

(12:30 - 2:00 PM)

Lunch Break - Provided by Utrecht Library or Sponsor

14:00 - 15:00

(2:00 - 3:00 PM)

Abstract:

Librarians around the world are proudly user-focused in making decisions about collections, services, and programs for the patrons who visit our physical libraries and our web-based collections. Assessment activities are a critical aspect of the work we perform, and most of us have some experience with managing surveys and interpreting findings. Conducting surveys is one of the most common ways we collect data on patron needs, through directly mailed letters and through links to surveys on our websites and in emails to groups. But do we always know whether our approaches are strong enough to support effective decisions based on the results of our surveys?

 

The goals of this presentation are to provide guidance on best practices for different types of sampling--ranging from convenience sampling to random sampling to stratified sampling--in order to strengthen the validity of findings and create a path for generalizability to the populations we serve. Issues that arise from different sampling techniques, such as confirmation bias and maturation effects, will be discussed with suggestions on how to limit their influence on respondents. In order to estimate the strength of the sample, statistical techniques such as Confidence Levels (CL) and Confidence Intervals (CI) will also be discussed, with hands-on examples of how to compute them. As a way to increase the reliability of the surveys we produce, models such as Inter-rater Reliability (IR) and Intraclass Correlation Coefficient (ICC) Analysis will also be introduced. Although most of our surveys focus on descriptive statistics, for cases involving inferential statistics, Effect Size will also be presented as a way to interpret the strength of particular variables about which we are asking our patrons to respond.
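As a rough illustration of the CL/CI computations mentioned above (not the presenter's own hands-on materials), the standard normal-approximation formulas for a survey proportion can be sketched as:

```python
import math

Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}  # z-values for common confidence levels

def sample_size(margin_of_error: float, confidence: float = 0.95,
                p: float = 0.5) -> int:
    """Minimum sample size for estimating a proportion (large population).

    p = 0.5 is the conservative default, maximizing the required n.
    """
    z = Z[confidence]
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

def confidence_interval(p_hat: float, n: int, confidence: float = 0.95):
    """Normal-approximation confidence interval for an observed proportion."""
    z = Z[confidence]
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

print(sample_size(0.05))  # → 385 respondents for ±5% at the 95% level
print(confidence_interval(0.62, 385))
```

With the conventional 95% confidence level and a ±5% margin of error, the formula yields the familiar 385-respondent minimum; the interval function then shows how precisely an observed proportion (here, a hypothetical 62% agreement rate) is pinned down.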

 

Although these terms might seem foreign to some of the participants in this session, the underlying theme of this presentation will be to demystify these techniques. I will introduce them in ways that make sense to people who do not have a background in statistical procedures, and a major goal is for participants to walk away from this presentation feeling confident that the new knowledge they have gained will help them in their daily work.

Abstract:

The goal of the presentation is to give an overview of how the National Library of Estonia collects, evaluates and uses statistics and qualitative measures in setting up the library and its services in a temporary location for the period of extensive renovation.

 

The National Library of Estonia (NLE) is a legal person in public law that operates pursuant to the National Library of Estonia Act. Its mission is to preserve our cultural heritage and to be a bridge between people and knowledge. For the last five years we have been developing a service-based organisation, which has helped us understand the value we offer to clients and develop services in cooperation with them.

 

The last two years have been full of changes for the NLE. Our library building is under extensive renovation and is planned to re-open in 2026, offering a wide range of novel, user-centered services. Currently the NLE is operating on temporary premises ten times smaller than our own building, and the majority of our collections are packed away, inaccessible to users. We set an objective to continue offering all our services despite the much smaller physical space. In introducing the changes, our priority was continuous and consistent measurement of performance indicators that help us understand the quality of the services offered and the expectations of users.

 

In my presentation I will describe the main steps and activities in setting up and measuring a set of reliable, accurate and comparable data that comprises, on the one hand, library statistics and, on the other, the key performance indicators (KPIs) used for the effective management of the services and the library as a whole.

 

The library’s management system stipulates that a review of the library’s KPIs and the results of public services is carried out every four months. When we started building the new library environment, we decided to increase the frequency of monitoring, measuring and evaluating library visits, user registration and activity, and the provision of public services – thus we measured them monthly.

 

The quality of services and of service provision has been monitored for nine years using the Net Promoter Score (NPS) method; last year the result decreased from 89% to 83%.
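For readers unfamiliar with the method, the NPS computation itself is simple. A minimal sketch using the standard 0-10 survey scale and promoter/detractor cut-offs (these conventions come from the NPS method in general, not from NLE specifics):

```python
def net_promoter_score(ratings):
    """NPS on the standard 0-10 scale: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but toward neither group.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

print(net_promoter_score([10, 9, 9, 8, 7, 10, 6, 9]))  # → 50
```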

In 2022 the NLE started to use SAS Visual Analytics for reporting, data exploration and analytics. This data visualisation tool enables users to understand and examine patterns, trends and relationships in data, in order to make analysis and draw conclusions.

 

The presentation shows performance indicators of services by months and four month periods, compares the data of recent years and evaluates trends related to the changed environment.

Frequent measurement has supported various operational decisions, such as improved communication, more efficient processes, and the design of future services.

 

We have identified the purposes for which measurement and evaluation are necessary:


15:00 - 15:30

(3:00 - 3:30 PM)

Coffee Break - Provided by Utrecht Library or Sponsor

15:30 - 16:30

(3:30 - 4:30 PM)

Abstract:

While there are a myriad of tools that can be used to analyse quantitative data, computer-aided qualitative data analysis software (CAQDAS) seems less common in the library world. In this presentation, Paul will provide an overview of how one such software package, NVivo, can be used to help analyse the results of qualitative data gathered as part of an online survey.

 

We’ll discuss the challenges of working with and making sense of a large amount of text. We’ll then take a look at how NVivo “understands” qualitative data gathered during an online survey, and how you can quickly code written responses into categories and themes that can be used to help you understand what your patrons are trying to tell you.

 

This will be a show-and-tell presentation, but by the end, participants will have an understanding of how NVivo can be used to make sense of a large number of qualitative survey responses. Participants will understand what it means to code data, how to organize those codes into themes, and how to slice and dice this information by the demographic, or classifying, data that is also collected by an online survey tool. This presentation should be of interest to all types of libraries.

Abstract:

Academic librarians spend considerable time and effort planning and delivering instruction/presentations and providing research consultations on their campuses. In Canada, the number of presentations to groups reported in the Canadian Association of Research Libraries (CARL) Statistics (2018-2019) ranges from 197 sessions to 2,326 sessions, delivered to 8,984 and 68,644 participants, respectively. The provision of research consultations is a standard service provided by academic librarians but is not currently reported to CARL. In the fiscal year 2021-2022, librarians at my Library (University of Alberta) delivered 1,428 research consultations (mostly one-to-one interactions) to undergraduates, graduate students, and faculty members. This work is significant in meeting curricular and research needs across campus. However, in times of financial crisis, there are many demands on library staffing resources to maintain well-established, long-standing services and to launch potential new initiatives. Effective collection, gathering, and use of statistics and other evaluative tools can help managers make effective decisions and advocate for additional resources. Data describing impact can also be shared with key stakeholders, who can become advocates supporting requests for resource investment.


In the session, I will highlight several tools libraries can use to describe, highlight, evaluate, and advocate for resource allocation to support teaching and research consultations. I contextualize the presentation by centering it on an exercise undertaken in the Faculty Engagement Unit for the Social Sciences & Humanities at the University of Alberta and on projects underway to re-design basic information literacy instruction and to evaluate research consultations from the librarian and user perspectives.

 

Context for Session Organization

Our University has undergone several years of significant budget cuts, and the Faculty Engagement unit was facing a substantial reduction in staffing due to Library reorganization. This unit serves the Faculties of Arts, Education, Business, Law, and our French Language Campus, located separately in Edmonton, a significant distance from the main campus. Currently, librarians support several subject areas that align with their academic background or skills (e.g. music knowledge, languages). This approach results in uneven labour distribution because of shifting needs that depend on many uncontrollable factors, with some librarians undertaking proportionately more sessions than others. Given the upcoming staffing reductions, this model would no longer be sustainable. We needed to find a way to distribute the labour better while continuing to recognize developed subject expertise.

 

Using a combination of curriculum mapping, analysis of five years of instruction and consultation data, and review of enrollment data, we will move to a model where small teams of librarians cover broad ranges of subject areas.

 

16:30 - 17:00

(4:30 - 5:00 PM)

Day One Closing Remarks

Mary-Jo Romaniuk, Bella Gerlich

End of Day One - Dine around or dine on your own

Day Two

Friday, 18 August 2023

Morning Coffee Service Provided by Springshare

8:30 - 9:30


Abstract and Goals:

This session will present the program logic model as a method for planning, implementing, monitoring, and evaluating library services and then discuss how such a program logic model supports evidence-based decision-making, value reporting, and advocacy for resources.

 

The session will draw on the presenter’s experience with using and also training others to use logic models, using the W.K. Kellogg Foundation Logic Model as an exemplar. 

 

Three cases will be presented to demonstrate the power of a program logic model in different situations:

 

The presentation will conclude with discussion of challenges faced in using program logic models for planning and evaluation and approaches to overcoming barriers to successful implementation.

 

After the session, attendees will be able to:


Abstract:

With an estimated 16% of the population having a disability, it is vital for all types of libraries that accessibility and disability inclusion are at the forefront of their work. One important step libraries can take towards this goal is evaluating electronic resources, websites, and other digital content for accessibility, particularly when paying for that content. Not only is this essential so that all community members can access the library, thereby maximizing the library’s impact and value to the community, but it is also crucial in ensuring that these licensed or purchased platforms are widely used. This session will give attendees the tools necessary to start evaluating electronic resources and other digital content to best allocate resources and to accelerate use of these materials.

 

This workshop will start with an introduction to the topic with the facts and statistics needed to make the case for integrating accessibility evaluation into the assessment and evaluation work done at your institution. After learning about the accessibility standards that apply to digital content, participants will learn how to interpret one of the standard tools used to track the accessibility of electronic products, the Voluntary Product Accessibility Template (VPAT). VPATs are used by many vendors of electronic resources to share information about the accessibility of their products and to track changes to both the platform and the related accessibility. Understanding these VPATs is the first step in creating an accessibility evaluation workflow that fits each library’s needs.

 

Next, participants will learn how to evaluate the accessibility of digital content and the accuracy of the VPATs. Using freely available automated testing tools and questionnaires developed by librarians for librarians, we will learn how to verify claims of accessibility and ensure that content is inclusive and usable for all members of the community. The workshop will also cover how to combine these components to complete a comprehensive evaluation of the digital content purchased and licensed by the library and offer examples of how libraries have used this information to increase the value they offer to their patrons. The goal of this session is to leave participants with the tools necessary to critically evaluate their digital collections and increase the impact of their subscriptions.

 

Outcomes:


9:40 - 10:40


Abstract

Introduction

Over the past two decades, scholarly publishing has shifted substantially towards Open Access (OA), where research publications are made freely available to all rather than placed behind paywalls to be bought back by university libraries. Large funding bodies such as NWO and the ERC now require all output of funded research projects to be OA, and many universities in the Netherlands and across Europe have formulated similarly ambitious goals. For researchers, this has led to new and additional considerations when deciding where to submit their research: traditional impact-based measures of journal quality now have to be weighed against the availability (and affordability) of OA options. For libraries and faculties, it means navigating the many ways of financing OA, which differ between journals and publishers. This requires balancing questions of academic values, such as scientific quality and equity, with an understandable desire for maximal financial efficiency.

Of course, researcher choices and library (financial) policies feed into each other. One factor complicating this feedback loop is that neither side is completely transparent to the other. On the one hand, many OA-related costs are invisible to the researcher (for instance, the money spent by library consortia on discount deals with publishers). On the other, publication data such as that obtained from a CRIS (current research information system) does not include any details on how these publications were financed, making it hard to see to what extent financial considerations influenced the researchers' choice of publication venue.

As a result of these factors and more, it is difficult for libraries and the research communities they support to gain insight into the true cost of Open Access, and, accordingly, the extent to which 100% OA is attainable at a particular institution.

In this presentation, we show how a more complete and insightful picture of Open Access publishing practices and costs may be obtained by combining data from different sources, particularly financial monitoring, CRIS publication data, and library consortia discount data. Based on two recent case studies, we show how the new insights gained in this way are being used to underpin and finetune Open Access policies at Utrecht University.

The presentation will be particularly relevant to staff of research libraries with an interest in academic publishing. However, we are aiming to keep it mostly jargon-free and accessible to anyone who would like to be inspired to use their existing data monitors in more creative ways.


Case study 1: 'APCs in the wild'

Each year the library identifies invoices for publication costs in the university invoice management system (SAP) and records the corresponding publication data. Combined with other data, this enables us to connect the university's output to money streams in different ways. This project has led to more insight into the 'unofficial' ways researchers fund their OA articles and the largely unmonitored money streams this involves. There is potentially a lot to gain in terms of financial efficiency by centralizing these costs.


Case study 2: OA analysis at the Faculty of Science

By combining publication data from the CRIS with OA status from Unpaywall, it was possible to map the OA status of publications at the UU Faculty of Science. A deeper dive into this publication data, together with SCOPUS data, allowed us to differentiate between corresponding authors affiliated with UU and those who are not. This approach showed that UU corresponding authors publish 23% more Hybrid-Gold Open Access than non-UU authors. These findings, combined with data on payment ('APCs in the wild'), suggest that OA is the preferred route when financial support is available.
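The kind of join described in this case study can be sketched in a few lines. The Unpaywall endpoint and its `oa_status` field are as documented by Unpaywall (the `email` query parameter is required by their API); the aggregation step and example values below are illustrative assumptions, not UU's actual pipeline:

```python
import json
import urllib.request
from collections import Counter

def oa_status(doi: str, email: str) -> str:
    """Look up a DOI's Open Access status via the Unpaywall REST API.

    Possible values include 'gold', 'hybrid', 'green', 'bronze', 'closed'.
    """
    url = f"https://api.unpaywall.org/v2/{doi}?email={email}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["oa_status"]

def oa_share(statuses):
    """Percentage share of each OA status across a set of publications."""
    counts = Counter(statuses)
    return {s: round(100 * c / len(statuses), 1) for s, c in counts.items()}

# Illustrative values only -- in practice the statuses would come from
# calling oa_status() on each DOI exported from the CRIS.
print(oa_share(["hybrid", "gold", "closed", "hybrid"]))
```

Splitting these shares by whether the corresponding author is locally affiliated (e.g. via SCOPUS author data) then gives the kind of UU vs. non-UU comparison the case study reports.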


10:50 - 11:20

Coffee Break - Provided by Utrecht Library or Sponsor

11:30 - 12:00

Closing Remarks / End of Program

Mary-Jo Romaniuk, Bella Gerlich, Matthijs van Otegem

12:00 - 13:00

(12:00 - 1:00 PM)

Lunch on your own

13:30 - 15:30

(1:30 - 3:30 PM)

Visit Special Collections, Utrecht Univ. Library (Guided travel by public transport)

16:00 

(4:00 PM)

Tour of Utrecht Public Library (Meet at the Public Library)