The audience for this evaluation plan is senior-level stakeholders and funders of NYU career services. They include:
Linda G. Mills, Vice Chancellor for Global Programs & University Life
Gracy Sarkissian, Executive Director, Wasserman Center
Deans and Executive Directors of individual NYU school career and professional development services
Program funders
Our evaluation will address questions in four areas:
Usability: What's working well in our design? Are learners taking an appropriate amount of time to complete activities? Where are learners getting stuck?
Conceptualization: Does the design support our stated learning goal of improving business communication and our intended outcome of increasing interview offers?
Learner effectiveness: What knowledge, skills, and attitudes did learners gain? Has that knowledge transferred? Have learners applied what they learned?
Program effectiveness: Are more students participating in our training? Have career outcomes for international students improved?
We will take several approaches to measure usability:
We will conduct an observational usability study with test learners as a formative evaluation of usability. We will record the study sessions so we can review them later and identify patterns in learner behavior.
We will conduct semi-structured interviews by asking post-testing questions as a formative means of understanding learners' reactions to the tool. Example questions include "What was the experience like interacting with the AI?" and "Did you feel the effort required to craft the messages you were prompted to write was too much, too little, or just right?"
We will use several approaches to evaluate learning outcomes:
Our learning design will incorporate quizzes throughout, along with results summaries in the AI practice portion, for formative evaluation.
Mentors will provide feedback to students after mock interview sessions and at networking events.
We will use Kirkpatrick's Training Model to evaluate learning outcomes across students' reactions, learning, behaviors, and results.
We will enter usability results and feedback into a shared Miro board and code our data for themes. We will track and analyze quantitative results in shared Google Sheets.
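To summarize the coded data for stakeholders, theme frequencies can be tallied directly from a spreadsheet export. Below is a minimal sketch, assuming a hypothetical CSV export with a "theme" column; the file name and column names are illustrative, not part of our actual tooling.

# Tally how often each usability theme was coded, from a hypothetical
# CSV export of the coded Miro data (column names are assumptions).
import csv
from collections import Counter

def tally_themes(path: str) -> Counter:
    """Count the number of observations coded under each theme."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["theme"].strip().lower()] += 1
    return counts

if __name__ == "__main__":
    for theme, n in tally_themes("usability_codes.csv").most_common():
        print(f"{theme}: {n}")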
This evaluation plan is both formative and summative. We will conduct user observation to ensure the product provides a smooth, easy interaction for users. To assess learning outcomes, our questions are based on Kirkpatrick's Training Model, which evaluates four levels: reaction, learning, behavior, and results. By answering these questions, we will make improvements to our learning design and content, and measure program effectiveness and impact.
We will launch in beta to 60 initial users and use this period to gather feedback. We will use Asana to report bugs and any results below benchmark, and take an agile approach to implementing improvements to our learning design. We will launch for general availability following this period of initial testing. After receiving our 6-month survey results, we will submit a formal report on our findings to our senior stakeholders and funders.
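The below-benchmark check described above can be automated before results are filed in Asana. Below is a minimal sketch; the metric names and threshold values are illustrative assumptions, and actual benchmarks will be set before the beta launch.

# Flag beta metrics that fall below benchmark (all names and values here
# are illustrative assumptions, not our finalized benchmarks).
BENCHMARKS = {
    "quiz_avg_score": 0.80,        # minimum acceptable mean quiz score
    "task_completion_rate": 0.90,  # share of learners finishing each activity
    "satisfaction": 4.0,           # mean post-session rating on a 5-point scale
}

def flag_below_benchmark(results: dict[str, float]) -> list[str]:
    """Return metrics that fall below their benchmark and should be filed in Asana."""
    return [m for m, floor in BENCHMARKS.items() if results.get(m, 0.0) < floor]

if __name__ == "__main__":
    beta_results = {"quiz_avg_score": 0.76, "task_completion_rate": 0.93, "satisfaction": 4.2}
    print(flag_below_benchmark(beta_results))  # -> ['quiz_avg_score']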