This evaluation report is intended for senior-level stakeholders and funders of NYU career services, including:
Linda G. Mills, Vice Chancellor for Global Programs & University Life
Gracy Sarkissian, Executive Director, Wasserman Center
Deans and Executive Directors of individual NYU school career and professional development services
Program funders
Evaluation issues
Usability: What's working well in our design? Are learners taking the appropriate amount of time to complete activities? Where are learners getting stuck?
Conceptualization: Is the design likely to achieve our stated learning goal of improving business communication and our stated outcome of increasing interview offers?
Learner effectiveness: What knowledge, skills, and attitudes did learners gain? Has that knowledge transferred? Have learners applied what they learned?
Program effectiveness: Are more students participating in our training? Did career outcomes for international students improve?
Description of the instruction being evaluated
Our design consists of two parts. The first is an e-learning course covering four modules closely tied to communication in the job-search process for international students. The second incorporates AI technology both to evaluate students' learning outcomes from the course and to help them practice by providing constructive feedback.
Methods:
We took several approaches to measure usability:
We conducted an observational usability study with test learners as a formative evaluation. We recorded the sessions so we could refer back to them and identify patterns in behavior.
We conducted semi-structured interviews, asking post-test questions as a formative means of understanding learners' reactions to the tool. Example questions include "What was the experience like interacting with the AI?" and "Did you feel the effort to craft the messages you were prompted to write was too much, too little, or just right?"
We used several approaches to evaluate learning outcomes:
Our learning design incorporates quizzes throughout the course and results summaries in the AI practice portion for formative evaluation.
Mentors provide feedback to students after mock interview sessions and at networking events.
We used Kirkpatrick's Training Evaluation Model to evaluate learning outcomes across its four levels: students' reactions, learning, behavior, and results. In our design, for example, the post-test interviews address reaction, the quizzes address learning, mentor feedback after mock interviews addresses behavior, and international students' career outcomes address results.
Participants:
All three of our observation and interview participants are international graduate students at NYU who want to find jobs in the U.S.
We conducted the tests and interviews in person and via Zoom.
Instruments used to gather data:
We entered usability results and feedback into a shared Miro board and coded our data for themes.
We tracked and analyzed qualitative results in shared Google Sheets.
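As an illustration of how the coded data could be tallied, here is a minimal Python sketch; the file name and column name are hypothetical stand-ins for our actual spreadsheet export:

```python
import csv
from collections import Counter

# Hypothetical CSV export of the coded observations: one row per
# participant utterance, with a "theme" column assigned during coding.
with open("usability_codes.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Tally how often each theme appeared across participants.
theme_counts = Counter(row["theme"] for row in rows)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} utterance(s)")
```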
Feature #1: E-learning course
Method #1: Affinity Map
After the interviews, we compiled an affinity map in Miro, analyzing and categorizing the data into several themes to surface insights and inform the changes we need to make to our design.
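Conceptually, the affinity map reduces to a mapping from theme to supporting observations; the themes and utterances below are invented purely for illustration:

```python
# Toy affinity map: themes mapped to the participant utterances that
# support them (all entries invented for illustration).
affinity_map: dict[str, list[str]] = {
    "visual design": ["The pages feel very text-heavy."],
    "interactivity": ["I wasn't sure the flip cards were clickable."],
    "navigation": ["Where do I go after finishing the quiz?"],
}

# Each theme then motivates one or more concrete design modifications.
for theme, utterances in affinity_map.items():
    print(f"{theme}: {len(utterances)} supporting utterance(s)")
```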
Limitations:
Due to limited time and the absence of subject-matter experts (SMEs), we did not develop a full prototype with complete content, which caused some confusion while users interacted with the prototype.
Without complete content in the prototype, users could not have the full learning experience of actually reading and comprehending the materials.
Given more time, we would consult an SME and complete a prototype with full content, so that our evaluation could better reflect not only the usability but also the learning materials themselves.
Modifications:
Incorporate visuals into the design
Simplify pages further
Incorporate additional interactive scenarios where possible
Incorporate more real-world examples and mock scenarios
Incorporate more knowledge checks
Method #2: Video Observation
Using video observation, we watched how participants walked through and learned from our e-learning course and interacted with the AI evaluation tool, and we coded their utterances while they used our design.
Limitation: As noted under Method #1, limited time and the absence of SME support meant the prototype lacked complete content, which caused some confusion while users interacted with it.
Modifications:
Rebuild the interactive scenario in the "follow-up conversation" module.
Add instructions for the interactive flip cards in the "networking" module.
Add an introduction explaining what a deliverable is and how it works.
Refine the wording on some buttons.
Feature #2: AI practice tool
Method: Video observation + Interview
Limitations:
Without engineering support, we could not build a working prototype linked to ChatGPT, though we see this as a promising business opportunity.
We have limited knowledge of AI, and the prompts we wrote might not be the most effective or efficient ones.
Modifications:
Add an introduction and prompt templates, since users did not know how to use the AI appropriately and need help overcoming the barrier to entry (a sketch of such a template follows this list).
Support more personalized use of the AI, which users said they hoped for.
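To make the prompt-template recommendation concrete, below is a minimal sketch using the OpenAI Python client as one possible backend. The system prompt wording, rubric criteria, and model choice are our own illustrative assumptions, not a tested implementation:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Illustrative prompt template; the coaching rubric here is an
# assumption, not finalized course content.
SYSTEM_PROMPT = (
    "You are a career-communication coach for international graduate "
    "students. Evaluate the student's draft message for tone, clarity, "
    "and professionalism, then give two or three concrete, constructive "
    "suggestions for improvement."
)

def get_feedback(student_message: str) -> str:
    """Send a student's draft to the model and return its feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(get_feedback("Hi, I saw your profile and want a referral. Thanks."))
```

A template like this could also be surfaced to learners directly, so they can see what a well-scaffolded request for feedback looks like before writing their own.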
Based on our analysis of the e-learning course and the AI evaluation tool, we offer several recommendations for improving the overall learning experience.
First, we recommend incorporating more visuals into the course design and simplifying pages further to enhance learners' engagement and comprehension. Interactive scenarios should be added where possible to encourage active participation in the learning process, and real-world examples and mock scenarios would make the learning experience more relatable and practical.
However, due to limited time and access to subject-matter experts (SMEs), we were unable to develop a complete prototype with full content. As a result, it was difficult for learners to have the complete learning experience of reading and comprehending all the materials. Given more time, we recommend engaging an SME to help complete the prototype; in that way, our evaluation could better reflect not only the usability but also the learning materials themselves.
Regarding the AI evaluation tool, we suggest incorporating it into the e-learning course's evaluation process. A more comprehensive introduction, such as a recorded demo, would help learners understand how to use the tool effectively. To support comprehension, a hint feature could be added to the chatbox as scaffolding for the learning experience. It would also be useful to save the AI's feedback so that learners can revisit and review it; a sketch of one way to do this appears below. Finally, it is important to ensure optimal timing, clear instructions, and a suitable pace of input in the chatbox.
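As one way to realize the save-the-feedback suggestion, here is a minimal sketch; the storage format and field names are our own assumptions:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

HISTORY_FILE = Path("feedback_history.jsonl")  # hypothetical local store

def save_feedback(student_id: str, prompt: str, feedback: str) -> None:
    """Append one AI feedback entry so the learner can revisit it later."""
    entry = {
        "student_id": student_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "feedback": feedback,
    }
    with HISTORY_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def load_history(student_id: str) -> list[dict]:
    """Return a learner's saved feedback entries, oldest first."""
    if not HISTORY_FILE.exists():
        return []
    with HISTORY_FILE.open() as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if e["student_id"] == student_id]
```

In the tool, save_feedback would run after each AI response, and load_history would populate a "past feedback" view for review.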
We acknowledge some limitations that remain to be addressed. Without engineering support, we could not build a prototype linked to ChatGPT, though we see this as an excellent business opportunity for future development. Furthermore, our knowledge of AI is limited, and the prompts we wrote might not be the most effective or efficient ones; continuing to develop that knowledge will be essential to optimizing the prompts and enhancing the tool's effectiveness and efficiency.