Learning Outcome 1 Results

Learning Outcome 1

Build knowledge of human institutions, sociocultural processes, and the physical and natural world through the study of the natural and social sciences, technologies, mathematics, humanities, histories, languages and the arts. Competence will be demonstrated for the foundational information in each subject area, its context and significance, and the methods used in advancing each.


2020-2021 University-Wide Results

Figures from Assessment

The results below show how students scored on the three criteria for knowledge in Learning Outcome 1.

The GER and Core committee completed the assessment during regularly held meetings in Fall 2021. We shared this report at a Spring 2022 Faculty Senate meeting.

GERC Assessment Report_AY 2021-2022.pdf

Summary

What We Learned and Where We Go From Here

  1. Overall Takeaway/Strength: As depicted in the graph on the previous page, GERC faculty scorers found that the majority of students who participated in this assessment provided responses that were complex in terms of significance and clarity. Application offers the most significant opportunity for growth in LO1. All GER faculty should continue to highlight how their course concepts can be applied.

  2. Faculty engagement with the assessment was not as high as we wanted, and better methods for communicating with GER faculty need to be developed. We aim to involve 10% more faculty in the assessment process in the future. Our plans for future outreach include personal invitations, sending representatives to college-level meetings, and maintaining a consistent approach.


  3. Future Plans: Faculty involved in the GER assessment process will meet over the summer to workshop the results and determine how best to communicate them back to the faculty so all of us can improve our classes. After assessment is completed on all four learning outcomes, we will have a summit year in which we will discuss faculty engagement in the assessment process.

Faculty Participation

  • A faculty-oriented Learning Outcome 1 workshop was held on August 17 and 18, 2021. Prior to the workshop, de-identified student responses were shared with the participating faculty. The workshop oriented faculty to those responses.

  • A total of 60 faculty participated in the Learning Outcome 1 assessment, while 182 courses did not participate, a participation rate of about 26%.

2019-2020 Pilot Results

Figures from Pilot

The results below (Figure 1) show how students scored on the three criteria for knowledge in Learning Outcome 1. Scoring was found to be consistent among the three faculty readers. These results will be used in engaging Learning Outcome 1 faculty in August 2020.

Our workshops will be a space for faculty to:

  • Observe sample scored responses and the rubric.

  • Identify strengths and areas for improvement in the samples.

  • Share new, modified, or trusted learning designs.

The results of these workshops will be included in next year's report.

Summary of Pilot

The subcommittee created four goals for the assessment pilot.

    • Determine whether the rubric and signature assignment are aligned with the learning outcome.

    • Reflect on the process.

    • Affirm value in the process.

    • Envision next year's goals.

We generated the following questions to write this report:

Are the rubric and assignment, connected to the Learning Outcome, helping us collaborate in generative and meaningful ways?

  • YES

What moments in the assessment this year were challenging?

  • Implementation and retreat participation were challenging because of COVID-19 disruption.

  • The form designed to correlate answers did not produce "just in time" snapshot results; the criteria in Figure 1 had to be tabulated by hand.

What helps us ensure an authentic student response for learning?

  • Faculty consensus on the signature assignment: "peer" needs to be more specific.

  • Collaborating with eCampus and the registrar to limit the amount of communication sent to students outside the course context.

  • Faculty consensus on how to value the signature assignment as complete/incomplete.

What should we do differently or stress as we scale the process in August?

  • Design authentic, multiple peer profiles

  • Double-check that the form organizes the data in the most efficient way.

  • Design a landing page for student questions, support, and engagement

Narrative of 2019-2020 General Education Assessment Outcome 1 Pilot

PEOPLE, PARTS, and PLACES: Six faculty from the General Education Requirements Committee met bi-monthly to design both a rubric and a signature assignment for Learning Outcome 1. Our signature assignment asks students to explain a knowledge concept to a peer. We can assess the clarity, significance, and application or methods of the student's explanation. Students enrolled in multiple sections of Writing 111X, [ 200X], and [math course] were given the signature assignment in Spring 2020. At a three-hour virtual workshop in May, this same faculty team discussed sample sets from these courses. Independently, Drs. Zoë Jones, Sarah Stanley, and Latrice Bowman then analyzed 100 random samples and tabulated their scores. In late June, we scheduled a socially distanced retreat to determine whether our design met our primary goal: the rubric and signature assignment were found to be aligned for university-wide use in fall 2020.

PEOPLE and PRODUCTS: This report reflects fewer General Education courses than initially anticipated. COVID-19 interrupted the spring semester, and only 50% of the pilot faculty were able to pilot the design. One faculty member on our team also left UAF for a job elsewhere, and two other members were unable to make the June retreat. Figures 1, 2, and 3 were created by Dr. Bowman, who reworked the scoring data into a visual representation. Future results will use a form designed or approved by an eCampus designer.

PROCESSES and PURPOSES: At the retreat, we affirmed the strength of our assignment. We noticed how it can elicit from some students a "nuts and bolts" response. That is, some answers listed terms or restated basic definitions rather than offering a more extended explanation or one that anticipates a peer without the generalized knowledge of the course. In general, we named a pattern of response that was more institutionally- or instructor-oriented than peer-oriented. We agreed that the student's relationship to a "peer" needs to be more defined, especially because we want to learn from how students share General Education knowledge with their communities.

PROCESSES AND PEOPLE: Our design principle is to limit, as much as possible, additional communication to individual faculty or students; the process should remain as automated as possible in its procedures for students and General Education faculty, relying on email to trigger each step. We reached out to and included in our retreat two eCampus collaborators; thank you, Nathan Feemster and Madara Mason. We also agreed to continue this process of coming together and reflecting on the big ideas behind General Education.

POWER and PEOPLE: We must reach full consensus as a faculty to grade these signature assignments as complete/incomplete. Students automatically receive full points once they turn in their response on time, or no points if the assignment is ignored, late, or missing. Transparency in grading practice benefits all parties and helps faculty and students stay focused on acknowledging the work that goes into participating in a healthy and thriving learning ecology.

FUTURE: To prepare for the scaling of Learning Outcome 1 in the fall, the newly appointed faculty fellow in assessment & accreditation is collaborating with campus units to create multiple peer profiles, improve automation of data collection, and improve communication for multiple audiences, with a focus on a landing page that tells students why their participation is appreciated and contributes to their success. The process is here.