A Response to "From Design of Everyday Things to Teaching of Everyday People"

posted Jun 27, 2017, 11:08 PM by Austin Bart

I left a comment on a post on Mark Guzdial's blog, and I wanted to repost it here.

Original Post

My response:

Normative, ipsative, and criterion-based – how do we measure our success rate in CS Ed? Here, a criterion-based metric has been proposed: no more than 5% of the class can fail. When writing my dissertation, I compared our Computational Thinking course to other courses in the “Quantitative and Symbolic Reasoning” bucket, for a normative comparison. When you wrote your “Exploring Hypotheses about Media Comp” paper, you used an ipsative assessment to improve your CS1’s dropout rate (“… a course using Media Computation will have a higher retention rate than a traditional course.”).

It’s hard to argue that any one of these approaches is better than the others. Criterion-based measures are usually formed by looking at many normative distributions, so the two aren’t that different in practice. Ipsative measures can be unsatisfying and insufficient for making honest comparisons. And a normative view doesn’t help much if we assume that most of our introductory courses aren’t doing a very good job.

But questions of measurement aside, does 5% feel like the right number? Currently, we have roughly a 10% DFW rate in our CT course. I think we’re nearing the bottom of what we can reasonably do to bring those DFW students to success – most of them stopped working for reasons outside my control, or had medical crises. I’m not sure I could squeeze out another 5% without really lowering my standards for success. And that’s in a non-majors class, where my expectations are very different from what I want out of CS majors.

Ultimately, I think my big reaction is that assessment is really, really hard (e.g., Allison’s work) and we aren’t good enough at it yet to really be able to micromanage our pass/fail rates too much. Whatever arbitrary number we choose as success is tied very heavily to how we measure success in the first place.

I am now Dr. Bart

posted Mar 25, 2017, 1:07 PM by Austin Bart

Last Wednesday, I passed my final defense. I am now Dr. Bart. Onwards and upwards!

Content Area Grade vs. Course Component Grade

posted Nov 13, 2016, 3:20 PM by Austin Bart

Something bothered me when designing our last syllabus. The final product looked something like this:

 Course component    Percent of total course points
 Final Project       30%

This table breaks down the apportioning of points by course component. But this doesn't reflect the goal of my teaching. I am not concerned with how my students do "on classwork". I want to know whether they understood the actual topics that I derived from my instructional analysis. Ideally, I would apportion percentages by content area:

 Content Area       Percent of total course points
 Data Structures    20%
 Social Impacts     30%

At some point over the semester, my students must convey to me that they sufficiently mastered the concept of how programs can loop over a list, and then they should receive some points. I want them focused on completing the topics, not "classwork".

This thought was triggered again tonight when I saw a presentation complaining about how mean grades can be misleading. It offered an example similar to the following:

 Assignment       Grade
 Assignment 1     95%
 Assignment 2     95%
 Assignment 3     95%
 Assignment 4     70%
 Assignment 5     95%
 Final Average    90%

They argue that, clearly, Assignment 4 was an outlier that should have been removed from the final grade calculation. This student consistently performed above 90% with only a single exception, so something must have gone wrong that week. But under the first syllabus I described, perhaps Assignment 4 was the activity on loops. It's not surprising that a student would do poorly on that subject, even while doing well on much of the rest of the course content. And yet that topic is weighted as just one small piece of the final classwork grade. If it really were the lesson on loops, the student should find it very concerning that they don't understand the material, since it's so important! I would like the grading scheme to reflect that importance.

Obviously, this should be balanced with mastery-based grading: give them as many attempts as they need, within a reasonable amount of time, and give them the lesson in as many different forms as they need. These thoughts aren't about punishing students for not performing perfectly. They're about getting students to put energy where I feel it is most valuable.
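To make the contrast concrete, here is a small sketch. All of the content areas, weights, and mastery scores below are hypothetical numbers for illustration, not the course's real scheme; the point is only how topic-level weighting differs from a flat per-assignment mean.

```python
# Hypothetical content-area weights (summing to 1.0).
content_weights = {
    "Data Structures": 0.20,
    "Loops": 0.30,
    "Social Impacts": 0.30,
    "Plotting": 0.20,
}

# A student's demonstrated mastery of each area (0..1 scale),
# regardless of which assignment or project provided the evidence.
mastery = {
    "Data Structures": 0.95,
    "Loops": 0.70,
    "Social Impacts": 0.95,
    "Plotting": 0.95,
}

# A flat mean treats the weak area as just another data point...
flat_mean = sum(mastery.values()) / len(mastery)

# ...while content-area weighting lets the important topic (Loops)
# pull the grade down in proportion to its importance.
weighted = sum(content_weights[a] * mastery[a] for a in content_weights)

print(round(flat_mean * 100, 1))
print(round(weighted * 100, 1))
```

The exact numbers don't matter; what matters is that the weight attaches to the topic, not to whichever piece of classwork happened to assess it.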

Harry Potter and the Tonal Analysis

posted Sep 24, 2016, 2:39 PM by Austin Bart   [ updated Sep 24, 2016, 2:54 PM ]

Harry Potter and the Sorcerer's Stone has, roughly, 5746 sentences, spread out over 17 chapters. I know this because I have recently been trying out a new style of English dataset for the CORGIS project. Previously, we attempted to create datasets for English majors by analyzing a corpus of books and computing some statistics (difficulty, sentiment analysis, etc.). This new approach looks at a single book instead, which gives us the ability to compute some new statistics.

One of these statistics is particularly interesting: tone analysis, using the IBM Watson Tone Analyzer on Bluemix. Essentially, this service gives you a variety of tonal information for a sequence of sentences. For instance, you can get the estimated "Sadness" of a sentence as a decimal value between 0 and 1, along with numbers for "Joy", "Fear", "Anger", and "Disgust".

So what happens when you put in the text of the first Harry Potter book?

Unfortunately, that's not particularly helpful. The results of the tone analysis fluctuate so much over just a few sentences that we end up with a mess of lines. I believe the graph only looks purple because that tone was drawn last.

It is fairly easy to create a rolling average with Pandas+SciPy+NumPy. This helps tease out a more useful graph, especially when the tones are standardized.
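The smoothing step looks roughly like this. The tone values and the window size here are placeholders (the real frame has one row per sentence, 5746 of them), but the standardize-then-roll pattern is the same:

```python
import pandas as pd

# Placeholder tone scores, one row per sentence; the real data has
# thousands of rows with columns for Joy, Fear, Anger, Disgust, etc.
tones = pd.DataFrame({
    "Joy":  [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.2, 0.9],
    "Fear": [0.8, 0.1, 0.7, 0.2, 0.9, 0.1, 0.8, 0.2],
})

# Standardize each tone (zero mean, unit variance) so that the
# different tones are plotted on a comparable scale...
standardized = (tones - tones.mean()) / tones.std()

# ...then smooth with a centered rolling average to tease out trends.
# A real book would use a much larger window than 3 sentences.
smoothed = standardized.rolling(window=3, center=True).mean()
print(smoothed.round(2))
```

Note that a centered rolling window leaves NaNs at both ends of the series, which is worth remembering when lining peaks up against sentence numbers.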

Suddenly, we can actually see trends! And those trends can be matched up against points in the book (Spoilers!):
  • Around sentence 1000, we see Hagrid confront the Dursleys and deliver the famous "Yer a wizard, Harry". Hagrid's pretty angry in this scene.
  • Around sentence 2000, Harry is meeting Ron and hanging out on the Hogwarts Express - high Joy!
  • And then around 2500, we head down to the dungeons with Snape and see a massive surge in Disgust.
  • Sentence 3000 is encountering Fluffy and then the Troll, which I suppose accounts for the surge in Disgust and Anger.
  • Briefly after that, near 3500, we can see the spike in Joy thanks to Harry winning the Quidditch match.
  • 4500 puts us in the Forbidden Forest - plenty of Fear and Anger there, apparently.
  • I believe that the surge right after 5000 represents the adventure down to the mirror.
  • And the last few hundred sentences of Joy show the ending of the book.
Now, it's not a perfect match-up, and there are a lot of other trends left to explain here. But I did think it was interesting to see how we could pair up important events with peaks and troughs. Unfortunately, all the post-processing I had to do puts this kind of analysis out of the reach of most of our students without some more help. I have to think carefully about how to make this kind of data available in a way that it's still useful. Dataset creation is tricky business!

Abstract Variables and Abstracting Cows

posted Jun 16, 2016, 2:16 PM by Austin Bart   [ updated Jun 16, 2016, 2:35 PM ]

In the Computational Thinking course, we talk about "Abstraction" as the concrete representation of real-world things using computational tools (although there are other definitions, this is what we focus on). Variables are one way to represent things, but there are others. Lately, I have been considering how different variables have different levels of abstraction, and the implications for pedagogy.

Let's walk through an example. Imagine we have a herd of cows in a field.

This herd could be represented quantitatively using the following table.

 Name       Weight
 Bessie     600
 Abigail    800
 Average    716

At the bottom row of this table, we see an "average" cow. There is no "average cow" in our herd with a weight of 716 pounds. But we understand this idea of an "average cow" as an abstraction.

Now, how would we represent this herd in a BlockPy program?
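In Python, that representation might look like the following. The 748-pound third cow is hypothetical: the table above shows only Bessie and Abigail, but its average of 716 implies the herd has more members, so one is invented here to make the numbers work out.

```python
# The herd, abstracted into a list of weights in pounds.
# Bessie is 600, Abigail is 800; the 748 is a hypothetical third cow,
# included so the herd's average matches the 716 from the table.
my_cows = [600, 800, 748]
```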

We now have a variable that abstracts our herd of cows into a list of numbers, which can be easily manipulated. By abstracting our cows, we reduce unnecessary information (their height, their name, their genetic code, how many times they've mooed today, etc.). 

We can create a variable that represents our imaginary average cow.
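A sketch of that variable in Python, using built-in functions (the 748-pound third cow is hypothetical, chosen so the average comes out to the table's 716):

```python
# sum and len are built-in functions; together they compute the mean.
my_cows = [600, 800, 748]  # 748 is a hypothetical cow, matching the table's average
average_cow = sum(my_cows) / len(my_cows)
```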

We cannot see the value of the "average_cow" from this perspective, although we could print out its value. It is a more abstract variable, dependent on time and other pre-existing variables.

We used built-in Python functions to quickly compute that average, but in the CT course, we don't allow students to do so. Instead, they have to use a for-each loop. The necessary code to calculate the average cow weight would be:
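A minimal sketch of that loop-based version (the 748-pound third cow is hypothetical, chosen so the average matches the table's 716):

```python
# The herd: Bessie, Abigail, and a hypothetical 748-pound cow so that
# the average comes out to the table's 716.
my_cows = [600, 800, 748]

# Accumulators, created and initialized to zero before the loop.
total_weight = 0
total_cows = 0

# The for-each loop visits each cow's weight in turn.
for a_cow in my_cows:
    total_weight = total_weight + a_cow
    total_cows = total_cows + 1

# The abstract "average cow", computed from the accumulated totals.
average_cow = total_weight / total_cows
```

This uses exactly the five variables enumerated below.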

This code has 5 variables:
  1. my_cows
  2. average_cow
  3. total_weight
  4. total_cows
  5. a_cow

The variables "total_weight" and "total_cows" are similar to the "average_cow" variable, but they are at a higher level of abstraction than "my_cows" since they do not represent real physical entities.
  • How did we know to create "total_weight" and "total_cows"?
  • How did we know what to initialize them to?
  • How did we know to place them inside the loop to manipulate them?
And what about that "a_cow" variable? It represents each cow, but only over the entire course of the loop. It is not a specific cow until a specific time. To me, this represents an even higher level of abstraction than the other variables.
  • What is the relationship between "a_cow" and "my_cows"?
  • How do I mentally model this abstract cow?
  • How do I keep track of the cow over "time", when "time" is not immediately present in the program as written?
Looking at a completed program, I think many students are not able to recognize the varying levels of abstraction of these variables, and they struggle with writing this all from scratch.

Learning objectives in Computational Thinking

posted Mar 9, 2016, 7:46 AM by Austin Bart   [ updated Mar 9, 2016, 7:52 AM ]

In the past year, I've had a growing interest in formal methods of Instructional Design. One of my new favorite activities is writing learning objectives. I'm still developing that skill, but I thought it would be interesting to share some of the objectives I've written for the Computational Thinking course. As you can see, there are a large number of learning objectives, and I doubt we actually cover all of them in the CT course. There's still a lot to improve about the curriculum. There's also a lot to improve about these outcomes. I noticed while reviewing them that there isn't anything about students working with nested control structures, for instance.

  • Algorithms

    • Control Structures

      • State the control structures of algorithms

      • Differentiate between sequence, decision, and iteration [and function]

      • Explain the behavior of nesting control structures inside of each other.

    • Program State

      • Define the concept of a program's state

      • Describe how program state changes with respect to time

      • Trace program execution with sequence, decision, and iteration

      • Relate program state with the inputs and outputs of a program.

    • Decision

      • Identify the condition, body, else-body of a decision

      • Write a numerical condition

      • Write a Boolean condition

      • Write a condition with a logical AND or OR

      • Explain the behavior of commands inside a decision.

      • Solve a problem that requires decision

    • Iteration

      • Identify the iteration property, iteration list, and body of an Iteration

      • Write an iteration over a list of complex structured data

      • Write an iteration over a list of primitive data

      • Evaluate the name of the iteration property

      • Express implicit solutions in terms of explicit iteration commands

      • Explain the behavior of commands inside an iteration

    • Documentation

      • Explain the use, power, limitation, and danger of importing

      • Define documentation for an API

      • Identify the inputs and outputs for a given function of an API

      • Explain how to search for help on an API

    • Reporting

      • Predict the printed output for a string property

      • Predict the printed output for a string

      • Predict the printed output for an integer

      • Predict the printed output for a list

      • Differentiate between storing, importing, printing, and plotting

    • Create an algorithm to solve a problem involving real-world data.

  • Abstraction

    • Types

      • List the types of data used in this course (String, integer, float, Boolean, list, or dictionary)

      • Differentiate between simple/primitive types of data and complex types.

      • Differentiate between a property and a string

    • Real-world vs Code

      • Create an abstraction, including the real-world entity, stakeholder, properties

      • Instantiate an abstraction, including the values for each property

      • Code an abstraction as an integer, string, Boolean, list, dictionary

      • Interpret an abstraction to identify the real-world entity, potential stake-holders, limitations, and potential questions it could be used to answer

    • Property Creation and Manipulation

      • Create a value to print

      • Create a list of values for a plot

      • Append values to an empty list using iteration

      • Manipulate an existing property

      • Evaluate the name of a property for clarity, correctness, and lack of ambiguity

      • Calculate the result of an assignment involving constant expressions.

      • Calculate the result of an assignment involving self-referential expressions.

      • Estimate the result of an assignment involving a function call

      • Evaluate the benefit of storing the result of a call in a property for use

    • Dictionaries

      • Access an element of a dictionary

      • Access a dictionary inside of a dictionary

      • Identify the elements of a dictionary (name of key, type of value)

      • Differentiate between a dictionary and a list

    • Lists

      • Identify the type of a list

      • Explain the difference between an empty list and a non-empty list

      • Explain the purpose of the "append" function of lists

      • Identify how to access data within a list

    • Structured data

      • Outline the structure of data

      • Give the path to an element of structured data

      • Relate a path to the needed control statements to access that data (iteration, dictionary access)

    • Analysis

      • Criticize a data set for its limitations

  • Social Impacts

    • In Society

      • Identify the stakeholders, impact, conflicts, and pressures of a scenario.

      • Evaluate a conflict to identify an ethical action to take

      • Describe the pervasiveness of computing for a stakeholder

      • Identify stakeholder pressures and judge whether they are external or internal

      • Describe the privileges (or lack thereof) of stakeholders in relation to computing

      • Describe the privacy rights and expectations of stakeholders in relation to computing

      • Describe the powers of stakeholders over computing

    • In Your Life

      • Relate computing ethics to your profession.

      • Use ethics to decide on and defend your computing behavior

  • Data Science (Secondary objectives)

    • Plotting

      • Identify the scenarios where you would want to use a given type of plot

      • Differentiate between line plots, XY plots, scatter plots, histograms, bar charts, maps

      • Interpret a plot to answer a question

      • Evaluate a plot for its clarity and beauty

    • Statistics

      • Give descriptive statistics for a list of numbers (min, max, mean, std dev)

      • Explain the value of reporting the standard deviation alongside the mean.

    • Project Management

      • Develop a project iteratively

      • Debug a program that is not behaving as expected

      • Detect the problem in a program

      • Work on a project early, consistently, and in reasonable chunks.

How to Get into CS Teaching from CS

posted Oct 29, 2015, 9:28 AM by Austin Bart   [ updated Jul 20, 2017, 9:21 PM ]

I've decided to try and share some introductory materials for getting into teaching. Mark Guzdial announced that he'd be posting the syllabus for his CS Teaching Methods course, and recently two CS people have asked me how to get into CS Ed research. Clearly this is the hot time to be dragging people into the best CS subfield.

The fact is that there isn't too much published Pedagogical Content Knowledge about how to teach Computer Science. There's a lot of Pedagogical Knowledge out there, and as CS people we already have Content Knowledge. But the community is still establishing what's different about teaching CS compared to, say, teaching someone how to make Cigar Box Guitars.

So most of this page is devoted to good teaching links, plus some specialized CS information. At some point I may toss out my own model curriculum for a "CS Teaching Methods" course. There'd probably be a lot of theory (Instructional Design, Gagne's Events of Instruction, Motivation, SL Theory, Constructivism, Cognitivism, etc.) and a large amount of practice.

Hands down, the best resource I've ever found for growing as a teacher/researcher/learner is this website. Unfortunately, it went down a year or two ago, for some reason, but you can still access it through the Wayback Machine (I have also mirrored the content).
The site is divided into three sections:
  1. Theories
  2. Practice
  3. The authors' thoughts
I recommend starting in the first section if you are new to education research, and in the second if you are eager to start teaching and building curricula.

Learning Theories

There are many theories related to learning, teaching, motivation, etc. This site gives you a crash course in all of them. It is overwhelming at times, and learning how to navigate these theories is tricky. Over time, you should come to view learning theories as lenses. Different theories are valuable at different times. For example,
  • Situated Learning theory can really help with high level conceptual skills that benefit from a lot of social interaction
  • Constructivism is powerful for describing partially structured learning experiences meant to build on prior knowledge.
  • Behaviorism is great if you have to learn a habit.
There's a lot more out there than just Behaviorism, Constructivism, and SL Theory, although if you stick to reading CS Ed literature, you might think those are all there are. Don't get caught in that trap; explore as many theories as possible and see how they can help you. It's like Object-Oriented vs. Functional programming: different tools at different times.

SIGCSE Proceedings, Journals, Etc.

There is a tremendous amount of research out there about how to teach CS. As a community, we're still trying to distill the best of it out, but you should definitely read some of these proceedings and journals. In particular, look at:
  • ITiCSE
  • TOCE
  • ICER
  • Koli Calling
  • CCSC

The MUSIC Model of Academic Motivation

I do a lot of motivation research, and the MUSIC model is by far one of my favorite theories. I'm biased, since Dr. Jones is on my committee, but I asked him to be on my committee because it's a good theory, so there you go.

CS Teaching Tips

I don't agree with all the tips that are here. Some of them are quite interesting, some of them are quite useful. But definitely take them with a grain of salt.

Exploring Computational Thinking

Google has a really nice sample curriculum here for teaching Computational Thinking to K-12. I think it's a nice model of what we should be building as a community.

Get Out and Practice

There is no teacher like experience. You can read guides all day, but honestly you're better off getting real world experience as quickly as possible. Teaching is one of those domains where Situated Learning techniques really pay off. So work with a local high school to teach a class or something.

Motivation × Situated Learning

posted Sep 9, 2015, 8:55 AM by Austin Bart   [ updated Sep 9, 2015, 9:00 AM ]

As I write my prelim, I'm re-immersing myself in Situated Learning Theory. I've been obsessed with Instructional Design for the past few months, so I haven't thought much about this alternative educational theory. However, I started using it early in my graduate career for a reason, and those reasons still make sense. I'm using a particular evolution of SL Theory [1] that breaks down the learning process into four components: the context, the content, scaffolds, and the assessment. In the latest iteration of my prelim, I'm crossing this with the MUSIC model of academic motivation to explore how the components provide opportunities to motivate the learners - in particular, what the context and scaffolds bring in.

While working this out, I made the following table to organize my thinking. I thought it was an interesting break-down of where motivation could apply within a learning experience. There's a lot more not captured here in a learning experience, but this is what was useful for my research.
The four Situated Learning components, with an example of each:
  • Context: "Game Design"
  • Content: "For Loops"
  • Facilitations: blocks-based environment, variable explorer, teaching assistants, etc.
  • Assessment: exams, performance review, code review

Crossing each MUSIC component against these four:

eMpowerment
  • Context: Am I restricted by the context to explore what I want?
  • Content: Do I have control over the depth/breadth/direction of what I am learning?
  • Facilitations: Do these scaffolds let me accomplish things I couldn't? Do they artificially restrict me?
  • Assessment: Do I have the freedom to explore my limitations and successes in this assessment?

Usefulness
  • Context: Is this situated in a topic that's worth learning?
  • Content: Is the content itself worth learning?
  • Facilitations: Do these scaffolds let me learn enough to still be useful?
  • Assessment: Do I feel that performing well on the assessment is important?

Success
  • Context: Do I believe I can understand this context?
  • Content: Do I believe I can understand this material?
  • Facilitations: Do these scaffolds hinder me or help me?
  • Assessment: Can I succeed at this assessment?

Interest
  • Context: Is this situated in something I find boring/interesting?
  • Content: Is the material inherently interesting?
  • Facilitations: Do the scaffolds support my interest in the activity or detract from the experience?
  • Assessment: Am I interested in the assessment experience?

Caring
  • Context: Does the context give opportunities for the instructor and peers to show they care?
  • Content: Does the content give opportunities for the instructor and peers to show they care?
  • Facilitations: Do the scaffolds give opportunities for the instructor and peers to show they care? Do the peers and instructors have an opportunity to provide support themselves?
  • Assessment: Does the assessment give opportunities for the instructor and peers to show they care?

This is only a small part of what I'm doing in my prelim overall. For instance, there's all the fancy software I'm writing to fill out the scaffolds column. Still, I think this is an interesting way of looking at the components of the course.

[1] Choi, Jeong-Im, and Michael Hannafin. "Situated cognition and learning environments: Roles, structures, and implications for design." Educational Technology Research and Development 43.2 (1995): 53-69.

Year in Review: My NSF GRFP Fellowship Activity Report

posted Apr 15, 2015, 9:13 AM by Austin Bart   [ updated Apr 15, 2015, 9:14 AM ]

I'm extremely lucky to be funded by the NSF Graduate Research Fellowship Program. One of my obligations to the NSF in light of this is a write-up of my activities for the past year. I thought it would be interesting to post my write-up here. I apologize for its rough quality - I wrote it in one sitting!

This past year, I have been primarily engaged in creating a new undergraduate course on Computational Thinking meant for non-Computer Science majors, assisting Dr. Dennis Kafura. This course uses an innovative Data Science approach, requiring us to create an entire curriculum from the ground up. Students get a chance to grapple with data from their own discipline in a collaborative learning environment with an emphasis on mastery learning and authentic learning experiences. In Fall 2014, we ran a pilot course for 20 students from over 10 different majors; in Spring 2015, we have 40 students from over 20 different majors, including a 50/50 gender split. The main challenge of this course is scaling up the learning experience to eventually handle hundreds of undergraduates per semester, many of whom will struggle with the material and lack proper motivation. We address this by using innovative pedagogy such as collaborative student cohorts and active learning techniques. As we develop this course, we have an eye toward wide dissemination of the course components even as we scale up the material. In addition to a publication at ITiCSE on the pilot offering, we were written up in a local newspaper for the novel pedagogy and technology.

In order to help run this course, I have created a suite of new educational technology resources. The most valuable item is a new block-based programming environment that scaffolds students to write real Python programs; these programs can be converted back and forth freely between the block-based representation and the conventional text-based version. This environment also gives students interactive feedback as they complete assignments, guiding them through the material and reducing the workload on the instructors. It is embedded within a new online learning platform that gives students an immersive experience as they complete coursework: all of their grades and assignments are in one convenient web interface, and they can rapidly switch between classwork and textbook readings. From the instructor's perspective, this system tracks individual students' progress across a number of factors in order to keep track of a large number of learners. Finally, I have continued my work on the CORGIS project: the Collection of Real-time, Giant, Interesting dataSets. This project seeks to make high-quality Big Data sets available to novice learners through simple-to-use interfaces. At present, we have over three dozen ready-to-use libraries across a wide variety of subjects, empowering students in the Computational Thinking class to find a dataset that speaks to their interests and long-term career goals. Students use these libraries through the block-based programming environment and during their final project, where they must answer real-world problems using computational techniques and a dataset of their own choosing. All of these resources are free, open-source, and publicly available.

In addition to my work on this new course and its associated technologies, I have been involved in several smaller research projects and professional development activities. Working alongside my fiance in the Animal Science department, we have created an educational modification of Minecraft meant to organically teach Animal Science concepts ("AnimalScienceCraft") aligned with FFA standards for middle schoolers - although still in the early stages and seeking funding, we have already presented this research and garnered a lot of positive interest. I have committed many hours to supporting and hosting events for the Computer Science department's Association for Women in Computing, in my capacity as an officer of the organization - this work is critical to my on-going commitment to promoting women in computing. As a senior graduate student, I have mentored two of my fellow graduate students and two undergraduate students as they conduct research within my ken. At one of the largest conferences in my field (SIGCSE'15, the Special Interest Group on Computer Science Education), I organized a PhD students dinner in order to foster community among the CS Education doctoral community. Finally, I am also serving as Publicity Chair for the education workshop for SPLASH (SPLASH-E'15). I hope to continue many of these efforts even as I continue to develop the Computational Thinking course, conduct informative empirical studies on its participants, and create powerful new educational technology to support it.

Presentation on Practical Self-Regulated Learning by Dr. McCord at CHEP'15

posted Feb 5, 2015, 7:09 PM by Austin Bart   [ updated Feb 6, 2015, 7:27 AM ]

On Wednesday 2/4/15, I attended CHEP'15 here at VT.

The 7th Annual Conference on Higher Education Pedagogy is focused on higher education teaching excellence and the scholarship of teaching and learning.
- CIDER website

Although there was a lot of great stuff there, one panel stood out: 

"Integrating Self-Regulated Learning Activities into Your Course or Curriculum"
Dr. Rachel McCord
Self-regulated learning is an important skill set that allows learners to regulate their cognitive, behavioral, and motivational processes in their learning environments. While these skills are important, many students do not possess these skills when entering an undergraduate institution because they have not had the opportunity or have not needed the skills in previous learning environments. This session will demonstrate a number of techniques that can be integrated in existing courses to help develop and support self-regulated learning skills in undergraduate students.

In Computer Science, we have a big problem with self-regulated learning. I've seen so many students confident that they can do the entire project at the last minute, who then have to email the instructor 10 minutes before it's due with basic questions. In theory, students should come to college with these skills; in practice, this is where they learn them.

The slides of the talk are here. I've made my own notes on the most interesting things below. All credit for these ideas goes to Dr. McCord.

How Can I Make an "A" for this Course?

  • In 1-2 pages, students reflect on what they need to do to succeed in this course.
  • Page 9 of the slides gives some sample questions that they can answer (e.g., "What types of tasks will be engaged in during this course?")
  • Students might need to do some homework on this, studying the syllabus or asking the instructor some questions.
  • This activity is given after the first exam, when the strugglers of the course have been determined.
  • It is assessed loosely, possibly with a rubric.

How Will I Complete This Assignment?

  • Before an assignment (possibly before they even leave the classroom or the lab), have them complete a plan for how they would tackle the assignment.
    • This plan can even be overlaid on their own schedules - force them to identify a time when they can work on it
  • For each task, they should write estimates for how long they expect it to take.
  • Instead of saying how long you (the instructor) thinks it will take, show them how long it took students from previous semesters to complete it.

Explicitly Lay out SRL Plan

  • Use Zimmerman's model of Self-Regulated Learning to explicitly act out these three phases: Forethought, Performance, and Self-Reflection
    • Forethought: Spend part of the work session identifying the problem, jotting down ideas, noting where difficulties are
    • Performance: Spend part of the work session working on everything that a student can do individually
    • Self-Reflection: Go to office hours and get help.
  • Page 11 gives an example of this
  • This assignment is required for all students - the rhetoric is that "no matter how good you are, you are still a novice until you graduate."
    • But can be targeted at low-achievers

Student Generated Exam Questions

  • Students are responsible for writing the kind of question they would expect on the exam
  • You can do several different things with the final products:
    • The best question is placed on the exam, so students have an incentive to generate a great question
    • Let classmates/other cohorts complete them during a class activity
    • Have the instructor or a TA complete the problems as best they can, while verbalizing their thought processes (modelling)

Exam Wrapper

  • This was actually my favorite idea!
  • After an exam, rather than just re-answering questions, students have to fill out a table explaining why they got each question wrong
    • Carelessness
    • Unfamiliar material
    • Misinterpreted question
    • Did not complete
  • They can get some percentage of points back by filling out the chart
  • You can see an example on page 16

Letter to Next Class

  • Students write a letter to the next class, telling them what things they should know to succeed
  • Take the best snippets of these to give to the next class
    • Via the syllabus
    • Or via a handout
  • It's a really good reflective activity for students too.


Successful use of SRL strategies is strongly correlated with success in a course. Can we really modify students' behaviors in order to get the benefits of good SRL? I want to see more research on what implementing these strategies does for students in terms of learning gains.
