Blog


Thoughts from a Thoughtful TA Staff Meeting

posted Sep 25, 2017, 8:58 AM by Austin Bart   [ updated Sep 25, 2017, 7:55 PM ]

I'm used to being the only person in the room with an angry CS Education rant, so I was pleasantly surprised today when my UTAs all launched into an impromptu discussion about what's wrong with Computer Science Education. Although they could only frame their conversation in terms of the CS program here at Virginia Tech, the things they were describing are almost universal. Their observations match up with research. In no particular order, some of the things they talked about were:
  • Many students find assignments decontextualized (inert) and not interesting or useful.
  • Students often feel they are learning too much at once, as a single assignment shoulders many learning objectives.
  • If a course only has a few projects, students don't get the low-level skill practice they need before being thrust into a large-scale project.
  • Many of the courses are (intentionally or unintentionally) designed to be difficult, but students are not given all the emotional support they need.
  • TAs don't always have the training (whether content knowledge or pedagogical content knowledge) to help them.
  • Students aren't satisfied that the curriculum will make them "Computer Scientists"; they take many classes on topics like system architecture when they really want to be thinking about User Experience Design.
This is not the entire list, but these are some of the big issues. They may not use the research terminology, but they know what's wrong, and they could point out specific examples from their own experiences. These are problems that I've heard exist everywhere, and there are many well-known ways of handling them in the literature. Very few of these are "unsolvable", although many of them are solvable only with trade-offs.

The Department Perspective

Of course, I've now seen the other side of this.

I've always found my colleagues in the department committed to improving things when they are able. Most professors want to help students succeed, but professors are human. They face many expectations for research and service. I wouldn't accuse any of my fellow instructors of being lazy. When most of the reward system is oriented around advancing their research and not the classroom, it's a rational decision to work towards that. What incentives do professors have to "fix" their working curriculum (working, at least, according to all the measures they've seen used before)?

Fixing a course is hard. First, you need to be able to admit that things might be wrong with your course, and then you also need the knowledge of what to fix. You need to find the time to review and revise the material, which can be extensive. Developing new assignments, new lessons, new feedback for students... All of these things are hard. And of course, it will largely be wrong the first time, so you will need time to iterate on it after students encounter it. And don't get me started if new technology is needed!

The Virginia Tech Computer Science department, in my opinion, does seem to want to fix things. They hired Mohammed Seyam and me this past year explicitly to work on the undergraduate curriculum. There have been many committee meetings by people smarter and more experienced than I am about how to make improvements. My colleagues work hard to find ways to help more students succeed. The department gets a lot of support from the university and the college, and from great organizations like TLOS and NLI. But institutional change is hard! You cannot turn a ship on a dime. These problems may be present at VT in one form or another, but I would bet money that we are still way better off than many other research institutions.

I certainly don't want to sound critical of a great program like CS@VT. I would heartily recommend a Computer Science degree from Virginia Tech, and the students we graduate consistently go on to do amazing things. But I think it's helpful to understand what students criticize, and to think about what we are working on to improve things.

Improvements Start at Home

This UTA conversation grew out of comparing 1064 (my Python course) with how the course was previously taught. Some good ideas were tossed around on how to fix things that are still wrong with 1064. In particular, the point was raised that although students are getting a great amount of practice, they are not getting the "bigger picture". Right now, they're writing 1x lines of code, and they need some more experience with 10x lines of code. One of my TAs asked whether I'm open to changing things during the semester - yes, yes I am! The trick now is building those experiences so that everyone benefits and there's not too much burden on the TAs.

Another point was that my feedback mechanisms were too picky given how vague they are - I put that down more to the fact that this curriculum has only been used once. I think that will improve with time and effort. Right now, I'm moving so fast to create the lessons for the next module, I don't always give individual problems the time and attention they deserve. I know from experience that the more time spent writing clear problem descriptions and writing good immediate feedback, the fewer questions I get during office hours and the less stress students have. But I need to balance the quality over the entire semester, or the tracks will run out under the train. Still, it's great to be getting feedback, and I have many mechanisms in place to record it - TAs have to fill out weekly feedback reports, and students get surveyed very regularly.

Conclusion

Overall, I am pleased with this conversation. The CS community has so much work ahead to fix the situation we're in. But I'm so happy that these undergraduate teaching assistants were thinking and speaking on this matter at length, and of their own volition. I hardly said anything the entire time! Hopefully more and more students will feel confident about wanting more out of their CS education, and will help support our department's efforts to improve ourselves. The UTAs are "on the front lines" in a way that I don't think we are as faculty, so I think a lot of the grassroots change can start there. Now it's a matter of empowering TAs, professors, and other stakeholders with the tools, knowledge, and resources to be able to bring that change about.

Course Points over a Semester in Computational Thinking

posted Sep 20, 2017, 7:54 PM by Austin Bart   [ updated Sep 20, 2017, 7:58 PM ]

The Computational Thinking course at Virginia Tech is organized into 7 modules, with an interesting spiral-shaped curriculum design. Each of those modules can be seen as a collection of classwork activities, homework assignments, reading quizzes, projects, and attendance grades. Each of these is worth a certain number of points towards the overall final class grade. By the end of the semester, if a student has completed all the assignments perfectly, they will get 100% of these points and get an A. Naturally, most students miss a few assignments here and there, but the vast majority of students still manage to get an A.

Recently, in an email discussion with one of the TAs in the course, an interesting question was posed: what is the distribution of points over the course of the semester? I actually answered this question while creating a graph for my dissertation last year. I ultimately removed the graph from my dissertation, but I think it is too interesting to just throw away.

Explaining the Graph

Distribution of Course Points over the Semester
This graph might be a little confusing, so let's break down its parts:
  • The vertical light blue lines divide each module.
  • The thick dotted line is the total number of points a student could possibly have earned at that point in the semester. No student can get above that line, since there are no extra credit points.
  • The thin rainbow-colored lines are the students from that semester; you can actually see each student's trajectory as they accumulated points in the course.
  • The straight yellow line is an estimated trend line. It doesn't reflect anything real except to show that each module adds a roughly consistent number of points.

Graph Results

So from this graph, we can see a few interesting things:
  • The first module ends up being worth 20%, which is as much as modules 2 and 3 put together. I was initially a little anxious about this, but the TA pointed out that the forgiving spirit of the course allows students to reach out to instructors to make up those points. So I think it's good that there's a bit of a "wake-up call" for students at the beginning.
  • The middle module of the course, where students work on Python, stands out a little as a long module. However, the slope of the line flattens at that point. This module tends to be a little more difficult and programming-oriented than the others, and we have long thought we need to spend time revising it to make it more digestible.
  • In the penultimate module, the course slows down as students work on their final project in class. The only real assignments are some progress report submissions.
  • You can see it jump at the end when the final project and final code explanation grades get added.

Conclusion

What did I learn by reviewing this graph? Well, overall, the trend line does not vary too much from the actual points possible - despite some jumps, most of the course is a fairly smooth trip. There seem to be places where we could smooth things out further and make it slightly more consistent. However, I doubt most students would notice such a change.

I think it's nice to see how so many students get most of the points by the end, and how they stay bundled up near the top. It is somewhat amusing (disheartening?) to see the few students who flatline and then suddenly jump back up as they ask for assignments to be reopened and scrounge up points. The bulk of the students seem to stay in sync with the assignments, rather than trying to complete everything at the end of a module.

I am surprised that this graph came out as smooth as it did, since I wouldn't say that we intentionally laid out assignments with this graph in mind. I would be very interested in seeing this kind of analysis performed on other courses. I will certainly be doing so for my Python course, down the road. 
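
For anyone who wants to try this kind of analysis, here is a minimal sketch of how such a cumulative-points plot might be produced with Pandas and Matplotlib. The CSV layout and column names are assumptions for illustration, not the actual script behind the graph above.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical long-format gradebook: one row per student per assignment,
    # with the points earned and the points the assignment was worth.
    grades = pd.read_csv("gradebook.csv", parse_dates=["due_date"])
    grades = grades.sort_values("due_date")

    # Thick dashed line: the running total of points possible.
    assignments = grades.drop_duplicates("assignment")
    plt.plot(assignments["due_date"], assignments["points_possible"].cumsum(),
             "k--", linewidth=2, label="Points possible")

    # One thin line per student: their running total of points earned.
    for student, rows in grades.groupby("student"):
        plt.plot(rows["due_date"], rows["points_earned"].cumsum(), linewidth=0.5)

    plt.xlabel("Date")
    plt.ylabel("Cumulative course points")
    plt.legend()
    plt.show()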

A Response to "From Design of Everyday Things to Teaching of Everyday People"

posted Jun 27, 2017, 11:08 PM by Austin Bart

I made a comment to a blog post on Mark Guzdial's blog, and I wanted to repost it here. 

Original Post

My response:

Normative, Ipsative, and Criterion-based – how do we measure our success rate in CS Ed? Here, a criterion-based metric has been proposed: no more than 5% of the class can fail. When writing my dissertation, I compared our Computational Thinking course to other courses in the “Quantitative and Symbolic Reasoning” bucket (http://imgur.com/a/fUxsQ), for a normative comparison. When you wrote your “Exploring Hypotheses about Media Comp” paper, you used an Ipsative assessment to improve your CS1’s dropout rate (“… a course using Media Computation will have a higher retention rate than a traditional course.”).

It’s hard to make an argument that one of these approaches is better than the other. Criterion-based measures are usually formed when you look at many normative distributions, and so aren’t that different in practice. Ipsative can be unsatisfactory and insufficient for making honest comparisons. Looking at it normatively doesn’t always help that much, if we assume that most of our introductory courses aren’t doing a very good job.

But questions of what we’re basing this on aside, does 5% feel like the right number? Currently, we have roughly a 10% DFW rate in our CT course. I think we’re nearing the bottom of what we can reasonably do to bring those DFW students to success – most of these are students who stopped working for reasons outside my control or who had medical crises. I’m not sure I could squeeze out another 5% without really lowering my standards for success. And that’s in a non-majors class where my expectations are very different from what I want out of CS majors.

Ultimately, I think my big reaction is that assessment is really, really hard (e.g., Allison’s work) and we aren’t good enough at it yet to really be able to micromanage our pass/fail rates too much. Whatever arbitrary number we choose as success is tied very heavily to how we measure success in the first place.

I am now Dr. Bart

posted Mar 25, 2017, 1:07 PM by Austin Bart

Last Wednesday, I passed my final defense. I am now Dr. Bart. Onwards and upwards!

Content Area Grade vs. Course Component Grade

posted Nov 13, 2016, 3:20 PM by Austin Bart

Something bothered me when designing our last syllabus. The final product looked something like this:

  Course component    Percent of total course points
  Classwork           40%
  Homework            20%
  Final Project       30%
  Attendance          10%

This table breaks down the apportioning of points by course component. But this doesn't reflect the goal of my teaching. I am not concerned with how my students do "on classwork". I want to know whether they understood the actual topics that I derived from my instructional analysis. Ideally, I would be apportioning percentages by content area:

  Content Area       Percent of total course points
  Looping            40%
  Data Structures    20%
  Social Impacts     30%
  Syntax             10%

At some point over the semester, my students must convey to me that they sufficiently mastered the concept of how programs can loop over a list, and then they should receive some points. I want them focused on completing the topics, not "classwork".

This thought was triggered again tonight when I saw a presentation complaining about how Mean grades can be misleading. They offered an example similar to the following:

  Assignment #     Grade
  Assignment 1     95%
  Assignment 2     95%
  Assignment 3     95%
  Assignment 4     70%
  Assignment 5     95%
  Final Average    90%

They argue that, clearly, Assignment 4 was an outlier that should have been removed from the final grade calculation. This student consistently performed above 90% with only a single exception, and clearly something went wrong then. But if you're using the first syllabus I offered, then perhaps Assignment 4 was the activity on loops. It's not surprising that a student might do poorly on that one subject, even if they did well on much of the rest of the course content. And yet that topic is weighted as just one small piece of the final Classwork grade. If Assignment 4 really was the lesson on loops, then we should find it very concerning that the student doesn't understand the material, since it's so important! I would like the grading scheme to reflect that importance.
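
As a rough sketch of the difference, here is a small Python example that grades the table above two ways: as a plain mean over assignments (the presentation's calculation), and by content area using the weights from the second syllabus table. The mapping of each assignment to a single content area is a made-up assumption for illustration.

    # Scores from the table above, with a hypothetical content area per assignment.
    scores = {
        "Assignment 1": (95, "Data Structures"),
        "Assignment 2": (95, "Social Impacts"),
        "Assignment 3": (95, "Syntax"),
        "Assignment 4": (70, "Looping"),
        "Assignment 5": (95, "Social Impacts"),
    }

    # Content-area weights from the second syllabus table.
    weights = {"Looping": 0.40, "Data Structures": 0.20,
               "Social Impacts": 0.30, "Syntax": 0.10}

    # Component-style grade: a plain mean over assignments.
    mean_grade = sum(score for score, _ in scores.values()) / len(scores)

    # Content-area grade: average within each area, then apply the area weights.
    by_area = {}
    for score, area in scores.values():
        by_area.setdefault(area, []).append(score)
    area_grade = sum(weight * (sum(by_area[area]) / len(by_area[area]))
                     for area, weight in weights.items())

    print(mean_grade)   # 90.0
    print(area_grade)   # 85.0 -- the weak loops grade counts for much more

One weak score on the most important topic moves the content-area grade noticeably, which is exactly the signal the component-based scheme hides.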

Obviously, this should be balanced with mastery-based grading; give them as many attempts as they need, within a reasonable amount of time. Give them the lesson in as many different forms as they need. These thoughts aren't about punishing students for not performing perfectly. It's about getting them to put energy where I feel it is most valuable.

Harry Potter and the Tonal Analysis

posted Sep 24, 2016, 2:39 PM by Austin Bart   [ updated Sep 24, 2016, 2:54 PM ]

Harry Potter and the Sorcerer's Stone has, roughly, 5746 sentences, spread out over 17 chapters. I know this because I have recently tried making a new style of English dataset for the CORGIS project. Previously, we had attempted to create datasets for English majors by analyzing a corpus of books and computing some statistics (difficulty, sentiment analysis, etc.). This new approach would look at a single book, which actually gives us the ability to compute some new statistics.

One of these statistics is particularly interesting: Tone Analysis using IBM Watson's Tone Analyzer service (hosted on Bluemix). Essentially, this service gives you a variety of tonal information for a sequence of sentences. For instance, you can get the estimated "Sadness" of a sentence as a decimal value from 0 to 1, along with numbers for "Joy", "Fear", "Anger", and "Disgust".

So what happens when you put in the text of the first Harry Potter book?

Unfortunately, that's not particularly helpful. The results of tone analysis fluctuate so much over just a few sentences that we end up with a mess of lines. I believe the only reason the plot looks purple is that purple was the last tone drawn.

It is fairly easy to create a Rolling Average with Pandas+Scipy+Numpy. This helps tease out a more useful graph, especially when the tones are standardized.
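
A minimal sketch of what that might look like with Pandas (the file name, column layout, and window size here are assumptions, not the exact script I used):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical per-sentence tone scores: one row per sentence, one column
    # per tone ("Joy", "Fear", "Anger", "Sadness", "Disgust"), each 0..1.
    tones = pd.read_csv("hp1_tones.csv")

    # Standardize each tone so they're comparable, then smooth with a rolling mean.
    standardized = (tones - tones.mean()) / tones.std()
    smoothed = standardized.rolling(window=200, center=True).mean()

    smoothed.plot()
    plt.xlabel("Sentence number")
    plt.ylabel("Standardized tone (rolling mean)")
    plt.show()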


Suddenly, we can actually see trends! And those trends can be matched up against points in the book (Spoilers!):
  • Around sentence 1000, we see Hagrid confront the Dursleys and the famous "Yer a Wizard, Harry". Hagrid's pretty angry in this scene.
  • Around sentence 2000, Harry is meeting Ron and hanging out on the Hogwarts Express - high Joy!
  • And then around 2500, we head down to the dungeons with Snape and see a massive surge in Disgust.
  • Sentence 3000 is encountering Fluffy and then the Troll, which I suppose accounts for the surge in Disgust and Anger.
  • Briefly after that, near 3500, we can see the spike in Joy thanks to Harry winning the Quidditch match.
  • 4500 puts us in the Forbidden Forest - plenty of Fear and Anger there, apparently.
  • I believe that the surge right after 5000 represents the adventure down to the mirror.
  • And the last few hundred sentences of Joy show the ending of the book.
Now, it's not a perfect match-up, and there are a lot of other trends to explain here. But I did think it was interesting to see how we could pair up important events with peaks and troughs. Unfortunately, all the postprocessing I had to do puts this kind of analysis out of the reach of most of our students without more help. I have to think carefully about how to make this kind of data available in such a way that it's still useful. Dataset creation is tricky business!

Abstract Variables and Abstracting Cows

posted Jun 16, 2016, 2:16 PM by Austin Bart   [ updated Jun 16, 2016, 2:35 PM ]

In the Computational Thinking course, we talk about "Abstraction" as the concrete representation of real-world things using computational tools (although there are other definitions, this is what we focus on). Variables are one way to represent things, but there are others. Lately, I have been considering how different variables have different levels of abstraction, and the implications for pedagogy.

Let's walk through an example. Imagine we have a herd of cows in a field.



This herd could be represented quantitatively using the following table.

  Name       Weight (lbs)
  Bessie     600
  Pancake    750
  Abigail    800
  Average    716

In the bottom row of this table, we see an "average" cow. There is no actual cow in our herd with a weight of 716 pounds, but we understand this idea of an "average cow" as an abstraction.



Now, how would we represent this herd in a BlockPy program?
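
In textual Python, the idea might look something like this minimal sketch (the weights come from the table above; the course materials themselves show this as BlockPy blocks):

    # The herd, abstracted into a list of weights.
    my_cows = [600, 750, 800]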



We now have a variable that abstracts our herd of cows into a list of numbers, which can be easily manipulated. By abstracting our cows, we strip away unnecessary information (their height, their name, their genetic code, how many times they've mooed today, etc.).

We can create a variable that represents our imaginary average cow.
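
A textual sketch of that variable, using Python's built-in functions:

    # The imaginary average cow, computed with built-ins.
    average_cow = sum(my_cows) / len(my_cows)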


We cannot see the value of "average_cow" from this perspective, although we could print it out. It is a more abstract variable, dependent on time and other pre-existing variables.

We used built-in Python functions to quickly compute that average, but in the CT course, we don't allow students to do so. Instead, they have to use a for-each loop. The necessary code to calculate the average cow weight would be:
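
Roughly, in textual Python (a sketch using the five variable names listed below; students build this with BlockPy blocks):

    my_cows = [600, 750, 800]   # the herd, as before

    total_weight = 0            # running sum of the weights seen so far
    total_cows = 0              # running count of the cows seen so far
    for a_cow in my_cows:       # a_cow takes on each weight, one at a time
        total_weight = total_weight + a_cow
        total_cows = total_cows + 1

    average_cow = total_weight / total_cows
    print(average_cow)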



This code has 5 variables:
  1. my_cows
  2. average_cow
  3. total_weight
  4. total_cows
  5. a_cow

The variables "total_weight" and "total_cows" are similar to the "average_cow" variable, but they are at a higher level of abstraction than "my_cows" since they do not represent real physical entities.
  • How did we know to create "total_weight" and "total_cows"?
  • How did we know what to initialize them to?
  • How did we know to place them inside the loop to manipulate them?
And what about that "a_cow" variable? It represents each cow, but only over the entire course of the loop. It does not refer to a specific cow except at a specific moment during the loop's execution. To me, this represents an even higher level of abstraction than the other variables.
  • What is the relationship between "a_cow" and "my_cows"?
  • How do I mentally model this abstract cow?
  • How do I keep track of the cow over "time", when "time" is not immediately present in the program as written?
Looking at a completed program, I think many students are not able to recognize the varying levels of abstraction of these variables, and they struggle with writing this all from scratch.

Learning objectives in Computational Thinking

posted Mar 9, 2016, 7:46 AM by Austin Bart   [ updated Mar 9, 2016, 7:52 AM ]

In the past year, I've had a growing interest in formal methods of Instructional Design. One of my new favorite activities is writing learning objectives. I'm still developing that skill, but I thought it would be interesting to share some of the objectives I've written for the Computational Thinking course. As you can see, there are a large number of learning objectives, and I doubt we actually cover all of them in the CT course. There's still a lot to improve about the curriculum. There's also a lot to improve about these outcomes. I noticed while reviewing them that there isn't anything about students working with nested control structures, for instance. A short code sketch after the list illustrates a few of the data-related objectives.

  • Algorithms

    • Control Structures

      • State the control structures of algorithms

      • Differentiate between sequence, decision, and iteration [and function]

      • Explain the behavior of nesting control structures inside of each other.

    • Program State

      • Define the concept of a program's state

      • Describe how program state changes with respect to time

      • Trace program execution with sequence, decision, and iteration

      • Relate program state with the inputs and outputs of a program.

    • Decision

      • Identify the condition, body, else-body of a decision

      • Write a numerical condition

      • Write a Boolean condition

      • Write a condition with a logical AND or OR

      • Explain the behavior of commands inside a decision.

      • Solve a problem that requires decision

    • Iteration

      • Identify the iteration property, iteration list, and body of an Iteration

      • Write an iteration over a list of complex structured data

      • Write an iteration over a list of primitive data

      • Evaluate the name of the iteration property

      • Express implicit solutions in terms of explicit iteration commands

      • Explain the behavior of commands inside an iteration

    • Documentation

      • Explain the use, power, limitation, and danger of importing

      • Define documentation for an API

      • Identify the inputs and outputs for a given function of an API

      • Explain how to search for help on an API

    • Reporting

      • Predict the printed output for a string property

      • Predict the printed output for a string

      • Predict the printed output for an integer

      • Predict the printed output for a list

      • Differentiate between storing, importing, printing, and plotting

    • Create an algorithm to solve a problem involving real-world data.

  • Abstraction

    • Types

      • List the types of data used in this course (String, integer, float, Boolean, list, or dictionary)

      • Differentiate between simple/primitive types of data and complex types.

      • Differentiate between a property and a string

    • Real-world vs Code

      • Create an abstraction, including the real-world entity, stakeholder, properties

      • Instantiate an abstraction, including the values for each property

      • Code an abstraction as an integer, string, Boolean, list, dictionary

      • Interpret an abstraction to identify the real-world entity, potential stake-holders, limitations, and potential questions it could be used to answer

    • Property Creation and Manipulation

      • Create a value to print

      • Create a list of values for a plot

      • Append values to an empty list using iteration

      • Manipulate an existing property

      • Evaluate the name of a property for clarity, correctness, and lack of ambiguity

      • Calculate the result of an assignment involving constant expressions.

      • Calculate the result of an assignment involving self-referential expressions.

      • Estimate the result of an assignment involving a function call

      • Evaluate the benefit of storing the result of a call in a property for use

    • Dictionaries

      • Access an element of a dictionary

      • Access a dictionary inside of a dictionary

      • Identify the elements of a dictionary (name of key, type of value)

      • Differentiate between a dictionary and a list

    • Lists

      • Identify the type of a list

      • Explain the difference between an empty list and a non-empty list

      • Explain the purpose of the "append" function of lists

      • Identify how to access data within a list

    • Structured data

      • Outline the structure of data

      • Give the path to an element of structured data

      • Relate a path to the needed control statements to access that data (iteration, dictionary access)

    • Analysis

      • Criticize a data set for its limitations

  • Social Impacts

    • In Society

      • Identify the stakeholders, impact, conflicts, and pressures of a scenario.

      • Evaluate a conflict to identify an ethical action to take

      • Describe the pervasiveness of computing for a stakeholder

      • Identify stakeholder pressures and judge whether they are external or internal

      • Describe the privileges (or lack thereof) of stakeholders in relation to computing

      • Describe the privacy rights and expectations of stakeholders in relation to computing

      • Describe the powers of stakeholders over computing

    • In Your Life

      • Relate computing ethics to your profession.

      • Use ethics to decide on and defend your computing behavior

  • Data Science (Secondary objectives)

    • Plotting

      • Identify the scenarios where you would want to use a given type of plot

      • Differentiate between line plots, XY plots, scatter plots, histograms, bar charts, maps

      • Interpret a plot to answer a question

      • Evaluate a plot for its clarity and beauty

    • Statistics

      • Give descriptive statistics for a list of numbers (min, max, mean, std dev)

      • Explain the value of reporting the standard deviation alongside the mean.

    • Project Management

      • Develop a project iteratively

      • Debug a program that is not behaving as expected

      • Detect the problem in a program

      • Work on a project early, consistently, and in reasonable chunks.
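
To make a few of the data-related objectives concrete, here is a minimal sketch; the record and its field names are invented for illustration. It touches on accessing a dictionary inside a dictionary, following a path into structured data, and appending values to an empty list with iteration.

    # An invented, tiny record in the structured-data style used in the course.
    report = {
        "city": {"name": "Blacksburg", "state": "VA"},
        "weather": [
            {"month": "January", "temperature": 35},
            {"month": "February", "temperature": 38},
            {"month": "March", "temperature": 47},
        ],
    }

    # Access a dictionary inside of a dictionary (the path report -> city -> name).
    city_name = report["city"]["name"]

    # Append values to an empty list using iteration, ready for plotting.
    temperatures = []
    for record in report["weather"]:
        temperatures.append(record["temperature"])

    print(city_name)      # Blacksburg
    print(temperatures)   # [35, 38, 47]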

How to Get into CS Teaching from CS

posted Oct 29, 2015, 9:28 AM by Austin Bart   [ updated Jul 20, 2017, 9:21 PM ]

I've decided to try and share some introductory materials for getting into teaching. Mark Guzdial announced that he'd be posting the syllabus for his CS Teaching Methods course, and recently two CS people have asked me how to get into CS Ed research. Clearly this is the hot time to be dragging people into the best CS subfield.

The fact is that there isn't too much published Pedagogical Content Knowledge about how to teach Computer Science. There's a lot of Pedagogical Knowledge out there, and as CS people we already have Content Knowledge. But the community is still establishing what's different about teaching CS compared to, say, teaching someone how to make Cigar Box Guitars.

So most of this page is devoted to good teaching links, and some specialized CS information. At some point I may toss out my own model curriculum for a "CS Teaching Methods" course. There'd probably be a lot of theory (Instructional Design, Gagné's Learning Events, Motivation, SL Theory, Constructivism, Cognitivism, etc.) and a large amount of practice.

LearningAndTeaching.info

Formerly http://www.learningandteaching.info/ (I have mirrored the content: http://acbart.com/learningandteaching/LearningAndTeaching/www.learningandteaching.info/index.html)
Hands down the best resource I've ever found for growing as a teacher/researcher/learner is this website. Unfortunately, it went down a year or two ago for some reason, but you can still access it through the Wayback Machine or the mirror above!
The site is divided into three sections:
  1. Theories
  2. Practice
  3. The authors' thoughts
I recommend starting with the Theories section if you are new to education research, and with the Practice section if you are eager to start teaching and building curriculum.

Learning Theories

There are many theories related to learning, teaching, motivation, etc. This site gives you a crash course in all of them. It is overwhelming at times, and learning how to navigate these theories is tricky. Over time, you should come to view learning theories as lenses. Different theories are valuable at different times. For example,
  • Situated Learning theory can really help with high level conceptual skills that benefit from a lot of social interaction
  • Constructivism is powerful for describing partially structured learning experiences meant to build on prior knowledge.
  • Behaviorism is great if you have to learn a habit.
There's a lot more out there than just Behaviorism, Constructivism, and SL Theory, although if you stick to reading CS Ed literature, you might think those are all there are. Don't get caught in that trap; explore as many theories as possible and see how they can help you. It's like Object-Oriented vs. Functional programming: different tools at different times.


SIGCSE Proceedings, Journals, Etc.

There is a tremendous amount of research out there about how to teach CS. As a community, we're still trying to distill the best of it out, but you should definitely read some of these proceedings and journals. In particular, look at:
  • SIGCSE
  • ITiCSE
  • TOCE
  • ICER
  • Koli Calling
  • CCSC

The MUSIC Model of Academic Motivation

I do a lot of motivation research, and the MUSIC model is by far one of my favorite theories. I'm biased, since Dr. Jones is on my committee, but I asked him to be on my committee because it's a good theory, so there you go.

CS Teaching Tips

I don't agree with all the tips that are here. Some of them are quite interesting, some of them are quite useful. But definitely take them with a grain of salt.

Exploring Computational Thinking

Google has a really nice sample curriculum here for teaching Computational Thinking to K-12. I think it's a nice model of what we should be building as a community.

Get Out and Practice

There is no teacher like experience. You can read guides all day, but honestly you're better off getting real world experience as quickly as possible. Teaching is one of those domains where Situated Learning techniques really pay off. So work with a local high school to teach a class or something.

Motivation × Situated Learning

posted Sep 9, 2015, 8:55 AM by Austin Bart   [ updated Sep 9, 2015, 9:00 AM ]

As I write my prelim, I'm re-immersing myself in Situated Learning Theory. I've been obsessed with Instructional Design for the past few months, so I haven't thought much about this alternative educational theory. However, I started using it early in my graduate career for good reasons, and those reasons still make sense. I'm using a particular evolution of SL Theory [1] that breaks down the learning process into four components: the context, the content, scaffolds, and the assessment. In the latest iteration of my prelim, I'm crossing this with the MUSIC model of academic motivation to explore how the components provide opportunities to motivate learners - in particular, what the context and scaffolds bring in.

While working this out, I made the following breakdown to organize my thinking. I thought it was an interesting view of where motivation could apply within a learning experience. There's a lot more to a learning experience that isn't captured here, but this is what was useful for my research.
Situated Learning Components: Context, Content, Facilitations, Assessment

  • Example
    • Context: "Game Design"
    • Content: "For Loops"
    • Facilitations: Blocks-based environment, variable explorer, teaching assistants, etc.
    • Assessment: Exams, performance review, code review
  • eMpowerment
    • Context: Am I restricted by the context to explore what I want?
    • Content: Do I have control over the depth/breadth/direction of what I am learning?
    • Facilitations: Do these scaffolds let me accomplish things I couldn't? Do they artificially restrict me?
    • Assessment: Do I have the freedom to explore my limitations and successes in this assessment?
  • Usefulness
    • Context: Is this situated in a topic that's worth learning?
    • Content: Is the content itself worth learning?
    • Facilitations: Do these scaffolds let me learn enough to still be useful?
    • Assessment: Do I feel that performing well on the assessment is important?
  • Success
    • Context: Do I believe I can understand this context?
    • Content: Do I believe I can understand this material?
    • Facilitations: Do these scaffolds hinder me or help me?
    • Assessment: Can I succeed at this assessment?
  • Interest
    • Context: Is this situated in something I find boring/interesting?
    • Content: Is the material inherently interesting?
    • Facilitations: Do the scaffolds support my interest in the activity, or detract from the experience?
    • Assessment: Am I interested in the assessment experience?
  • Caring
    • Context: Does the context give opportunities for the instructor and peers to show they care?
    • Content: Does the content give opportunities for the instructor and peers to show they care?
    • Facilitations: Do the scaffolds give opportunities for the instructor and peers to show they care? Do the peers and instructors have an opportunity to provide support themselves?
    • Assessment: Does the assessment give opportunities for the instructor and peers to show they care?

This is only a small part of what I'm doing in my prelim overall. For instance, there's all the fancy software I'm writing to fill out the Facilitations (scaffolds) component. Still, I think this is an interesting way of looking at the components of the course.

[1] Choi, Jeong-Im, and Michael Hannafin. "Situated cognition and learning environments: Roles, structures, and implications for design." Educational Technology Research and Development 43.2 (1995): 53-69.
