Design-based research (DBR) is a key method in the learning sciences, used to develop learning theory and the design of instructional interventions at the same time. This article is an exceptionally short introduction to DBR; there's tons of scholarship on this idea.
DBR begins with a design problem in education, such as teaching a difficult concept, improving a dismal undergrad success rate, or similar. DBR turns to learning theories (or other theories in education) to explain why that concept might be difficult to learn or why undergrads might not succeed, and to suggest what the shape of a solution might be. These ideas from theory inform the design of the intervention: new curricula, new learning environments, new experiences. The intervention is tested on real learners in an ecologically valid context, like a classroom (though early cycles may occur in more controlled environments). Data are collected on how learners interact with the intervention, preferably process data as well as outcome data. Data are analyzed in light of the theory to understand the ways in which the design and implementation of the intervention are successful and to suggest which elements of the design or implementation could be altered to improve student learning (or whatever the key outcomes of the design problem are). Researchers engage in multiple cycles of design construction, implementation, evaluation, and revision in order to (a) solve the design problem and (b) generate new knowledge & learning theory about student learning and development.
The outcome of a DBR project is twofold: a solution to the design problem and new knowledge about learning theories. Both aspects are necessary for a successful project; you can't just do stuff and not generate new knowledge about it, and you can't just generate new knowledge without finding a solution to the design problem.
While both the intervention design and the learning theory have an emergent quality across subsequent cycles, theory and intervention are present and necessary in every cycle. Because theory guides which kinds of data are meaningful, the general family of theories usually does not change radically from cycle to cycle. Similarly, because the intervention is intentionally chosen and modified, the design problem may grow or shift in subsequent cycles but generally does not change radically from cycle to cycle.
A few quick notes on language:
A "design problem" stands at the same level as "research question": it's a guiding idea that shapes your project. Design problems and research questions are living. They will grow and change with time.
I'm using "students" to mean "participants" because this is often used in classroom settings, but "participants" is also ok.
I'm using "intervention" to mean a wide variety of things: new curriculum, new course policies, new learning environments, new learning experiences, etc. The key parts are: there's something new about it, and it is enacted in a natural environment (e.g. a classroom, not just a psych lab).
Articulating a good design problem entails figuring out what problem you want to solve, why it is a problem, and what the shape of a feasible solution might be. As you work on this, keep the following in mind:
Be specific, and ground the problem in a particular context.
Who has this problem? How do you know?
What's the scope of this problem? What can you actually change or improve?
What is the current situation, and why is that a problem?
Talk to stakeholders: do they think this is a problem too? Why?
What other solutions (perhaps partial) already exist?
As you work on articulating your design problem, you may find that you need to conduct a literature review to understand elements of the problem or look for solutions to related problems. As you talk to more people about it, you may find that your ideas shift about what the problem is and why it is a problem. This is great.
Nota bene: a design problem is not a problem with your students. It's a problem in their environment that you could, in principle, solve. For example, you might notice that the student failure rate ("DFW rate") in calculus is 30% (yikes!). The design problem is not "my students are too poorly prepared" (a problem with the students); the design problem might be "our placement test doesn't sort students well" (a problem with the environment before calculus enrollment) or "calculus class doesn't support student learning well" (a problem with the environment of the calculus class).
Design-based research relies on theory to explain why these problems are problems and to suggest the shape of a solution. If you want to fix a problem, you need to understand why it is a problem so that you can tackle the cause. The job of theory is to suggest possible causes for the problem and to help shape potential solutions.
For example, you might notice that some students struggle to solve mathematical problems in their physics classes. Different theories conceptualize this struggle differently:
A cognitive theory of learning might suggest that students have a hard time seeing how to apply math ideas in physics contexts. We need to support their transfer of ideas by explicitly eliciting math class procedures and drawing direct parallels.
A different cognitive theory of learning might suggest that their problem solving skills need support, especially for complicated problems that coordinate math and physics ideas. We need to support their growing problem solving ability by scaffolding the steps of problem solving.
A socio-cultural theory of learning might suggest that different students struggle at different points. We need to support them in working together to connect and rediscover math ideas in this new context.
A critical theory of learning might suggest that students struggle to solve these problems because they seem irrelevant to their lives in the face of gross structural inequities. We should reimagine the topics our physics courses cover to better support a just and educated populace.
There are tons of different theories available to you. Thinking about why your problem is a problem and what kind of solution might solve it can suggest to you what family of theory you should use. Through your conversations with stakeholders and literature, you will discover potential theories. Try them on: how does each one suggest the shape of a solution? What feels satisfying to you? You may discover that you want to use different theories to articulate and refine different aspects of your design problem. That's normal.
Through conversations with stakeholders and the literature, you will start to see the shape of a possible solution to your design problem. The shape of this solution should include some kind of intervention that changes the environment in which your participants learn. For example, you might decide to develop new curricula, guided by the theory you chose. As you plan the intervention, consider its scope:
How much time do you expect your participants to interact with it? Very brief interventions are easier, but generally have less potential for impact.
How many topics will you cover? Which aspects of the class?
How many participants will you have? Are they enrolled together, or spread across multiple classes?
How many times can you iterate your intervention to make adjustments?
DBR projects rely on iterative cycles of implementation and development. As you think about your intervention, think about how many times you can iterate through it. Two cycles are the minimum; three or four are more common. It's also common to iterate through each piece several times, and to use lessons learned in each piece to suggest developments in other pieces. For example, if you want to develop seven new labs, you might plan to develop two in the first year, then refine them and develop five more in the second year, and refine all seven for coherence in the third year. For each cycle, identify: what is the minimum viable product? What additional features would be nice to have, but are not necessary in the core?
Build a plan for your intervention that explicitly maps between the theory you chose, the elements of your design problem, and the elements of your intervention (a rough sketch of such a map appears below). For each piece of the map, ask yourself:
How does this piece of the intervention help solve my design problem? How does it use theory?
How does this theory help articulate my design problem? How does this theory suggest the shape of the intervention?
Why does this design problem require this piece of the intervention? Why do I need this theory?
How does each iteration build new features or topical coverage in the intervention?
Building this map is a lot like engaging in conjecture mapping, a more robust tool from the learning sciences that explicitly draws together your theory and your observations to make meaning and draw conclusions.
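If it helps to keep this map in a concrete, structured form, here is one minimal sketch written as a small Python data structure; a spreadsheet, table, or diagram works just as well. Every problem element, theory link, and intervention piece in it is a hypothetical placeholder, not a recommendation:

```python
# A minimal sketch of a design-problem / theory / intervention map.
# All names here are hypothetical examples, not part of any real project.

from dataclasses import dataclass


@dataclass
class MapEntry:
    problem_element: str     # which part of the design problem this addresses
    theory_link: str         # how the chosen theory explains or motivates it
    intervention_piece: str  # the concrete design element
    cycle: int               # which iteration introduces or refines it
    mvp: bool = True         # is this part of the minimum viable product?


# Hypothetical example: scaffolding problem solving in an intro physics course.
design_map = [
    MapEntry(
        problem_element="students stall on multi-step math/physics problems",
        theory_link="cognitive: problem solving needs explicit scaffolding",
        intervention_piece="worked examples with faded scaffolds in labs 1-2",
        cycle=1,
    ),
    MapEntry(
        problem_element="students stall on multi-step math/physics problems",
        theory_link="cognitive: transfer requires eliciting math procedures",
        intervention_piece="warm-up prompts that recall the relevant math",
        cycle=2,
        mvp=False,  # nice to have; added once the core labs work
    ),
]

for entry in design_map:
    status = "MVP" if entry.mvp else "stretch"
    print(f"Cycle {entry.cycle} ({status}): {entry.intervention_piece}")
    print(f"  problem: {entry.problem_element}")
    print(f"  theory:  {entry.theory_link}")
```

The point is not the code; it's that every intervention piece is tied to a specific part of the design problem, a specific theoretical commitment, and a specific cycle, so you can answer the questions above for each row.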
The last piece of a DBR design is data. How will you know if your intervention is successful? In what ways does it succeed, and how should you make changes for the next iteration?
Your theory and design will help guide you about what kinds of evidence can show success. If you want students to learn more, for example, you will need to show what they knew coming in and what they know coming out; if you want them to have stronger problem solving skills, you need evidence of their problem solving skills; if you want them to feel like they belong in STEM, you need measures of belongingness. All of these are outcome measures of success: what is true at the end of the intervention?
Because DBR relies on iterations to make changes and improve the intervention, it is crucial that you also collect process measures of how your participants interact with your materials. It's not enough to know that your materials are helpful (or not); you need to know how students use them so that you can improve them. Common process measures include observations or interviews; it's also possible to look at artifacts like whiteboards or homework. Because you're going to use this information to help you make decisions about what or how to change, it's important that your data are as rich as possible: they should show processes of interaction among students and how they arrive at their answers, and include information like the problems students are working on as well as their solutions. You can't just compare answers on the homework to answers on the final exam: that information doesn't tell you how students got to each answer.
As you think about the design of each iteration, plan to collect evidence. Ask yourself:
What data will you collect? How will they be analyzed? Make sure these data are well-aligned with your theory.
Will some data streams be consistent in each iteration? Will some be unique to specific iterations?
How much data do you need to make good development decisions? To show success?
For some people, it's really tempting to collect all possible data because they don't know what they'll need, or because they think it might be interesting or useful later. For these people, the major danger is that they'll spend so much time on data collection that they won't have time for meaningful analysis. If this is you, ask yourself: How much capacity do you have to collect and analyze data between iterations, so that you can make good choices? Which data streams are necessary, and which are "nice to have"? Judiciously pare away data streams that do not directly help you solve your design problem or speak to your theory.
For other people, their access to these learning environments constrains what data they can collect. Perhaps they cannot do classroom observations because of university policies, or they don't have enough research staff to conduct interviews. If this is you, ask yourself: What information do I need to make evidence-informed choices about how to iterate? Can I get this information in another way? For example, instead of classroom observations you might ask faculty for their reflections on what went well, or instead of 20 in-person interviews, you might have 2-3 shorter Zoom interviews with a selection of stakeholders.
Return to your map between design problem, theory, and intervention. For each element, identify which data you will collect and how you will analyze them in order to (a) show success and (b) suggest iterative changes.
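If you kept a structured map like the earlier sketch, you can extend it with the planned evidence for each element. The sketch below is one minimal, hypothetical way to write that down in Python; every data stream and analysis named here is a placeholder. The useful habit it illustrates is the self-check at the end: each intervention element should have both process and outcome evidence, plus a note about which cycles it appears in.

```python
# A minimal sketch extending the design map with planned evidence.
# Every name, data stream, and analysis here is a hypothetical placeholder.

# intervention element -> planned data streams and analysis
evidence_plan = {
    "worked examples with faded scaffolds in labs 1-2": {
        "process": [
            "photos of group whiteboards (how solutions develop)",
            "short observation notes from each lab section",
        ],
        "outcome": [
            "pre/post problem-solving task scored with a rubric",
        ],
        "analysis": "compare scaffolded vs. unscaffolded problem attempts each cycle",
        "cycles": [1, 2, 3],  # collected every iteration for comparability
    },
    "warm-up prompts that recall the relevant math": {
        "process": ["student responses to the warm-up prompts themselves"],
        "outcome": ["exam items that require the same math procedures"],
        "analysis": "look for the recalled procedure reappearing in exam work",
        "cycles": [2, 3],  # unique to later iterations
    },
}

# Self-check: does every intervention element have both process and
# outcome evidence planned?
for element, plan in evidence_plan.items():
    missing = [kind for kind in ("process", "outcome") if not plan[kind]]
    if missing:
        print(f"'{element}' is missing {', '.join(missing)} data")
    else:
        print(f"'{element}' has process and outcome data planned")
```

However you record it, the map plus its evidence plan is what lets you argue, at the end of each cycle, both that the intervention helped and why, and what to change next.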