Design Research

Design, build, reflect, improve, repeat

Design as Research

Bannan-Ritland (2003) described a framework for situating design as a research methodology. Her integrative learning design likens the development of educational experiences or interventions to an engineering process that occurs in a classroom environment. Practitioners or researchers define a problem, identify constraints, research relevant theory, and hypothesize possible solutions. They then enact their design and test it in a real learning environment, collecting and evaluating data from their efforts. Bannan-Ritland describes the steps of integrative learning design as follows:

  1. Informed Exploration: problem identification, literature survey, and problem definition
    1. A needs analysis is embedded, which is new-ish compared to traditional ed research
    2. Audience characterization, also new-ish (LAO is an example: it included stakeholder perceptions and the target audience as part of the study)
  2. Enactment Phase: the intervention is articulated and then redesigned in iterations in order to test hypotheses
    1. Design of an intervention
    2. Prototype articulation
    3. More fully detailed intervention
  3. Evaluation Phase: Local Impact
    1. Evaluate local impact: are stakeholders and clients well served?
  4. Evaluation Phase: Broader Impact
    1. Global applicability and sustainability
    2. Publication is augmented with discussions of adoption or adaptation in other situations
    3. Considers unintended consequences, both positive and negative, that may occur during transfer
(Bannan-Ritland, 2003)

Aligning Design with Theory

Design research may be used with the end goal of building educational theory, as Cobb et al. (2003) assert. Other writers do not constrain the design medium in the same way, seeing it as a versatile vehicle that can speak to both practical and theoretical challenges. In the most general sense, design research refers to the process of planning educational experiences or infrastructures, implementing them, and then improving them with the next iteration. Hannafin et al. stress the importance of ensuring that a design is well aligned with the theory that it purports to use in promoting student learning. They use the term grounded design to emphasize the connection between theory and praxis. Four conditions for a design to be considered grounded are: 1) the design is founded on a “defensible” theoretical framework, 2) the chosen methods are research-based and support the theoretical framework, 3) the methods are generalizable to more than one specific situation, and 4) iterative implementation and reflection validate the design (Hannafin, Hannafin, Land, & Oliver, 2016, p. 104).

In the context of a researcher-practitioner, Hannafin et al. also discuss the challenges of building constructivist and authentic inquiry classrooms, and they emphasize the importance of iterative design as a way to frame teacher improvement from year to year. They also present design as a framework within which a teacher might meet the challenges of inquiry learning while incorporating elements of direct instruction or objectivist practice. They assert that teaching practices and classroom environments are often mismatched with the stated priorities of a school or a teacher, and that such mismatch is especially common among those that promote constructivist ideals. They offer grounded design as a possible solution to this gap between ideals and praxis (Hannafin, Hannafin, Land, & Oliver, 2016).

Evaluating a Design

Viewing an educational design research project from the perspective of product design engineering provides some insight into how the value of a design might be judged. A major consideration of success is whether a particular innovation or design will be perceived as better than the technique or product that it replaces (Zaritsky, Kelly, Flowers, Rogers, & O’Neill, 2003). This highlights an important consideration for using a chatbot as an assessment: the so-called perceived relative advantage must be part of any evaluation of success. If practitioners do not view the chatbot as having greater value than a static exam item, dissemination of the assessment medium will be lackluster. Zaritsky et al. (2003) also draw on product design to delineate a process that includes identifying plausibility, detailing the design, prototyping, testing, redesigning, and diffusing the innovation. They highlight the use of both qualitative and quantitative methods in evaluating the success of a design. They identify other production-industry metrics that might be of use, including compatibility (the extent to which a design meshes with existing goals, values, or needs) and complexity (the degree of difficulty in using the end product).
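
As a purely illustrative sketch, perceived relative advantage could be operationalized quantitatively by comparing practitioner ratings of the two assessment media. The Python example below is hypothetical: the survey data, the variable names (chatbot_ratings, static_item_ratings), and the simple difference-of-means measure are invented for illustration and are not drawn from Zaritsky et al.

    from statistics import mean, stdev

    # Hypothetical 1-5 Likert ratings from practitioners who piloted both
    # assessment media (invented data, for illustration only).
    chatbot_ratings = [4, 5, 3, 4, 4, 5, 2, 4]
    static_item_ratings = [3, 3, 4, 2, 3, 3, 3, 2]

    def summarize(label, ratings):
        """Print the mean and spread of one set of ratings."""
        print(f"{label}: mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")

    summarize("Chatbot assessment", chatbot_ratings)
    summarize("Static exam item", static_item_ratings)

    # Perceived relative advantage, crudely operationalized as the difference
    # in mean ratings: a positive value suggests practitioners see the chatbot
    # as an improvement over the static item it would replace.
    advantage = mean(chatbot_ratings) - mean(static_item_ratings)
    print(f"Perceived relative advantage: {advantage:+.2f}")

In practice, such numbers would complement, not replace, the qualitative evidence the authors also call for.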