Rationale: This doctoral dissertation study focuses on how two different models of integers and integer operations -- a collecting-objects model using two-colored chips and a moving-on-a-path model using a number line -- compare in terms of helping students correctly operate on integers and develop meanings for the "-" symbol. More specifically, the researcher examines how the physical movements associated with each integer model, and the ways those movements coincide or conflict with the mathematics they represent, promote or hinder understanding of integers. She motivates this work by citing three broad findings in past literature: (1) students often struggle with integer operations, and this can hinder later mathematical learning; (2) both chip models and number-line models have been shown to be helpful to students in certain contexts; and (3) physical motion affects student learning. She positions her study as bringing together research on physical movement and integer learning, as well as filling a gap in the current research: existing studies for the most part focus on one model, not on comparing models directly.
Research questions (Study 1): The dissertation is organized into two studies. In the first study, the main research questions were: (1) Of the collecting-objects and moving-on-a-path models, which better supports learning of different aspects of integer knowledge? (2) How does the level of consistency of the students' physical movements with the underlying mathematics affect learning?
Findings (Study 1): There were two key findings. First, the moving-on-a-path model, using the number line, appeared to be more effective overall in supporting learning; students using this model performed significantly better on either the delayed posttest or both the delayed and immediate posttests for all assessed aspects of integer learning except for addition of additive inverses. Moreover, while students using the collecting-objects (chip) model performed better on the assessments of addition of additive inverses on the immediate posttest, this difference disappeared in the delayed posttest. The second key finding was that the advantage of the number-line model over the chip model was greatest when the movements required to use the chip model were inconsistent with the mathematics, e.g., when students had to add chips to the model before they could do the "taking-away" needed for a subtraction problem, as when solving -2 - -6 by starting with two negative chips, adding 4 positive and 4 negative chips (for a net gain of 0), then taking away 6 negative chips (leaving 4 positive chips, or 4, as the result).
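The chip-model bookkeeping described in the second finding can be made concrete in a short sketch. This is my own illustration, not anything from the dissertation; the function name and the choice to return the count of zero pairs added are hypothetical conveniences:

```python
def chip_subtract(minuend, subtrahend):
    """Model minuend - subtrahend with two-colored chips.

    A positive integer is a pile of positive chips; a negative
    integer is a pile of negative chips. Subtracting means taking
    away the subtrahend's chips.
    """
    pos = max(minuend, 0)    # positive chips on the table
    neg = max(-minuend, 0)   # negative chips on the table
    take_pos = max(subtrahend, 0)   # positive chips to remove
    take_neg = max(-subtrahend, 0)  # negative chips to remove

    # If there are not enough chips of the needed color, add zero
    # pairs (one positive + one negative each) first. This is the
    # step whose physical motion (adding) conflicts with the
    # mathematics (subtracting) in problems like -2 - -6.
    zero_pairs = max(take_pos - pos, take_neg - neg, 0)
    pos += zero_pairs
    neg += zero_pairs

    pos -= take_pos
    neg -= take_neg
    return pos - neg, zero_pairs

# The example from the findings: -2 - -6. Start with 2 negative
# chips, add 4 zero pairs, remove 6 negative chips.
print(chip_subtract(-2, -6))  # (4, 4): result 4, with 4 zero pairs added
```

A problem like 5 - 2 returns a zero-pair count of 0, matching the point that the chip model's movements only conflict with the mathematics in certain cases.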
Research questions (Study 2): In the second study, the main research question was: How do students reason about the "-" symbol in various positions within mathematical expressions, and does the model they used during instruction affect their reasoning?
Findings (Study 2): The key finding was that both models appeared to support correct interpretations of "-" as indicating subtraction and as indicating a negative number; however, the number-line model appeared to better support interpretation of "-" as the operation of taking an opposite.
Response: As I have interest in both the development of mathematical thinking and in embodiment and gesture research, I found this work quite interesting. I was most intrigued by the notion of comparing the alignment of the physical gestures needed to use a model with the underlying mathematics. While I agree with the interpretation of the results -- it does seem that the inconsistencies hindered students' abilities to correctly calculate -- I do wonder whether this finding suggests abandoning the chip model altogether in favor of the number-line model for early learning, as suggested by the author, or whether it might be equally effective to reframe use of the chip model to emphasize that the minuend needs to be transformed before the subtrahend can be taken away. This transformation could be connected to the idea of trading 10s for 1s in whole-number subtraction, providing a different (and, I think, more powerful) sort of connection to earlier work than the connection to taking-away.
2. Tatar, D., Harrison, S., Stewart, M., Frisina, C., & Musaeus, P. (2017). Proto-computational Thinking: The Uncomfortable Underpinnings. In P. J. Rich & C. B. Hodges (Eds.), Emerging Research, Practice, and Policy on Computational Thinking (pp. 63-81). Cham, Switzerland: Springer.
Rationale: The authors point out that the approach of offering programming or other computer-science-specific courses in K-12 has the potential to perpetuate inequities, as the students in the schools with the most resources will be the most likely to have access to such courses. Integrating computer science and computational thinking with core subjects, however, can reach all students because core subjects are required for advancement through school and graduation. Hence, the authors argue for the need to explore the integration of computational thinking (CT) into core subjects in middle school.
Research questions: The authors do not explicitly state a research question, but I would characterize their work as attempting to answer questions such as the following:
Findings and evidence: The authors describe a design-based, participatory research project wherein teachers identified a skill or topic that was particularly difficult for their students, undergraduate students in CS and engineering proposed a project that would use CT to address the problem, and the activity was iteratively designed and then used at least once in the classroom.
The results were a set of activities that were reasonably successful in addressing the teachers' problems of practice. The authors illustrate this through two detailed examples, one focused on fraction learning and the other on conducting internet searches. They note that the CT-related content may not be recognizable to computer scientists as CT, and instead may be better characterized as "proto-computational thinking" (PCT), or something that is building readiness for later computational thinking. For example, the CT originally intended to be addressed in the fractions activity was connecting representations, and instead the finished activity simply emphasized different aspects of representations without requiring students to connect them.
The authors characterize this finding as "uncomfortable" because they came to the idea of PCT by addressing the needs of students falling behind grade level. It was through attention to the SES achievement gap that the idea of explicitly building readiness for CT came out, calling into question whether the field should be teaching CT to anyone without addressing PCT first.
Response: This piece really resonated with me, as I've been struggling for several years with the relationship between computational thinking and mathematics. On one hand, many processes of logical thinking are used in both CT and mathematics, so it can be hard to tell them apart. On the other hand, many argue that CT must necessarily involve some attention to a computer -- programming, or reformulating a problem to be solvable by a computer -- and so the two can seem separate. This piece identifies a kind of prerequisite to CT that could be more closely connected to mathematics than CT proper. I think the idea of proto-computational thinking might resonate well with elementary school teachers, who are unfamiliar with CT and dubious of introducing a brand new subject in their classrooms.
ADDENDUM: Notes on Methods and Measures
Research design: The researchers describe their methods as a combination of design-based research and participatory design, supplemented by "ethnographically informed qualitative methods."
Data collected: Semi-structured interviews of teachers; video and notes of classroom observations (both before designing the new activities and during use of the new activities).
Measures and Analysis: The researchers describe the focus of their participant observations as trying to understand the relationships between the teacher, students, and instructional materials. However, very little information is given about how they analyzed the various forms of data and how they came to the results presented in the paper.
Thoughts: There are two ways to interpret the lack of detail in methods. One interpretation might be that this article describes more of an experience report than a study, per se. Another is that expecting specific measures is only appropriate for certain kinds of research. I'm not sure how to look at this issue. UPDATE: Per my discussion with Jack on November 13 (see Meetings page), it seems that the second interpretation is correct.
3. Moyer-Packenham, P., Salkind, G., & Bolyard, J. (2008). Virtual Manipulatives Used by Teachers for Mathematics Instruction: Considering Mathematical, Cognitive, and Pedagogical Fidelity. Contemporary Issues in Technology and Teacher Education, 8(3), 202-218.
Rationale: Virtual manipulatives are now available to teachers in a variety of free online collections, and professional development workshops focusing on use of such manipulatives are becoming more common. The authors claim, however, that not much is known about how teachers use virtual manipulatives in their classrooms. This study fills that gap.
Research Questions: The authors state the research question as: "What virtual manipulatives are used by teachers in mathematics lessons and how are they used?" (p. 206).
Key Findings and Support: Through analyses of 95 teacher-created lesson summaries that involved the use of virtual manipulatives, the authors reported the following findings:
Response: I liked this study because it looked at virtual manipulatives from a teacher's perspective, whereas a lot of studies of virtual manipulatives I have seen in the past focus only on students' perceptions. This alone made it a nice window into the overlap between two of my research interests: teacher planning activity (and curriculum adaptations), and students' use of virtual manipulatives. I found the results quite interesting, though I do think the authors speculate rather a lot about the implications of the results. I do not think that teachers' use of virtual manipulatives by themselves in half of the lessons necessarily implies, for example, that the virtual manipulatives are doing the job of two separate representations. The discussion section of this paper almost felt like it belonged in a different paper. However, it did spark my thinking quite a bit, so I chose to keep it in my RDP collection.
ADDENDUM: Notes on Methods and Measures
Research design: Not clearly specified, but it appears to be a form of document analysis?
Data collected: Lesson summaries from 116 teachers. The lesson summaries included both teachers' plans and their self-reported descriptions of what actually took place in the lesson. Only 95 of the summaries used VMs, so these were the object of analysis.
Measures and Analysis: Analyses, conducted sequentially, focused on extracting the following from the lesson plans: the mathematical content taught, the type of VMs used (though "type" is not defined), the categories of how teachers used mathematical tools (with categories defined via constant comparative analysis of the full corpus of lessons), and whether or not VMs were used in conjunction with physical manipulatives.
Thoughts: This is a kind of descriptive coding and analysis that I find characterizes a lot of exploratory research.
4. Anderson-Pence, K., & Moyer-Packenham, P. (2016). The Influence of Different Virtual Manipulative Types on Student-Led Techno-Mathematical Discourse. Journal of Computers in Mathematics and Science Teaching, 35(1), 5-31.
Rationale: The authors point out that influential organizations (such as NCTM) and documents (such as the Common Core State Standards for Mathematics) call for students to learn how to engage in mathematical discourse as part of their mathematics learning. They also point to the widely available collection of virtual manipulatives (VMs) being used by teachers, and call for research on how these free and potentially transformative tools relate to students' mathematical discourse.
Research Question: "How do different VM types influence the levels of generalization, justification, and collaboration in students’ mathematical discourse?" (p. 7)
Key Findings and Support: Through analysis of students' discussions while using three types of VMs -- those with linked representations (linked VMs), those that attempt to replicate physical manipulatives virtually (pictorial VMs), and those that provide direct feedback on particular strategies for their use (tutorial VMs) -- the authors found the following:
The authors use these findings to argue that linked VMs, in particular, show potential for promoting quality student discourse.
Response: One key part of the discussion in the article that struck me was the following: [When it comes to pictorial VMs] "The meaning of the representation was not as explicit as with the linked VMs. Therefore, the students had to assume responsibility for making connections for themselves." The text goes on to frame this as something negative, citing the result that the quality of discourse was lower with pictorial VMs than with linked VMs. However, in my experience, one of the key debates about use of technology in mathematics classrooms is whether the technology takes too much mathematical authority away from students. That is, some naysayers against too much tech in the classroom would say that the fact that pictorial VMs require students to make their own mathematical connections is a good thing. I would have liked to see more discussion of the pros and cons of this within the article.
ADDENDUM: Notes on Methods and Measures
Research design: Quasi-experiment. All students were exposed to all three types of VMs. It's not clear that there was a control group or the kind of randomization required for a true experiment.
Data collected: Screen recordings and video of pairs of students working with different kinds of virtual manipulatives.
Independent and dependent variables: IV was the type of manipulative: pictorial, linked, or tutorial. DV was the quality of the pairs' discourse (levels of generalization, justification, and collaboration).
Measures and Analysis: Quality of discourse was measured through coding of discourse turns using a scale of 0 to 3 to indicate levels of generalization, justification, and collaboration. The scales were used to compute a composite score for each episode on each scale. These composite scores were then compared with ANOVA tests.
Thoughts: This is a nice example of how a seemingly qualitative data source can be quantified.
5. Gueudet, G., Pepin, B., Sabra, H., & Trouche, L. (2016). Collective design of an e-textbook: teachers' collective documentation. Journal of Mathematics Teacher Education, 19(2-3), 187-203.
Rationale: The authors point out that e-textbook use is on the rise, and that although researchers have begun to study how teachers take up and use e-textbooks (in comparison to print textbooks), less attention has been paid to the way that e-textbooks enable teachers to influence the design of textbooks much more than before. This study aims to begin filling that void by studying how a group of teachers collaboratively designed a digital textbook from shared resources.
Research Questions: The research questions were as follows (this is a slight paraphrase from the article): For the specific case of an e-textbook,
1. What are the design processes teachers used to create one?
2. What factors shape the choices of the teachers for the book's content and structure?
3. What are the consequences for the design in terms of the development of a community of authors?
Key Findings and Support: To answer the research questions, the researchers analyzed use and creation of resources in relation to discussions on the development discussion board. Specifically, they looked at instances where a teacher expressed a disagreement with a coauthor and followed how negotiation within the discussion threads was reflected in the shared resources that were produced. The main findings were:
Response: I was excited to read an article that focused so directly on a key interest of mine: how digital environments offer different design considerations for textbooks (as compared to the creation of print textbooks). It was very interesting to me to read about teachers' negotiation of the tension between flexibility and coherence, and how they developed "kernels" to reflect the essential structures that couldn't be changed. I would like to know more about how they decided what was inside the kernel and what was outside it.
ADDENDUM: Notes on Methods and Measures
Research design: Case study.
Data collected: Web-based discussion strings and resources offered on the community mailing list and developed from teachers' discussions.
Measures and Analysis: Following their theoretical frameworks (documentation and cultural-historical activity theory), the researchers were attempting to trace and describe the development of both personal documents and collective documents. They located these documents by extracting statements of personal belief from the discussion chains, looking to see if these beliefs connected to resources (signifying a personal document), and then traced discussions further, looking at changes in the beliefs and documents (changes indicating a potential collective document).
Thoughts: Based on what I remembered of this article before revisiting it, I thought I was going to have a hard time finding any discussion of measures. I remembered it as pretty descriptive, like the Tatar et al. piece above. These authors do have a pretty clear description of their analytic method. However, I still find it hard to connect this method directly to their description of results. I'm left wondering if the open-endedness and exploratory nature of the study contributes to this. Are explorations really measuring something? If not, what's a different way to think about the methods for these studies that might be more productive? UPDATE: Per my meeting with Jack on November 13, my new understanding is that this is appropriately labeled descriptive research.
6. Frykholm, J., & Glasson, G. (2005). Connecting Science and Mathematics Instruction: Pedagogical Context Knowledge for Teachers. School Science and Mathematics, 105(3), 127-141.
Rationale: The authors point out that scholars have been calling for integrated instruction for over 100 years, but teacher education programs have not been designed to help teachers teach in an integrated way. This study aims to fill that hole. Drawing on research that shows that learning is contextually based (e.g., Lave's studies of situated cognition), they propose a teacher education course designed to help preservice teachers find contexts that serve as ways to connect mathematics and science.
Research Questions: The research questions were as follows: What are preservice teachers' understandings and experiences related to connected math and science teaching? What are their perceptions of their own knowledge base for teaching math and science in an integrated way? How does a context-based approach to connecting math and science influence preservice teachers' thinking about these issues?
Key Findings: Participating preservice teachers worked in groups (mixed according to whether math or science was their chosen specialty) to create an integrated math and science curriculum unit focused on a shared context. The key findings, based on analyses of the units, participant journals, and pre- and post-interviews, were as follows:
Response: Two aspects of this article were particularly interesting to me. First, collaboration between mathematics and science teachers was key to the participants' experiences with integration. When I draw a parallel to my particular research interest, which is integration of mathematics with computational thinking, it strikes me that there are no computational thinking teachers with whom mathematics teachers might collaborate, at least at the elementary level. Thus, it is unclear how this promising structure could be applied in math + CT integration courses. Second, the result showing that teachers mostly viewed mathematics as a tool for accomplishing other things, and the authors' concern that this is an impoverished view of mathematics, was interesting to me because I am not bothered by thinking of mathematics as a tool. Indeed, one of the reasons I think math and CT might be a good fit for integration is because each has a tool-like nature and can be applied to other areas.
ADDENDUM: Notes on Methods and Measures
Research design: Researchers collected many types of data for triangulation, and based their analytic approach on the work of several qualitative researchers. The processes included open coding and domain analyses.
Data collected: Audio-taped large-group discussions, audio-taped small group collaborations, observation notes, written work of students in response to instructor questions, participant journal entries, audio-taped group presentations, finished curriculum units and lesson plans, classroom observations.
Measures and Analysis: Based on the stated research questions, one could say the researchers were attempting to measure "understandings and experiences" of the participants, their perceptions of their own knowledge, and influences on their thinking. There was no formal measure implemented of these things, but rather the areas of interest likely influenced the themes that the researchers saw in the data.
Thoughts: This study was open ended and exploratory like some of the others I have revisited, but I thought it did a really nice job of explaining the grounding of the analytic approach and the emergence of the measures (in contrast to the other studies).
7. Hansen, A., Mavrikis, M., & Geraniou, E. (2016). Supporting teachers' technological pedagogical content knowledge of fractions through co-designing a virtual manipulative. Journal of Mathematics Teacher Education, 19(2-3), 205-226.
Rationale: The authors based their work on three generalizations from previous research: (1) PD focused on student thinking is productive for teachers; (2) teachers adapt resources as they use them in their instruction; and (3) communities of practice are beneficial. The authors aim to bring together these three promising ideas related to teacher learning by engaging teachers in the process of codesign of a digital tool (related to the adaptation in (2)), in the context of a professional development (PD) workshop that is focused on student thinking about fractions (as in (1)) and designed to promote a community of practice (as in (3)).
Research Questions: Though not stated explicitly in the article, the following research questions seem to guide this work:
Key Findings: The findings were reported with reference to the phases of research. Phases 1 and 2 were related to the workshop itself, whereas phases 3 and 4 were related to teachers' use of the virtual resource in their teaching. Key findings related to phases 1 and 2 included:
Key findings related to Phases 3 and 4, which were reported with respect to one particular teacher, were:
Response: I thought this article provided reasonable evidence that engagement with the manipulative in question (Fractions Lab) prompted new kinds of thinking about fractions in teachers. What I don't think the article did was provide enough evidence that the process of co-design, specifically, is what led to the teachers' (self-reported) learning. Rather, based on the evidence provided, it seems that the same results could have been achieved through professional development helping teachers to discern the affordances and constraints of the tool, without bringing design into the discussion. The authors purposefully did not focus on the co-design process, saying that they wanted to focus on the role of the tool in the community of practice. This would have been fine with me if they had not continually connected teachers' activity with the process of co-design in the discussion. There is a disconnect there. I liked this article as an example of the role of digital tools in teacher learning, but not as an example of the role of co-design in teacher learning.
ADDENDUM: Notes on Methods and Measures
Research design: Design-Based Research (DBR), with a focus in this article on the cycles of revision based on teachers' feedback. Dual attention was given to the development of the tool and changes in teachers' learning, via a cooperative inquiry approach where teachers were considered co-researchers.
Data collected: Within Phases 1 and 2 (study of teachers' experience in a codesign session), the data were recorded discussions among teachers and survey results showing teachers' thoughts on whether the experience changed their own understanding or plans for how to teach fractions. In Phases 3 and 4 (study of teachers' use of the codesigned tool), the data were recorded sessions of a teacher working with a small group of students and open-ended interviews with the teacher immediately following this interaction and 6 weeks later.
Measures and Analysis: Although this is not stated explicitly, the researchers seem to be measuring teachers' use of different types of knowledge by analyzing their discussion, survey responses, and interviews through the lens of the TPACK framework.
Thoughts: Based on my memory of this study before revisiting it, I would have expected some explicit measures of changes in the tool or its effectiveness. Instead, all the procedural discussion focuses on the analysis of teachers' knowledge through use of the TPACK framework. (In retrospect, this makes sense given that the article appears in the Journal of Mathematics Teacher Education.) Then the results discussion is dominated by discussion of the Likert-scale responses to the survey. Overall this attention to the methods leaves me quite confused about the intent of the study.
8. Hoyles, C., Noss, R., Vahey, P., & Roschelle, J. (2013). Cornerstone Mathematics: Designing digital technology for teacher adaptation and scaling. ZDM - International Journal on Mathematics Education, 45(7), 1057-1070.
Rationale: The authors discuss the many barriers to widescale adoption of technology innovations in mathematics education, one of which is the lack of attention to providing support to teachers who must make adaptations for their particular contexts. Much existing research, they point out, looks at the efficacy of a particular innovation in a particular context, without attention to how different classrooms might need to change the implementation. By studying the implementation of a technology innovation, previously shown to be effective at promoting student learning in the U.S., in schools in England, these researchers are attempting to better understand how technology innovations can support adaptations to context while still maintaining their efficacy.
Research Questions: Using mixed methods, the researchers explore these five questions:
Key Findings: The achievement scores of students in England were similar to those of American students in the treatment group of a previous random control trial of the SimCalc materials. Teachers found the materials manageable, useful, and engaging to students. Teachers often adapted the lessons to have a standard 3-part format that is considered the standard in England. They often adjusted the pacing, but were enthusiastic even when the pace was slowed, because they believed that the ways the technology was pushing their pedagogical thinking and students' mathematical thinking were productive. Most teachers used the technology tools almost every day, suggesting they had overcome any initial trepidation. They saw the dynamic representations as powerful tools for revealing and revising student misconceptions.
The researchers synthesize these results into three design principles for technology innovations that support teacher adaptation:
Response: This article focuses on a tension in the design of curriculum materials that I find particularly interesting: how do we make materials adaptable while still maintaining their efficacy? I like the three design principles and agree with them, although I found it frustrating that the article gave little advice about how to operationalize them. In particular, I was frustrated by the lack of clear description of the third principle. I don't know what they mean by grain size or manipulable element, even though I can intuit a general idea of what they may be getting at.
ADDENDUM: Notes on Methods and Measures
Research design: Mixed methods. The researchers state one quantitative research question and four qualitative research questions (categorizing them explicitly).
Data collected: Researchers collected six types of data: a proforma from teachers (to establish a baseline), lesson observations, post-observation interviews with teachers, student questionnaire responses, focus group discussions, and pre- and post-test data.
Measures and Analysis: The researchers describe what each of their data sources was intended to measure. The proforma measured the current manner of teaching mathematics in the school; the lesson observations measured the manageability of lessons, engagement of students, and teaching style of the teachers; the interviews measured teachers' preparation for teaching the lessons and perceptions of the impact of the lessons on students; the questionnaire measured students' views about mathematics lessons; the focus group discussions further measured students' views of the lessons; and the pre- and post-tests measured student achievement. No specific analysis plan is articulated in the article.
Thoughts: I appreciated the way the authors separated out both their questions and results as qualitative and quantitative. I'm still confused about some aspects of the analysis, but like this as an example of a mixed methods design. UPDATE: This is a case where it is a little ambiguous to me whether it makes sense to talk about measures. Are the researchers measuring manageability of lessons, for example, or describing it? Or both?
9. Drijvers, P., Tacoma, S., Besamusca, T., Doorman, M., & Boon, P. (2013). Digital Resources Inviting Changes in Mid-Adopting Teachers' Practices and Orchestrations. ZDM - International Journal on Mathematics Education, 45(7), 987-1001.
Rationale: The authors point out that technology's ability to support learning depends on teachers' ability to exploit its potential. Many studies examine use of technology by so-called early adopters, or teachers who are enthusiastic and committed to its use. However, for widespread effective use of technology to take hold, the authors argue, more attention has to be paid to other groups of teachers. This study examines use of technology by mid-adopting teachers, whom the authors describe informally as teachers who are willing to experiment with technology but skeptical or hesitant, and formally define as teachers who taught 20 or fewer lessons using technology in the past school year (but still volunteered to participate in this study).
Research Questions: "[1] In which ways do mid-adopting teachers with limited experience in the field of digital resources in mathematics education orchestrate technology-rich activities? [2] How do this repertoire of orchestrations and the corresponding TPACK skills change during participation in collaborative teaching experiments?" (p. 990).
By orchestrate, the authors mean organize and set up resources in a learning environment and make a plan (edited moment by moment) to use them.
Key Findings and Evidence: The key findings were as follows:
Response: I thought the questions asked in this article were useful and interesting, and the analysis seemed sound. However, I'm left wondering what the implications might be. The authors report that after some initial focus on the technology itself, teachers were able to return to much of their normal teaching practice, with heavy use of pedagogical content knowledge and less use of technology-related knowledge. I'm unclear, though, whether this is a good thing or a bad thing. On one hand, if teachers were able to use techniques they are familiar with and existing knowledge, this might support more teachers taking up the use of technology. On the other hand, a return to standard teaching practices might suggest that the potential of technology is not being exploited. I did not feel that the data presented or the authors' commentary shed much light on this tension.
ADDENDUM: Notes on Methods and Measures
Research design: No design is explicitly stated. I would call this an uncategorized qualitative study.
Data collected: Video recording of lessons, video recording of discussions between teachers and researchers, surveys of teacher attitudes in the research meetings, and post-project questionnaires (completed by teachers).
Measures and Analysis: The researchers explicitly map the data sources onto the research questions, saying that the videos of lessons map onto question 1, and the lesson video and both surveys map onto question 2. They measure teachers' orchestrations by developing and using a code book of orchestrations on the lesson video. They measure changes in orchestration by examining the orchestration codes across the course of the experiment. They measure change in teachers' TPACK skills by coding the video data according to teachers' use of different forms of TPACK knowledge and examining how these codes map onto the orchestrations.
Thoughts: The mapping of the data onto the questions was very helpful in terms of identifying what the data was intended to measure.
10. Kalogeria, E., Kynigos, C., & Psycharis, G. (2012). Teachers' designs with the use of digital tools as a means of redefining their relationship with the mathematics curriculum. Teaching Mathematics and its Applications, 31(1), 31-40.
Rationale: The authors claim that in Greece, teachers typically follow print textbooks to the letter, aiming to exactly meet the mandated curriculum and consequently not leaving much room for student exploration. With the intent of helping teachers shift their pedagogy to focus on meaning-making, the authors presented teachers with what they called scenarios (or sometimes, interestingly, "half-baked microworlds"): partially developed programs that must be fixed in order to accomplish an intended goal, such as drawing a polygon. They sought to explore the affordances of these scenarios for helping teachers develop more innovative lesson plans.
Research questions: Although no research questions are stated explicitly within the article, the reader can infer them to be as follows:
1. What are the main characteristics of the scenarios designed by teachers, and how do these relate to the formal curriculum?
2. How did teachers use their scenarios to promote students' conceptual understanding and meaning-making?
Key Findings and Support: Teachers engaged with some example scenarios and then designed their own scenarios for use in their own classrooms. There were two key findings. First, teachers sometimes used the same digital tools to teach different topics (e.g., used the equilateral triangle tool to address regular polygons and rotation symmetry) and used different approaches to teach the same content (e.g., asking students to construct a hexagon via use of paths with turns and via rotation of an equilateral triangle). The authors claim these varying approaches are in contrast to what they see in typical Greek schools, but no evidence is offered for this. Second, teachers planned for their students to interact with the tools in four different ways: direct manipulation of sliders to change variable values, setting sliders to particular values that "fix" a figure, using sliders to verify the correctness of conjectures about which variables affect a shape, and composing figures via manipulation of the number of iterations of a component shape that will be produced. As evidence, researchers point to examples of each of these tool-use schemes within given example scenarios.
Response: I requested this article from Interlibrary Loan because I thought it was going to be directly relevant to one part of my research interest -- namely, the teacher-curriculum relationship with digital tools and resources. Imagine my surprise when I found out that the digital resources in question drew on the research on using programming to teach elementary math, an aspect of my research interests I think of as quite different from the former. I was most struck by the finding that the provision of a digital resource led to such different lesson plans focused on the same topic. With no control group and little background information, I find it hard to credit the digital tool for the innovative thinking, but on the other hand, the results do confirm my intuition about such things. I think the role of technology in promoting creative lesson design by teachers deserves further investigation.
ADDENDUM: Notes on Methods and Measures
Research design: The researchers characterize this as an intervention study -- they are examining the effects when preservice teachers are exposed to their "half-baked scenarios".
Data collected: The data was the activities that the preservice teachers developed based on the half-baked scenarios.
Measures and Analysis: To measure the content of the activities, the researchers organized the activity by mathematical concept, and looked for similarities across activities within the same concept. Similarly, to measure how teachers used technology to stimulate student thinking, researchers organized the activities according to the technology features used and how they expected students' thinking to change based on use of those features.
Thoughts: I would not have thought about this as an intervention study, so I find it interesting that the researchers categorize it that way. I always think of intervention studies as measuring some kind of pre-post change. Perhaps this is not always so.
11. de Araujo, Z., Otten, S., & Birisci, S. (2017). Teacher-created videos in a flipped mathematics class: digital curriculum materials or lesson enactments? ZDM - Mathematics Education, 49(5), 687-699.
Rationale: The authors point out that although more studies are being published on teachers' use of digital curriculum materials, there have been no studies of teacher-created videos (created for use in a flipped classroom approach) as digital curriculum materials. This study aims to fill that gap. The authors argue that examination of the videos is important because they illustrate one way in which teachers transform a written curriculum when they enact it.
Research questions: The authors explored the following three research questions (p. 690):
Key findings and support: The key findings and evidence presented were as follows:
Reaction: One of the themes I have seen emerging from studies about teachers' relationship with digital curriculum materials (as opposed to traditional print curriculum materials) is that several lines that are clear with print materials are becoming blurred. In other articles, the discussion focused on the blurred line between teachers as developers of curriculum versus teachers as users. This article explores the same issue but frames it in terms of a blurring line between designated and enacted curriculum. I appreciated this new perspective. I have always thought of the designated curriculum as static and the enacted curriculum as dynamic, but this article points out that the former may not always be static when digital resources are involved.
ADDENDUM: Notes on Methods and Measures
Research design: Case study.
Data collected: Data sources included interviews with the teacher, classroom video, and an initial survey given to the teacher.
Measures and Analysis: The researchers measured the relationship between the teacher-created videos and the textbook by analyzing each curriculum resource according to a framework of three aspects of curriculum materials, then comparing the results. They measured the role of the teacher in the creation and use of the videos by analyzing her interviews according to a framework for curriculum use, corroborating the themes that emerged by looking for examples in lesson video. To measure the role of the videos in classroom interaction, the researchers coded for both use of the videos and reference to the videos every 3-5 seconds within the videotaped lessons.
Thoughts: This article has a very clear data collection and analysis plan. I think I'll probably refer back to it as an example.
12. Moyer-Packenham, P., & Westenskow, A. (2013). Effects of Virtual Manipulatives on Student Achievement and Mathematics Learning. International Journal of Virtual and Personal Learning Environments, 4(3), 35-50.
Rationale: The authors point out that there has been 20 years or so of research on the use and impact of virtual manipulatives in mathematics education, but there are not any meta-analyses of this work. This study aims to fill that gap, thereby adding empirical validity to discussions of the affordances of virtual manipulatives (such as their potential to promote student thinking about relationships between mathematical representations).
Research questions: The research questions are stated as follows (p. 37):
Key Findings and Support: The meta-analysis of effect sizes showed an overall moderate effect size of virtual manipulatives (VMs) in comparison to other treatments. The effects were roughly the same for VMs alone and for VMs combined with physical manipulatives, and the effect of VMs as compared to physical manipulatives was small. The largest effect size by grade band was for high school, followed by elementary school and then middle school. There is little research on the effects of VMs for particular subgroups of students (e.g., students with special needs).
The studies provided empirical evidence of five particular affordances of VMs that contribute to their positive effects:
Response: This article appealed to me on two levels. First, as a longtime curriculum developer, I am interested specifically in how to design effective tools and interventions. The authors' list of affordances of VMs read to me like a set of operationalizable design principles, which I found exciting. Second, the article discusses mathematical abstraction in various ways, which is connected to computational thinking. I think this article will be useful in my explorations of how mathematics and computational thinking are connected.
ADDENDUM: Notes on Methods and Measures
Research design: Meta-analysis and follow-up "conceptual analysis." These different designs correspond to the two different research questions.
Data collected: Through a thorough database search followed by a detailed process of excluding articles that did not meet certain requirements (not empirical, threats to validity, etc.), the authors identified 66 articles that formed the corpus of data for their study. Of those 66, 32 contained the necessary information to calculate an effect size. All 66 were used in the conceptual analysis.
Measures and Analysis: Effect sizes were calculated as a measure of the impact of virtual manipulatives compared to other instructional treatments. To identify the particular affordances of VMs that seemed connected to student learning, the researchers catalogued and categorized the study authors' explanations of positive results within their studies.
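The article does not print its effect-size formula, but the standard calculation in meta-analyses of this kind is a standardized mean difference such as Cohen's d. As a minimal sketch (the group statistics below are invented for illustration, not taken from the reviewed studies):

```python
# Hedged sketch: a standardized mean difference (Cohen's d), the usual
# effect-size measure when comparing a treatment group (here, virtual
# manipulatives) against a comparison group. All numbers are hypothetical.
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference between treatment and control."""
    # Pooled standard deviation: each group's variance weighted by its
    # degrees of freedom (n - 1).
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study: VM group scores 78 (SD 10), comparison 72 (SD 10).
d = cohens_d(mean_t=78.0, sd_t=10.0, n_t=30, mean_c=72.0, sd_c=10.0, n_c=30)
print(round(d, 2))  # 0.6, conventionally a moderate effect
```

Under the usual conventions (0.2 small, 0.5 moderate, 0.8 large), an overall d in this range would match the "moderate effect" the authors report.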
Thoughts: This paper is useful as an example of a comprehensive lit review and meta-analysis. These are two things that are quite different from the largely descriptive collection of research I have in this RDP.
13. Duncan, C., Bell, T., & Atlas, J. (2017). What do the Teachers Think?: Introducing Computational Thinking in the Primary School Curriculum. In Proceedings of the Nineteenth Australasian Computing Education Conference (pp. 65-74). New York: ACM.
Rationale: The authors point out that computer science has been introduced into the primary school curriculum in several countries, including New Zealand, where this study takes place. This study analyzes the feedback from primary school teachers about professional development and resources that have been provided to them to assist with the implementation of computer science (CS) and computational thinking (CT) concepts in their classrooms. The authors argue that the study is needed in order to understand how supports for teachers can be improved (which will translate, in theory, to better instruction for students).
Research Questions: No research question is explicitly stated, but the work seems to be guided by questions such as:
Key Findings and Support: The data for the study consisted of feedback surveys that the teachers completed after implementing each CS/CT activity. Teachers generally reported moderate to high levels of confidence. They also felt that the lessons were an appropriate level of challenge for their students; the authors argue that this shows that there are CS activities that are appropriate for elementary school. Teachers were pleased with how engaged their students were while completing the CT activities and how well the activities promoted working together in groups effectively.
Although the professional development experiences did not directly address the idea of integrating CS/CT instruction with other subjects, teachers still found many opportunities for such integration. Physical education, mathematics, and language arts were popular avenues for integration.
Lastly, researchers noted a few misconceptions that seemed to be held by the teachers. For example, teachers struggled with some of the terminology (algorithm versus program, bit versus byte).
Response: Most research on CS education is either spread across the full spectrum of K-12 or focused on middle or high school. Thus, I really enjoyed reading about this effort focused particularly on elementary school teachers. The most interesting result for me was the fact that teachers found mathematics a natural place for the integration of CT topics in elementary school, as math+CT integration is my particular interest. The questions that this article raises for me include: What did the math+CT activities look like? Did they lead to better understanding of math, CT, or both? Do teachers see CT as an approach to teaching math, see math as a context for learning CT, or espouse some other viewpoint?
ADDENDUM: Notes on Methods and Measures
Research design: No overarching research design is identified. Instead, the authors use their research design section to discuss the participants, the feedback form they filled out, and when they filled out the form. I suppose this could be considered a weakly designed efficacy study of the professional development teachers received, but I'm reluctant to put that label on it.
Data collected: The data consisted of 48 feedback forms completed by 13 teachers over the course of the study. The form included free response questions about the content of the lessons, as well as Likert-scale questions about teachers' confidence and students' level of challenge.
Measures and Analysis: The Likert-scale questions on the form were used as a self-report measure of teacher confidence and challenge level of the lessons for students. When it came to confidence, the researchers counted the overall number of each level of response and also traced changes in individual teachers' confidence over time. When it came to challenge, all but two of the teachers' responses were that the lessons were an appropriate level of challenge, so researchers couldn't trace change across time. Qualitative analysis of the open-ended responses was used to describe teachers' thoughts about student engagement, collaboration, and integration of CS with other curriculum topics.
Thoughts: This study is unique in my collection for its use of a self-report measure.
14. Hazzan, O., & Zazkis, R. (2005). Reducing Abstraction: The Case of School Mathematics. Educational Studies in Mathematics, 58(1), 101-119.
Rationale: The authors point out that the theoretical lens of "reducing abstraction" has been successfully applied to understand the cognitive processes of advanced mathematics students and computer science students during problem solving. They argue there is a need to see if the same framework can be used to understand the thinking of students engaging in school mathematics. If so, the notion of "reducing abstraction" may reveal itself to be a widely applicable explanatory framework.
Key Findings and Support: The authors examined preexisting studies of student thinking about various school mathematics topics, as well as data from their own work with preservice elementary school teachers. They found examples of reasoning that could be explained by reduction of three types of abstraction discussed in the mathematics education literature:
Response: I read this article because it was framed as bringing a shared framework used in mathematics and computer science to bear on school mathematics; this seemed intimately related to my interest in integrating computer science into school mathematics. While I bought the arguments in the article, and do think that students' reasoning processes involve reduction in abstraction, I'm left wondering about the instructional implications. Although the authors never overtly claim that students' reductions in abstraction are problematic -- indeed, they are careful to point out that students often obtain correct answers via these means -- there is an underlying implication that these solutions are inelegant. I'm left with these questions: Is abstraction, rather than reduction of abstraction, the overall educational goal? What productive role can reduction of abstraction play in problem solving? Are these thinking patterns (reductions in abstraction) something to be built upon, or avoided?
ADDENDUM: Notes on Methods and Measures
Research design: This piece is described as an attempt to illustrate the applicability of the idea of reducing abstraction in understanding how students cope with abstract concepts in school mathematics. The researchers accomplish this goal through a literature review combined with reporting on some anecdotal evidence from the researchers' practice.
Data collected: The authors reinterpret data presented in six previously published articles. They also use some of their own personal experiences that have not been previously reported.
Measures and Analysis: This article has neither explicit measures nor detailed analysis plans. Rather, it aims to establish the utility of a particular explanatory theory for students' mathematical thinking and behavior.
Thoughts: I suspect that some critics might claim that this is a non-research paper. However, I include it in this section because the authors themselves frame it as research; there is a specific header called "Research Method".
15. Muller, O., & Haberman, B. (2008). Supporting Abstraction Processes in Problem Solving through Pattern-Oriented Instruction. Computer Science Education, 18(3), 187-212.
Rationale: The authors point out that abstraction is a multi-faceted process that is critical to computer science, and so it is important to find effective ways of helping beginning computer science students to develop abstraction skills. They review a framework that articulates three aspects of computational abstraction (pattern recognition, black-boxing, and structure identification) and explain that this study examined how a particular pedagogical approach, called pattern-oriented instruction (POI), was associated with students' facility with abstraction.
Research Questions: The authors did not explicitly state a research question, but frame their paper as "a study designed to evaluate how POI influences the acquisition of abstraction skills." (p. 194)
Key Findings: The study is a treatment-control comparison, where students in the treatment group took an introductory computer science course organized around computational patterns (POI) -- a pattern in this context is a schema for recognizing a particular kind of problem, such as "check whether any items in a list meet a condition." Students in the control group took a course that covered the same content, but was organized around computational constructs (such as loops and conditional statements). The overall finding was that students in the treatment condition exhibited more advanced abstraction skills. The evidence presented in support of this finding is as follows:
Response: I read this piece not because I'm particularly interested in how to teach a CS1 course to high schoolers (the subjects in the study), but because this RDP, along with my research assistantship, has led me to wonder whether abstraction might be a shared practice through which mathematics and CS might be integrated. I read this article to gain a better understanding of what abstraction means in the context of computer science. I was struck by how "more abstract" meant "less detail in explanations" in the context of this study, and how the less detailed explanations were therefore considered superior. This makes sense when considering that abstraction is about stripping away detail, but throughout the article I found myself thinking that most math educators would probably consider more detailed explanations to illustrate more thorough understanding of concepts. I'm left with this question: When studying abstraction, how do we know whether a brief description is evidence of good abstraction skills versus limited understanding?
ADDENDUM: Notes on Methods and Measures
Research design: I would classify this design as a quasi-experiment. There was an experimental and a control group, and the researchers go to some lengths to describe the ways in which these groups were matched and the intervention was isolated as the independent variable. However, based on the information presented in the article, I do not believe the groups of students were randomly assigned. A subset of students were randomly sampled for one of the measures, but this is different from random assignment.
Data collected: Two forms of data were collected: (1) student performance on a questionnaire given as a mid-term, and (2) a categorization assignment given to a randomly selected subset of the students in the study.
Measures and Analysis: The researchers used the following aspects of the data as measures of students' abstraction abilities: (1) descriptions of solutions and categories of problems, (2) identification of similarities among problems in the categorization assignment, and (3) identification of subproblems within a complex problem. Lack of implementation details in (1) was considered evidence of higher levels of abstraction than inclusion of such details. Attention to the main task of the problem and to structural details in (2) was considered evidence of a higher level of abstraction than attention to less important tasks or surface features. Identification of both the main task and nested subtasks in (3) was considered evidence of a higher level of abstraction than some partial version of such identification.
Thoughts: As noted above in my initial response, I was skeptical of lack of detail as a measure of anything meaningful. However, now that I return to this, I must give the researchers credit for looking at abstraction in multiple ways. I'm less skeptical of their overall conclusions than I was initially.
16. Bouck, E., Park, J., Sprick, J., Shurr, J., Bassette, L., & Whorley, A. (2017). Using the virtual-abstract instructional sequence to teach addition of fractions. Research in Developmental Disabilities, 70, 163-174.
Rationale: The authors point out that fractions are a particularly difficult topic for students to learn, and point to the past success of the Concrete-Representational-Abstract (CRA) framework for teaching difficult content to students with mild intellectual disabilities. However, they point to two potential problems with using the CRA approach to teach fractions. First, use of physical manipulatives (in the Concrete phase) can be stigmatizing for students. Second, students find it difficult to draw representations for fractions in the Representational phase. Thus, the authors propose testing a Virtual-Abstract sequence for teaching fractions to students with mild intellectual disabilities, wherein virtual manipulatives are used instead of physical manipulatives, and the Representational phase is skipped.
Research Questions: From p. 165: "(a) To what extent do middle school students with mild intellectual disability improve their performance in adding fractions with unlike denominators via the VA instructional sequence?; (b) To what extent do middle school students with mild intellectual disability generalize their performance solving addition of fractions with unlike denominators when no instruction is provided?; and (c) What is the perception of middle school students with mild intellectual disability regarding the VA instructional sequence?"
Key Findings: The intervention improved three out of the four students' performances on the fraction addition tasks. (The authors describe this by saying there was a functional relation between the intervention and students' performance.) For the fourth student, the manipulative seemed to help him create equivalent fractions, but he was unable to successfully move into the abstract phase without more time with the app.
Response: I like this article as an example of how abstract understanding of mathematics is perceived as the end goal of instruction.
ADDENDUM: Notes on Methods and Measures
Research design: Experimental design called a "multiple probe across participants" design. The four students in the study all started the baseline phase simultaneously, but they began the treatment phase at different times. Although this is not explicitly stated, I presume that the reason for this is to isolate the effects of the treatment from the rest of the students' school experiences, and avoid the ethical concern of denying the treatment to some of the students.
Data collected: The article has this line: "Data were collected via event recording." I am not entirely sure what this means. Based on the rest of the text of the article, I believe this refers to the fact that the researchers collected students' written work.
Measures and analysis: The Virtual-Abstract instruction sequence was the independent variable, and percent accuracy on the independent practice portion of lessons focused on addition and subtraction of fractions with unlike denominators was the dependent variable. The latter was a measure of student learning.
Thoughts: This is an interesting experimental design. I have seen examples of ABAB designs, but never this kind of change in timing across subjects.
17. Statter, D., & Armoni, M. (2016). Teaching Abstract Thinking in Introduction to Computer Science for 7th Graders. In Proceedings of the 11th Workshop in Primary and Secondary Computing Education - WiPSCE '16 (pp. 80-83). New York: ACM.
Rationale: The authors point out that abstraction is a key skill in computer science. There is a debate over how to teach it, however, and whether it is teachable at all. The authors conducted this study to test whether an explicit approach to teaching abstraction was effective for 7th graders.
Research Questions: From p. 81:
"1. How did the intervention affect the students’ ability to understand the need for using a verbal description of the algorithm before writing a program?
2. How did the intervention affect the students’ ability to add quality documentation in their scripts?"
Key Findings: Students who experienced the intervention were significantly more likely to write a verbal description of their solutions on the post-test, and also significantly more likely to give an answer that was only a verbal description. Students who experienced the intervention also tended to write more documentation in their scripts. All of these results were interpreted as indicating that the treatment group was more skilled at abstraction than the control group.
Response: The most interesting aspect of this article, in my opinion, was the fact that the authors present it as proof that abstraction is not only teachable, but teachable to students as young as 7th graders. I've read a good amount of literature about abstraction in recent weeks as part of my RA position, and I was surprised and rather dismayed to find that there are arguments that abstraction is not teachable and that tests of abstraction should be used as a screening mechanism for those entering the CS field. That seems elitist and contrary to CS For All, so I am glad to see that this piece pushes against that narrative.
ADDENDUM: Notes on Methods and Measures
Research Design: This was a mixed-methods quasi-experiment. The researchers compared the abstraction skills of one set of classes that used a particular textbook for teaching CS, and another set of classrooms that used the same textbook and also a particular framework for teaching abstraction. It was a treatment-control study without randomization.
Data Collected: The data included pre- and post-tests of each group, the final projects of each student, and interviews and observations. The first two data sources were quantitative. The latter two were qualitative.
Measures and Analysis: To measure abstraction skills, the authors counted the number of times that students wrote verbal descriptions of their solutions on the post-tests, and how many times they wrote only verbal descriptions. They compared means of the treatment and control groups. They also looked for evidence of abstraction in the interview transcripts (though with no explicit indication of how they defined such evidence). Lastly, the authors used students' use of documentation as a measure of abstraction skills, arguing that thinking of the algorithm behind a script will naturally lead to more use of documentation.
Thoughts: I am not sure that I buy the argument that use of documentation is a measure of abstraction skills. However, I appreciate that the authors provided a particular rationale for their choice for me to critique.
18. Statter, D., & Armoni, M. (2017). Learning Abstraction in Computer Science: A Gender Perspective. In Proceedings of the 12th Workshop on Primary and Secondary Computing Education (pp. 5-14). New York: ACM.
Rationale: The authors point out that abstraction has been identified as an important CS skill worthy of early treatment. As they conducted a study of a particular framework for teaching abstraction, they noticed some interesting trends related to gender. They explored those trends in this study.
Research Questions: From p. 7:
"1. Do boys and girls that study an introductory CS course in 7th grade differ regarding their CS abstraction skills?
2. Is the impact of Armoni's framework for teaching abstraction in CS, when integrated into an introductory CS course for 7th grade, different for boys and girls?"
Key Findings: On two of the six measures of abstraction (use of verbal descriptions and understanding of initialization), girls were found to outperform boys regardless of treatment group, although the girls in the treatment group had a particularly strong advantage over the boys. In the discussion, the authors speculate that the results could be related to previously-established results suggesting girls have stronger verbal abilities than boys.
Response: I love this study. It has the potential for impact in multiple ways. It shows that abstraction does indeed seem to be related to overall CS knowledge. It shows that abstraction is indeed teachable, and that it is teachable to students as young as 7th grade. It suggests that girls, an underrepresented group in CS, may actually have advantages over boys under certain conditions, which has the potential for broadening participation. It even connects CS to verbal abilities, which has the potential of shifting CS toward being seen as a widely applicable subject instead of only a STEM-oriented one. I think this will be the subject of my video.
ADDENDUM: Notes on Methods and Measures
Research Design: This study was a mixed-methods quasi-experiment. (This is additional data analysis from the study described in annotation 17.)
Data Collected: The researchers administered pre- and post-tests to all students in the study, and also conducted interviews with a subset of students.
Measures and Analysis: The authors describe six distinct measures of abstraction abilities: writing verbal descriptions, writing only verbal descriptions, explaining solutions, using black-box arguments, initialization, and reference to level of abstraction. All of the measures were related to the conceptual framework for abstraction discussed earlier in the paper. The measures were embedded in a pre- and post-test taken by the participants, and ANOVA tests were used to compare means across various groups. The interview data were also coded with these six abstraction ideas as the lens.
Thoughts: I really liked the multifaceted and detailed description of their measures in this study! Although I did not find the argument for every measure convincing, I thought that overall they did a good job of operationalizing a slippery concept.
1. Hoyles, C., & Noss, R. (2003). What can digital technologies take from and bring to research in mathematics education? In A. J. Bishop, M. A. Clements, C. Keitel, J. Kilpatrick, & F. K. S. Leung (Eds.), Second international handbook of mathematics education (pp. 323-349). Springer Netherlands.
Main Arguments and Support: Hoyles and Noss conducted a literature review with attention to research with digital tools that somehow make mathematical thinking of users visible, classifying those tools as either programmable, as in the case of microworlds, or expressive, as in the case of dynamic geometry systems. Their main arguments, along with the support they provide for them, are as follows:
Response: I chose this review as a starting point for investigating the first of my research questions, which is "How do students think about mathematics concepts when engaging with dynamic representations and manipulatives?" The article served as a useful primer on this, particularly due to its careful attention to explaining how tools shape learners' thinking. I feel that I have a better understanding of what the particular issues to be explored are, though I find myself wishing for more attention to how these issues have been studied. Following back some of the citations in the article might help with this.
I was pleasantly surprised by another aspect of the article, which is the way it got me thinking about the overlap among my three research interests (digital tool use by students, how tech can influence curriculum adaptations made by teachers, and the relationship between computational thinking and mathematics). I originally thought of this article as relating only to the first issue. However, it highlighted the critical role of the teacher in mediating tool use, bringing in my second interest, and characterized tools along a continuum from "black boxes" to open and programmable, arguing in particular for more programmability in tools. This brings in my third interest: the relationship of mathematics to CT. All in all, this was a great first read and has already helped me to see my three research interests as less isolated.
2. Remillard, J. (2005). Examining key concepts in research on teachers' use of mathematics curricula. Review of Educational Research, 75(2), 211-246.
Main Arguments and Support: The author reviews work on teachers' use of mathematics curriculum materials in order to develop a framework for studying teachers' interactions with, rather than simply use of, curriculum materials. To form this framework, she reviewed three bodies of research on (1) curriculum use, (2) teaching, and (3) curriculum materials. Her main arguments are as follows:
Response: I was particularly struck by Remillard's two-stage curriculum development process. For most of my career, I was involved in the first stage and did not view implementation as a stage of design. I think conceptualizing curriculum materials as only "half-designed", so to speak, has a lot of power for influencing my thinking in terms of how to make curriculum materials more flexible.
3. Drake, C., Land, T., & Tyminski, A. (2014). Using Educative Curriculum Materials to Support the Development of Prospective Teachers' Knowledge. Educational Researcher, 43(3), 154-162.
Main Arguments and Support: The main argument the authors present in this article is that educative curriculum materials (in this case, for elementary mathematics) should be used as tools in pre-service teacher education programs to support pre-service teachers' development of the funds of knowledge that are needed for teaching (such as Shulman's curricular knowledge and Ball's mathematical knowledge for teaching). They outline the argument as follows:
The authors go on to present five design principles for how to use educative curriculum materials in teacher education: (1) Orient pre-service teachers to read educatively; (2) Provide lenses through which pre-service teachers can interpret the materials; (3) Provide scaffolds for reading the materials; (4) Have pre-service teachers engage with sequences of lessons rather than isolated lessons; and (5) Have pre-service teachers compare and contrast curricula.
Response: Despite the fact that I am familiar with the curriculum enactment research that shows teachers attend to different aspects of curricula according to their backgrounds and other factors, I have never considered before that these differences would be along a continuum of reading the materials in an educative way or not. This is a really interesting change in perspective for me. One of my interests is in finding ways to improve educative curriculum materials, and while I already knew that at some point curriculum developers have no control over the ways their materials are used, it never occurred to me before that teacher ed programs might be a way to influence their use.
4. Pepin, B., Gueudet, G., & Trouche, L. (2017). Refining Teacher Design Capacity: Mathematics Teachers' Interactions with Digital Curriculum Resources. ZDM - International Journal of Mathematics Education, 49(5), 799-812.
Main Arguments and Support: The authors make three key claims: one about the nature of teachers' design capacity, a second about how access to digital resources changes teachers' design processes, and a third about how such access provides new opportunities for developing teachers' design capacity. The three arguments and supporting evidence are as follows:
Response: Although I do think the authors have identified some key elements of teachers' design experiences that change via access to digital resources, I think this article only scratched the surface of what such changes might be. The digital resources in question here were mostly related to online communication and collaboration, or to the availability of additional material on the Internet. From what I was able to tell, the lessons as delivered to students did not involve dynamic graphs, manipulable representations, or any other particular technological affordances. Consideration of these kinds of elements has the potential to change teachers' design work even further than what is described here. This does not invalidate anything presented in this article; it simply makes me curious to see how these ideas could be pushed further.
5. Choppin, J., Carson, C., Borys, Z., Cerosaletti, C., & Gillis, R. (2014). A Typology for Analyzing Digital Curricula in Mathematics Education. International Journal of Education in Mathematics, Science and Technology, 2(1), 11-25.
Main Argument and Support: The authors present an analysis of six digital curriculum programs, using the results to argue that most available programs are not taking advantage of the potential of technology to support learning. The support they offer for this claim is as follows:
Response: Although I was hoping for a different result, this analysis of available curricula did not surprise me. The lack of materials that push the boundaries of what is possible is a large part of what made me want to come to graduate school. Since I have been here, though, I have read a handful of articles that discuss programs that do seem to push the boundaries a bit, which made me wonder why none of those ended up in Choppin and colleagues' analysis. Looking at the authorship of the articles I've read for this RDP, I'm suddenly struck that most of the digital curriculum work seems to be happening in Europe (Drijvers, Pepin, Gueudet).
6. Pepin, B., Choppin, J., Ruthven, K., & Sinclair, N. (2017). Digital curriculum resources in mathematics education: foundations for change. ZDM - International Journal of Mathematics Education, 49(5), 645-661.
Main arguments and support: In this introduction to a special issue on digital curriculum resources (DCR), the authors review research on teachers' and students' use of DCR, especially as related to their use of traditional curriculum resources. Their main arguments are as follows:
Response: I appreciated this article for the different frames it gave me for thinking about this kind of research. The four types of DCR, for example, were each linked to an example I am familiar with and drew my attention to differences between them I may not have specifically attended to before. I think my interest lies, in particular, with interactive textbooks.