Research about how people learn can become very complex, very quickly. Just about everything the researcher wants to learn about comes embedded in learners' and educators' real-world personal histories and attitudes, in classroom, school, and community contexts, and more.
I built the BeeSpace Education Research Analysis Platform (BERAP) as a tool for managing data about teaching and learning, and for developing research understandings from the dataset. BERAP took shape from 2005 to 2008 and offered crucial support for a study of learning that became both a key output of a multi-year, multi-million-dollar scientific research project, and the focus of my own doctoral thesis research.
As full-time coordinator for a National Science Foundation-funded project at a major research university over a five-year period, I handled two sets of responsibilities. The first was to provide key administrative support for the project’s research scholars, who directed the project as a whole – helping to manage budgets, planning meetings, provisioning offices and labs, and so forth. The second major set of responsibilities made use of my skills as a post-coursework PhD student in educational psychology, and positioned me as manager and researcher for the project’s educational outreach to middle and high school students. These outreach efforts became not only a key output for the overall project, but also the topic of my own dissertation research.
Research about how people learn can become very complex, very quickly. In part, that’s because just about everything a researcher wants to know about comes embedded in real-world contexts. Students aren’t just grades and test scores – they bring personal, social, and cultural histories to their studying. Their ways of learning relate not only to individual capabilities, but to their interests and attitudes. They don’t learn only by themselves, but in classrooms, with particular teachers and classmates. Those classrooms and teachers are located in particular schools and communities, with particular kinds and levels of social support, and so on.
Certainly, some kinds of quantitative studies are strictly constrained; they might for instance assign students randomly to an experimental-style or control-style classroom condition, in order to set up claims that differences in post-test scores can be attributed solely to a unit of classroom instruction. But for qualitative research that aims to account for rather than control for real-world differences, and for design-based, iterative research that aims to develop and understand curricula and technologies for enhancing teaching and learning, the research design might involve a multiplicity of sources and kinds of data, and multiple means and levels of analysis. Crucial challenges are to collect all that information in carefully chosen and documented ways, to build enduring archives that contain only the definitive versions of that data, and to construct connecting scaffolds across the data in those archives.
A digital tool that manages all this effectively will provide research scholars with a vantage point that lets them engage authentically with research content and build both theoretical and highly practical understandings. The tool that I developed to manage this was the BeeSpace Education Research Analysis Platform.
Qualitative data analysis (QDA) software has a brief but productive history as a key tool for researchers in education, sociology, anthropology, and related academic disciplines. Chiefly, it offers a means for researchers to select stretches of text within and across documents, and to apply their own codes to those text selections. Once codes are applied, QDA software offers an interface for searching and sorting across the data sources that are part of the dataset the researchers have constructed. Initially, QDA was associated closely with a research methodology called grounded theory, which utilizes codes as the basis for building up qualitative theoretical understandings. It is now commonly used for studies involving content analysis, discourse analysis, and design-based research. The earliest versions of QDA tools, dating from the 1970s, operated only on plain-text sources, but current versions can be used for tagging fully formatted text, images, spreadsheet cells, and time-linked videos and transcripts. Codes can be linked and mapped, queries can be saved and applied, and field notes and annotations can be added to sources and coded sections. At the time I built BERAP a decade ago, most QDA software ran only on Windows computers. Wikipedia now lists more than a dozen currently maintained QDA software packages, most of them commercial and proprietary; most run on Windows, Macintosh, and/or Linux computers, and some also offer web-based and/or portable OS-based versions.
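In data terms, the core operations described above are simple: a code application links a researcher-assigned label to a span within a source document, and a retrieval query gathers every span carrying a given code, across all sources. A minimal sketch in Python may make this concrete (all class and method names here are illustrative, not MaxQDA's actual data model):

```python
from dataclasses import dataclass

# Illustrative sketch of the core QDA data model: codes applied to spans
# of text within source documents, then retrieved by code across sources.
# (Names are hypothetical; real QDA packages use far richer models.)

@dataclass
class Source:
    name: str
    text: str

@dataclass
class CodedSegment:
    source: Source
    start: int   # character offset where the coded span begins
    end: int     # character offset where the coded span ends
    code: str    # researcher-assigned code label

    @property
    def excerpt(self) -> str:
        return self.source.text[self.start:self.end]

class Project:
    def __init__(self):
        self.sources: list[Source] = []
        self.segments: list[CodedSegment] = []

    def add_source(self, name: str, text: str) -> Source:
        src = Source(name, text)
        self.sources.append(src)
        return src

    def code(self, source: Source, start: int, end: int, code: str) -> None:
        self.segments.append(CodedSegment(source, start, end, code))

    def retrieve(self, code: str) -> list[CodedSegment]:
        # The basic QDA query: every segment carrying a code, across sources.
        return [s for s in self.segments if s.code == code]

# Usage: code one stretch of an interview transcript, then retrieve it.
p = Project()
interview = p.add_source("interview1", "I liked working with real bees.")
p.code(interview, 0, 31, "authenticity")
print([seg.excerpt for seg in p.retrieve("authenticity")])
```

Everything else a QDA package offers (annotation, code mapping, saved queries) builds on this span-plus-label foundation.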
At the time I built BERAP, the best-supported commercial QDA packages included NVivo, Atlas.ti and MaxQDA, all of which are still available and continue to be improved by their developers. I chose MaxQDA as the platform for BERAP, because of the combination of features, dependability, and ease of use it offered. These days, any of the current commercially available tools would be a solid basis for developing a BERAP-like resource.
MaxQDA presents its dataset through a strong set of visual conventions, consistent with the look of QDA tools in general. Top-level navigation menus and icons link to tools for creating and selecting Projects, adding Documents, creating sets of Codes, and supporting various analyses and reports.
Individual windows on the screen include a Document System tree, a Document Browser in which a selected item from the Document System can be viewed and coded, a Code System that shows the entire set of codes and enables selecting a particular code, and a Coded Segments window that enables the researcher to view the content of individual segments that match the Code System selection.
The overall screen is feature-rich and quite busy-looking. A good deal of user training and familiarization are needed to add documents to the system, to develop and apply codes, and to use the various windows effectively for exploring the data and developing research analyses.
Throughout the BeeSpace study, its lead project researchers were insistent that educational outreach ought to connect with the leading-edge scientific research that the project was carrying out, and with the involved scientists themselves. This led to trying out several different kinds of educational outreach over the project’s five-year life, and subsequently to using each instance of planning, instructing, and reflecting on lessons learned as a starting point for building the next version of outreach. The sixth and final iteration of outreach took the form of a 20-hour, week-long summer program for a dozen high school students, in which they progressed from learning about the honeybee as a social insect and agricultural resource, to understanding and appreciating how research scientists studied it to learn about genomic bases for complex social behaviors in insects, higher animals, and humans.
As each instance of educational outreach took shape, traces of both student and project learning were collected, considered, and interpreted. These ranged from text-based and visual curricular resources, to students’ work, to interviews with teachers, students and curriculum developers before, during and after each outreach instance, to recordings of the educational interactions themselves.
Consistent with the use of QDA software in grounded theory methodology, I developed BERAP to construct an archive of educational research data, develop codes to identify emerging themes from the data, and share insights from the outreach instances with fellow BeeSpace researchers and with the larger scientific community in conference presentations and papers. Together, we discovered and identified trajectories of both student and project learning that carried both within and across the instances of outreach.
As the developer and expert user of BERAP, I came to recognize its power for managing the outreach project’s research dataset and developing conceptual understandings from it. At the same time, I came to understand that this was to be a tool for personal use as a researcher, rather than a sharable resource whose full use I could teach to others on the project team. Moreover, I realized that concern for protecting research participants’ privacy, a key consideration in any academic research, would need to be balanced against the need to share findings from the dataset. In addition, I recognized that the time demands of the coding, retrieving, and modeling processes in QDA software had to be balanced against the quality of insights that the process could yield.
At this point in development of BERAP, I recognized its utility both for collecting and archiving key research data, and for coding it in ways that afforded conceptually rich research. I had the beginnings of a codebook that I could use for new data as I continued to develop the dataset for the various instances of project outreach, and the means for extending and refining the codebook to account for new understandings and insights as they emerged.
This led me to refine the research so as to focus on two key questions: how did the project learn from its multiple instances of outreach development and delivery, and how did students learn from a particular outreach instance?
With these developing understandings in mind, I came to regard BERAP as a tool for exploring the key questions of student learning and project learning, and for building and modeling insights on the basis of those explorations.
In effect, these separate questions led to two separate, overlapping studies in the dissertation research, each of them supported by BERAP. One research direction resulted in identifying six different phases or instances of BeeSpace educational outreach research, and describing their qualities in terms of others’ prior research into science education outreach. I came to recognize that in their final form, BeeSpace education initiatives could best be described not as “outreach,” in fact, but as “pull-in”; that is, the overarching concern of the scientists in charge of the project was not so much to reach out to school-age students and share simple, high-interest science related to the study, but instead to work toward “pulling” students into the ways in which the scientists themselves viewed their work. This had implications for what kinds of outreach the scientists considered to be valuable or not, for the academic preparedness of students they endeavored to involve, for their readiness to open their research facilities in varied ways, and for the kinds of curriculum they were interested in investing effort in. BERAP was a key tool for exploring this research direction.
The second research direction utilized BERAP to explore connections between teaching and learning, and between concept and execution, from the last of the six outreach efforts, the BeeSpace Summer 2008 Education Week (BSEW08). BERAP-supported exploration yielded a set of insights about teaching and learning that centered on seven “aspects of meaningfulness” in which BSEW08 influenced students’ learning; namely, through attention to what I came to call scientific authenticity, learning trajectories, coherence of experience, necessary simplifications, connecting worlds, importance of setting, and importance of set of learners. Data explorations in BERAP identified strong linkages for all seven of these areas between the project leaders’ intended workshop design, and the ways in which the students themselves reported learning from the course. Additional exploration of the themes of meaningfulness and the seven identified aspects came from the literature of science education, and from a close look at the actual BSEW08 curriculum. The latter was afforded through a separate resource which I also developed, the BSEW08 online curriculum, the focus of another entry in my professional portfolio.
As a QDA tool, BERAP proved invaluable for managing, exploring and analyzing highly varied qualitative data from a multi-part study of science teaching and learning. Its uses included archiving research data definitively; exploring that data iteratively for emerging themes; tracing those themes across instances of outreach; and, for a particular outreach instance, developing a model of science education that connects teaching goals with student outcomes. BERAP lent itself both to intensive exploration of the data itself, and to drawing connections between identified themes and the corpus of science education research.
At the same time, BERAP took shape as a tool for expert use – the expert, in fact, being me. There were several reasons for this. First, in my positioning both as the project coordinator and the education researcher, I was the primary collector and archivist for definitive versions of project data, ensuring that only final-form versions of curricular documents, meeting agendas and notes, interview transcripts, and presentation videos made it into the authoritative dataset; in fact, deciding what is to be included in, versus excluded from, a developing dataset must be accomplished before connecting and analyzing can begin, and the process can at times be surprisingly messy. Second, the positioning of the research as a dissertation study required that I take ownership of the key research directions, analyses, and drafting, as well as the final write-up. This did not make me a lone eagle – in fact, the quality of the final study depended crucially on my collaborating closely with project leads, curriculum developers, workshop educators, and students’ classroom teachers, among others; in addition, being in a research academic environment put me in contact with some of the world’s leading science educators and educational theorists, beginning with my thesis research director, and I took care to consult with them closely and regularly. Third, effective use of QDA applications like MaxQDA requires thorough familiarity with both interface and data, as they typically prioritize richness of features over ease of use. Fourth, the imperative in academic research to treat information gathered from human subjects, including data about learning, as confidential to the research team also argued for BERAP remaining a tool for my expert use.
Even so, the ability of QDA software to let a researcher retrace their steps, share reasoning, and provide direct warrants for interpretations of the data to fellow researchers is a crucial affordance; this in fact may be one of its key contributions to improving the methodologies of both qualitative case study research and mixed-methods, iterative design-based research. My dissertation research blended both qualitative and design-based methodologies in what I termed a design-oriented case-study approach, and BERAP contributed mightily to its success.
BERAP took shape in tight integration with the BeeSpace project and my own thesis research. Although the BERAP dataset could be revisited as part of future research studies, the fact that BeeSpace existed only for the five years of its project funding makes this less likely than it would be if BeeSpace were a longstanding program of research. Still, the enduring value of BERAP is two-fold: its use has yielded a set of understandings about learning that can be explored further in future research into effective science outreach by leading-edge research projects; and BERAP itself demonstrates that QDA software can contribute usefully to data management and analysis in iteratively conducted, design-oriented educational research. I have explored and advanced its use in subsequent research initiatives, such as with the US Department of Education-supported study of Reading and Evidence-based Argumentation in the Disciplines (READi) at Northwestern University.